Researchers at the University of California, Berkeley have decoded and reconstructed dynamic visual experiences, in this case Hollywood movie trailers, using a cutting-edge mix of brain imaging and computer simulation. The work hints at futuristic scenarios in which it might one day be possible to observe the mental images of a patient in a coma, or to watch one of your own dreams on YouTube. The study was published in the journal Current Biology.
Using functional magnetic resonance imaging (fMRI) and computational models, the Berkeley researchers succeeded in decoding and reconstructing dynamic visual experiences while the subjects in the study watched trailers for Hollywood movies.
So far, available technology has only been able to reconstruct moving images that people had already seen. This breakthrough, however, opens the way to reproducing the movies inside our heads that no one else sees, such as dreams and memories. “This is a major step toward rebuilding internal images,” says Professor Jack Gallant, a neuroscientist at UC Berkeley and co-author of the study.
Over time, practical applications of this technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, patients in a coma, and people with neurodegenerative diseases. It could also lay the groundwork for brain-machine interfaces that would let people with cerebral palsy, for example, guide computers with their minds. The researchers point out, however, that the technology is decades away from allowing users to read other people’s thoughts and intentions.
Previously, Gallant and his colleagues had recorded brain activity in the visual cortex while a subject viewed black-and-white photographs, building a computational model that allowed them to predict with striking accuracy which image the person was looking at.
In this latest experiment, the researchers tackled a much harder problem: decoding the brain signals generated by moving images. “Our natural visual experience is like watching a movie,” explains Shinji Nishimoto, lead author of the study and a researcher in Gallant’s lab. “For this technology to have wide application, we need to understand how the brain processes dynamic visual experiences.”
Nishimoto and two other members of the research team served as subjects for the experiment, since the procedure requires volunteers to remain motionless inside the scanner for hours. They watched two separate sets of Hollywood movie trailers while fMRI measured blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small three-dimensional cubes known as volumetric pixels, or ‘voxels’. “We built a model for each voxel that describes how the motion information in the movie is mapped onto brain activity,” explains Nishimoto.
The brain activity recorded as subjects viewed the first set of clips was fed into a computer program, second by second, to associate the visual patterns of the film with the corresponding brain activity.
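The fitting step described above can be sketched as a per-voxel regression that maps the film’s visual features, second by second, to that voxel’s measured response. The sketch below uses simulated data and plain ridge regression; the feature matrix, dimensions, and regularization value are illustrative assumptions, not the study’s actual motion-energy model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: visual features of each movie second
# (n_seconds x n_features) and the measured fMRI response of one voxel.
n_seconds, n_features = 600, 50
X = rng.standard_normal((n_seconds, n_features))       # film features, second by second
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.1 * rng.standard_normal(n_seconds)  # simulated voxel activity

# Fit one linear encoding model for this voxel with ridge regularization,
# associating visual patterns with brain activity. The real study fits
# such a model for every voxel in the visual cortex.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# The fitted weights let us predict this voxel's response to any new clip.
y_pred = X @ w
corr = np.corrcoef(y, y_pred)[0, 1]
```

Repeating this fit over tens of thousands of voxels yields a model that can predict the whole brain’s response to a clip it has never seen, which is exactly what the testing stage below exploits.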
Similarly, the brain activity evoked by the second set of clips was used to test the movie-reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity each clip would evoke in a subject.
Finally, the 100 clips that the computer program judged most similar to the clip the subject had actually seen were merged to produce a blurry but continuous reconstruction of the original film.
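The selection-and-merge step can be illustrated with a small simulation: score every candidate clip by how well its predicted brain response matches the measured one, keep the top 100, and average them. The library size, voxel count, and the per-clip “frames” below are all illustrative stand-ins (the real study compared against 18 million seconds of YouTube video).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: predicted brain responses for a library of candidate
# clips (10,000 here, standing in for 18 million YouTube seconds), each a
# vector over voxels, plus the measured response to the test clip. Clip
# 1234 is planted as the true match, with measurement noise added.
n_clips, n_voxels = 10_000, 200
predicted = rng.standard_normal((n_clips, n_voxels))
measured = predicted[1234] + 0.3 * rng.standard_normal(n_voxels)

# Score each candidate by the correlation between its predicted response
# and the measured response.
pz = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
mz = (measured - measured.mean()) / measured.std()
scores = pz @ mz / n_voxels

# Keep the 100 best-matching clips and average their frames to produce
# the blurry-but-continuous reconstruction described above; dummy
# per-clip frame vectors stand in for real video frames.
top100 = np.argsort(scores)[-100:]
frames = rng.standard_normal((n_clips, 64))   # hypothetical per-clip frames
reconstruction = frames[top100].mean(axis=0)
```

Averaging 100 near-matches rather than picking a single winner is why the published reconstructions look ghostly and blurred: shared structure survives the average while clip-specific detail washes out.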
Reconstructing films from brain imaging has been challenging because the changes in blood flow measured with fMRI are much slower than the neural signals that encode the dynamic information in films. For this reason, most previous attempts to decode brain activity have focused on static images.
“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and the blood-flow signals,” says Nishimoto, adding that “scientists need to understand how the brain processes the dynamic visual events we experience in everyday life, but for that we first have to understand how the brain works while we are watching movies.”
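The two-stage idea can be sketched in a few lines: stage one is a fast underlying neural response to the movie, and stage two smears that response through a slow hemodynamic response function (HRF) to mimic the sluggish blood-flow signal fMRI actually measures. The gamma-shaped HRF and all timings below are a common textbook approximation, not the study’s exact model.

```python
import math
import numpy as np

def hrf(t, peak=5):
    """Simplified gamma-shaped hemodynamic response peaking ~5 s after
    the neural event (an illustrative approximation)."""
    return t ** peak * np.exp(-t) / math.factorial(peak)

t = np.arange(0.0, 20.0)          # 20-second kernel, 1-second sampling
kernel = hrf(t)

# Stage 1: a brief neural burst 3 seconds into a 60-second scan.
neural = np.zeros(60)
neural[3] = 1.0

# Stage 2: the measured BOLD signal is the neural response convolved
# with the HRF, so its peak lags the neural event by several seconds,
# which is exactly why decoding fast movies from slow fMRI is hard.
bold = np.convolve(neural, kernel)[:60]
```

Modeling the two stages separately lets the decoder invert the slow blood-flow blur and recover the fast neural signal underneath, which is what made movie reconstruction feasible at all.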