It may sound like an idea out of a bad science fiction movie, but scientists at UC Berkeley have designed a system that can decode and then recreate human visual perception. This means that imagery can be reconstructed from activity ‘inside’ the minds of people.
Jack Gallant, a UC Berkeley neuroscientist, said in ‘Current Biology’: “This approach provides a platform to reconstruct the internal and dynamic brain thought processes.”
His blog page adds “The research program in my lab reflects a tight integration of three distinct approaches: neuroscience experiments involving both classical electrophysiology and functional neuroimaging (fMRI); statistical analysis using methods adapted from nonlinear system identification and nonlinear regression; and theoretical modeling. Much of our research uses modern statistical tools to fit quantitative computational models that describe how visual stimuli elicit brain activity.
Statistical tools drawn from classical and Bayesian statistics and machine learning are used to fit appropriate computational models to these data. The resulting models describe how each element of the visual system (e.g., a neuron, a voxel or an entire visual area) encodes information about the visual world. Models are evaluated both by statistical significance and by their ability to predict brain responses under new conditions. This second criterion, accurate prediction, is the gold standard of science and is fairly unique to our approach.”
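The quote above describes fitting an encoding model per voxel and judging it by how well it predicts brain responses to new stimuli. As a rough, hypothetical illustration (not the Gallant lab's actual code or data), a linear encoding model can be fitted with ridge regression and scored by its prediction correlation on held-out data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: stimulus feature vectors (e.g. outputs of
# a bank of visual filters per frame) and one voxel's fMRI response.
n_train, n_test, n_features = 200, 50, 40
X_train = rng.standard_normal((n_train, n_features))
true_w = rng.standard_normal(n_features)          # "true" voxel tuning
y_train = X_train @ true_w + 0.5 * rng.standard_normal(n_train)

X_test = rng.standard_normal((n_test, n_features))
y_test = X_test @ true_w + 0.5 * rng.standard_normal(n_test)

def fit_ridge(X, y, alpha=1.0):
    """Fit a linear encoding model with L2 (ridge) regularisation."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

w = fit_ridge(X_train, y_train)

# The "gold standard" check: how well does the fitted model predict
# responses to stimuli it has never seen?
pred = X_test @ w
r = np.corrcoef(pred, y_test)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

In the real work each voxel gets its own model and the stimulus features come from nonlinear transforms of the video, but the evaluation principle is the same: accuracy on unseen stimuli, not just fit on the training data.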
According to the article and other reports, the imagery isn’t very clear or detailed yet, looking more like a moving post-impressionist painting.
The tacky ’80s sci-fi flick ‘BrainStorm’ was built on a similar principle: scientists recording one person’s sensations so that others could experience them.
Lisa Krieger from the Mercury News added “In essence this project recorded blood activity in the brain of people watching video clips — then computers reconstructed what was watched. The scientists acted as subjects, sitting inside a functional magnetic resonance imaging, or fMRI, scanner for hours at a time. While they watched Hollywood movie trailers, the fMRI scanner measured blood flow through their visual cortex, the part of the brain that processes visual information.
This brain activity was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Then the images were reconstructed. This was done by feeding 5,000 hours of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
The computer matched the YouTube clips to the visual patterns created through brain activity — and produced a movie, albeit one that was blurry and distorted.”
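The pipeline Krieger describes — predict the brain activity each candidate clip would evoke, compare those predictions with the measured activity, and blend the best matches — can be sketched in miniature. Everything below is synthetic and hypothetical (a random "library" standing in for the YouTube clips, a toy encoding matrix standing in for the fitted models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoding matrix W mapping clip features to predicted
# voxel responses (assumed already fitted from training data).
n_voxels, n_features, n_library = 30, 20, 500
W = rng.standard_normal((n_voxels, n_features))

# A library of candidate clips: a feature vector per clip, plus a
# tiny 8x8 greyscale frame per clip for the reconstruction step.
library_feats = rng.standard_normal((n_library, n_features))
library_frames = rng.random((n_library, 8, 8))

# The clip the subject "actually watched" (clip 42 of the library,
# so the match can be checked) and the noisy measured fMRI response.
true_idx = 42
measured = W @ library_feats[true_idx] + 0.1 * rng.standard_normal(n_voxels)

# Predict the response every candidate clip would most likely evoke,
# then rank candidates by correlation with the measured response.
predicted = library_feats @ W.T           # shape: (n_library, n_voxels)

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([corr(p, measured) for p in predicted])
top_k = np.argsort(scores)[::-1][:10]

# Averaging the frames of the best-matching clips gives a blurry,
# distorted "reconstruction" -- which is why the published movies
# look like impressionist paintings rather than sharp video.
reconstruction = library_frames[top_k].mean(axis=0)
print("best match is the true clip:", top_k[0] == true_idx)
```

The blur in the published reconstructions falls out of this design: the output is an average over many roughly matching clips, not a pixel-for-pixel readout of the brain.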
Gallant notes that the system’s model of the early visual areas is still very primitive and under development, but it can pick out backgrounds and large objects.
He said “That’s why eyewitness testimony is so notoriously unreliable. It’s not like a videotape recording. What you retrieve and recreate is based on very little information, which is then filled in. It’s an artistic interpretation of what really happened.”
Kitguru says: What is the next stage?