Whenever I say anything is impossible, I always think of Arthur C. Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
Up until recently, I would have said mind-reading was impossible…but, even though I am neither distinguished nor elderly nor a scientist, it's beginning to appear as if it may not be impossible forever. Why? Because scientists have successfully reconstructed videos of what people have seen, simply by scanning their brain activity.
Sure, the resulting video is extremely blurry, but as with a singing dog, what's extraordinary is not that it sings well but that it sings at all.
The research, led by neuroscientist Jack Gallant at the University of California, Berkeley, follows up on previous research from 2008, when the same team reported it was able to use data from functional magnetic resonance imaging (fMRI) to determine, with 90-percent accuracy, which of a library of photographs a person was looking at while their brain activity was measured.
Reconstructing video, however, is much tougher. As Gallant points out, fMRI doesn't measure brain activity directly; rather, it measures blood flow to active areas of the brain. Blood flow is much, much slower than the back-and-forth signaling among neurons, and video, of course, changes all the time, producing a great deal of that fast signaling.
To get around that, Gallant and postdoctoral researcher Shinji Nishimoto designed a computer program that combined a model of thousands of virtual neurons with a model of how the activity of neurons affects blood flow in the brain. The program allowed them to translate the slow flow of blood into the much speedier flow of information among neurons.
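The core difficulty the program had to solve can be illustrated with a standard trick from fMRI analysis: the slow blood-flow (BOLD) signal is commonly approximated as fast neural activity smeared out by a "hemodynamic response function." The sketch below is an illustration of that general idea, not the team's actual model; the gamma-shaped response and all the numbers are assumptions chosen for clarity.

```python
import numpy as np

def hrf(t):
    # Illustrative gamma-shaped hemodynamic response: blood flow rises
    # slowly after a neural event, peaking around five seconds later.
    return t ** 5 * np.exp(-t) / 120.0

# Fast neural activity: brief bursts at t = 2 s and t = 10 s
# (one sample per second over a 30-second window).
t = np.arange(0, 30, 1.0)
neural = np.zeros_like(t)
neural[[2, 10]] = 1.0

# Predicted blood-flow signal: neural activity convolved with the slow HRF.
bold = np.convolve(neural, hrf(t))[: len(t)]

# The blood-flow peak lags each neural event by several seconds, which is
# why recovering a fast-changing stimulus like video requires inverting
# this mapping, as Gallant and Nishimoto's program did.
peak = int(np.argmax(bold))
```

Because each burst's response is spread over many seconds, the two events blur together in the blood-flow trace; undoing that blur is what translates "slow blood flow" back into "speedy information flow among neurons."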
Next, three volunteers, all neuroscientists involved in the project, watched hours of video clips while inside an fMRI machine. Gallant’s team built a “dictionary” that could successfully link the recorded brain activity with individual video clips. The computer learned, in other words, that a particular pattern on the screen corresponded to a particular pattern of brain activity.
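In machine-learning terms, that "dictionary" is an encoding model: a regression fit that predicts each brain region's response from features of the video. Here is a toy sketch of that idea using ridge regression; the feature vectors, dimensions, and noise level are all made up for illustration (the actual study used motion-energy features across thousands of voxels).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each second of video is summarized by a feature vector
# (a stand-in for real stimulus features), and each voxel's response is
# assumed to be a noisy linear function of those features.
n_seconds, n_features, n_voxels = 200, 10, 5
features = rng.normal(size=(n_seconds, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
responses = features @ true_weights + 0.1 * rng.normal(size=(n_seconds, n_voxels))

# Fit the "dictionary": ridge regression mapping video features -> responses.
lam = 1.0
W = np.linalg.solve(
    features.T @ features + lam * np.eye(n_features),
    features.T @ responses,
)

# The fitted model can now predict the brain activity a new clip should evoke.
new_clip_features = rng.normal(size=(1, n_features))
predicted_activity = new_clip_features @ W
```

Once such a model exists, it can be run in the forward direction on clips nobody has watched in the scanner, which is exactly what makes the next step possible.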
With their dictionary in place, the researchers gave the computer nearly 18 million seconds of new clips randomly downloaded from YouTube, none of which the volunteers had ever seen. Then the volunteers’ brain activity was run through the model, which was told to pick the clips most likely to trigger each second of activity: in other words, to reconstruct the volunteers’ video experience using building blocks of random moving pictures.
Say the volunteer saw someone sitting on the left side of the screen. The computer would look at the brain activity, then go to its library of clips and find the ones most likely to trigger that particular pattern—most likely, videos of people sitting on the left side of the screen.
The final videos were recreated from an average of the top 100 clips the computer deemed closest based on the brain activity. The averaging was necessary because even 18 million seconds of YouTube video didn’t come close to capturing all the visual variety in the original movie clips. As you would expect, the results are blurry—but still recognizably in the right ballpark, especially in cases where there are people simply sitting and talking to the camera.
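The reconstruction procedure described above can be sketched in a few lines: use the model to predict the brain activity each library clip should evoke, score every clip against the observed activity, and average the frames of the best matches. Everything below is an illustrative toy (the study ranked clips by posterior likelihood rather than the simple correlation used here, and its library held millions of seconds, not a thousand).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy library: each candidate clip has a model-predicted activity pattern
# and a frame of pixels.
n_clips, n_voxels, h, w = 1000, 50, 8, 8
predicted_activity = rng.normal(size=(n_clips, n_voxels))
clip_frames = rng.uniform(size=(n_clips, h, w))

# Observed activity for one second of viewing. For the demo, make it a
# noisy copy of clip 42's prediction so there is a known "right answer".
observed = predicted_activity[42] + 0.1 * rng.normal(size=n_voxels)

# Score each clip by how well its predicted activity matches the observation.
def score(pred, obs):
    return np.corrcoef(pred, obs)[0, 1]

scores = np.array([score(p, observed) for p in predicted_activity])

# Reconstruct: average the frames of the top-100 scoring clips, exactly
# the averaging step that makes the final videos blurry but recognizable.
top = np.argsort(scores)[::-1][:100]
reconstruction = clip_frames[top].mean(axis=0)
```

The averaging is also why the output is a haze rather than a crisp picture: the top 100 clips agree on coarse structure (a figure on the left of the screen, say) but differ in every fine detail.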
Presumably, the larger the library of clips the computer had to draw on, the closer it could come to capturing exactly what the individual had seen…offering the tantalizing possibility of someday accurately playing back another person’s visual experience.
The researchers hope to build models that mimic other brain areas, such as those related to thought and reason. The potential, however far down the road it may be, is for nothing less than machines that can read people’s thoughts, or even play back their dreams (opening up the fascinating possibility of professional dreamers, whose dreams would be so interesting people would pay to see them, or directors who would only have to vividly imagine their stories to see them on the screen, rather than go to all the trouble of actually filming them).
Sound impossible? Well, maybe. But remember Clarke’s Second Law: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
And if it all sounds like magic—which it certainly does—well, then, remember Clarke’s Third Law while you’re at it: “Any sufficiently advanced technology,” he said, “is indistinguishable from magic.”