Scientists have built a device that can extract rough images out of a human brain. Reporting on the breakthrough, Brian Resnick of Vox went so far as to dub it a "mind-reading machine," while admitting that "it doesn't work all that well."

Further reading shows that the machine is nothing so powerful as Cerebro, the telepathy device used by Professor X in the X-Men movies. That's science fiction. In real life, what exists is a system devised by Kuhl Lab researchers that consists of an fMRI scanner hooked up to an artificial intelligence program.

In a Journal of Neuroscience report, Brice A. Kuhl and Hongmi Lee described their lab's ambitious experimental setup. It involved 23 study participants who underwent fMRI scans of their brains. Functional magnetic resonance imaging allowed the neuroscientists to map minuscule changes in blood flow that accompany mental activity.

The participants' fMRI scans were taken as they viewed a thousand color photographs, each one depicting a different human face. The fMRI readings were fed into an artificial intelligence (AI) program, giving the AI a database of correspondences between mental-activity patterns and facial images.

The AI also received additional data in the form of mathematical "descriptions" of each facial image. The Kuhl Lab team assigned numbers to 300 different facial characteristics, formulating a code which the AI could process. Each photo was given a unique code that reflected the specific physical features of the face depicted.
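
The article doesn't say how those 300 numbers were derived. One standard way to compress face photos into a fixed-length numeric code is principal component analysis, the "eigenfaces" technique; the sketch below assumes that approach, with random placeholder data standing in for the real photographs.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder stand-in for the study's photographs: 1,000 face
# images, each flattened to a vector (here, 64x64 grayscale).
face_images = rng.random((1000, 64 * 64))

# Compress each face into a 300-number code, mirroring the 300
# facial characteristics described in the study.
pca = PCA(n_components=300)
face_codes = pca.fit_transform(face_images)  # shape: (1000, 300)

# The code runs both ways: a 300-number vector can be decoded
# back into a (blurry) face image.
blurry_face = pca.inverse_transform(face_codes[:1])
```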

As the AI accumulated data from each volunteer, it grew better at matching the brain-activity scans to the descriptive facial codes. Then the AI was put to the real challenge.
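
In machine-learning terms, that training step amounts to fitting a model that maps a voxel-activity pattern to a 300-number face code. The study's exact estimator isn't given here; ridge regression is a common choice for this kind of voxel-to-feature decoding, so this sketch uses it, with made-up array sizes.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder data: one voxel pattern per viewed face (1,000
# faces by, say, 5,000 voxels) and the matching face codes.
brain_patterns = rng.random((1000, 5000))
face_codes = rng.random((1000, 300))

# Learn a linear mapping from brain activity to face codes.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_patterns, face_codes)
```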

The participants went through another set of fMRI scans, only this time they viewed photos of faces not previously shown to any of them. This brain-scan data was sent to the machine without the coded descriptions. The AI then had to infer the facial images from the correspondences it had learned.
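
Under the same assumptions, the test phase is a straight prediction step: the decoder turns each new scan into a face code, and the code is inverted back into a rough image. Every array below is a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Recap of the training phase with placeholder data: face
# photos, their 300-number codes, and the matching scans.
train_faces = rng.random((1000, 64 * 64))
train_scans = rng.random((1000, 5000))
pca = PCA(n_components=300).fit(train_faces)
decoder = Ridge(alpha=1.0).fit(train_scans, pca.transform(train_faces))

# Test phase: scans taken while viewing never-before-seen
# faces arrive without any coded descriptions attached.
new_scans = rng.random((50, 5000))
predicted_codes = decoder.predict(new_scans)            # inferred codes
guessed_faces = pca.inverse_transform(predicted_codes)  # rough images
```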

The results were intriguing. The AI generated two "guesses" of what each new face looked like. One guess was based on fMRI scans of the angular gyrus, a section of the brain involved in memory processing. The second guess utilized scans of the occipitotemporal cortex, which processes visual input.
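
One plausible reading of that design is a separate decoder per brain region, each trained on its own voxels and each producing an independent guess; the sketch below assumes that structure, with invented voxel counts.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
face_codes = rng.random((1000, 300))  # placeholder training codes

# Placeholder voxel patterns for the two regions of interest
# (region names from the study; voxel counts are made up).
voxels = {"angular_gyrus": 2000, "occipitotemporal_cortex": 3000}
train = {r: rng.random((1000, n)) for r, n in voxels.items()}
test = {r: rng.random((50, n)) for r, n in voxels.items()}

# One decoder per region yields two independent guesses at the
# face code for each unseen face.
guesses = {}
for region in voxels:
    decoder = Ridge(alpha=1.0).fit(train[region], face_codes)
    guesses[region] = decoder.predict(test[region])
```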

The AI's reconstructions were far from exact, but they captured a good amount of useful information. The researchers presented the machine's guesses to a group of online survey respondents. At rates better than chance, the respondents picked out correct details of the facial images, answering questions about gender, skin color, and mood.
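
"Better than chance" is a testable claim. For a two-choice question, chance is 50 percent, and a simple binomial test tells you whether an observed accuracy beats it; the tally below is hypothetical.

```python
from scipy.stats import binomtest

# Hypothetical survey tally: out of 1,000 two-choice judgments
# about the reconstructions (e.g., male or female?), suppose
# 620 were answered correctly.
result = binomtest(k=620, n=1000, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.2e}")  # tiny p -> beats 50% chance
```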

If that's mind reading, then it's highly imperfect. But it suggests a possible path for artificial intelligence researchers to pursue, which may lead to future improvements in human-machine interfaces.
