
Mindreading 101? Identifying images by watching the brain

Researchers have reconstructed simple images by tracking activity in the …

The advent of techniques like PET scans and functional MRI has enabled researchers to observe the brain in action with unprecedented precision. One of the interesting aspects of these studies is that we can now perform a limited version of what might be called mind reading: identifying what's going on in the brain without having the owner of said brain describe it. In the latest development in the field of neuroimaging, researchers have watched the brain of someone watching an image, and were able to produce reasonable reconstructions of that image.

For those in the field, this might not come as a huge shock, as a number of results had hinted at a direct correlation between the visual field and neural activity. Researchers had already demonstrated that they could recognize which image a person was looking at when given a limited collection of pictures. The new work also builds on decades of research in animals, where specific aspects of a visual scene (a dark area in the lower left, say, or motion toward the top) each trigger activity in a limited subset of neurons. Separately, the visual cortex had been shown to contain a rough map of the eye's retina, implying a degree of spatial organization.

Put all of that together, and you can draw a reasonable inference: features of what someone is looking at will correlate with activity in specific areas of the visual cortex. Others have drawn precisely that inference and met with moderate success; the authors of the new paper cite previous work in which researchers identified small (3 x 3 pixel) images with over 50 percent accuracy simply by following the activity of the visual cortex.



Figure: Several trials (top) and an average (bottom) of reconstructions spelling out the name of the journal in which the results were published. © Elsevier

The new work significantly ups the ante by moving to 10 x 10 pixel black-and-white images, which are big enough to represent alphabetic characters. It also introduces a few methodological twists that improve its accuracy. The authors started out by showing subjects 10 x 10 images of random noise while using fMRI equipment to track activity in the visual cortex. After collecting a series of scans, they set machine learning algorithms loose on the data; these identified areas of the cortex that were consistently activated by elements of the image. Although this activity largely correlated with a retinal map, there were enough differences that the authors' data-driven mapping was more accurate than the retinal map alone.
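The general idea of training a decoder on random-noise stimuli can be sketched as follows. This is not the authors' actual model: the simulated data, the linear voxel response, the ridge penalty, and all of the dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pixels, n_voxels = 400, 100, 120  # 10 x 10 images

# Simulated training set: random binary stimuli, each driving
# voxel responses through an unknown linear mapping plus noise.
stimuli = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)
mixing = rng.normal(size=(n_pixels, n_voxels))  # stand-in for receptive fields
responses = stimuli @ mixing + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Ridge regression mapping voxel responses back to pixel values.
lam = 1.0
gram = responses.T @ responses + lam * np.eye(n_voxels)
weights = np.linalg.solve(gram, responses.T @ stimuli)  # (n_voxels, n_pixels)

# Decode a held-out stimulus from its (noisy) voxel response.
test_img = rng.integers(0, 2, size=n_pixels).astype(float)
test_resp = test_img @ mixing + 0.5 * rng.normal(size=n_voxels)
decoded = test_resp @ weights
accuracy = np.mean((decoded > 0.5) == (test_img > 0.5))
```

Random noise makes good training material for this kind of fit because every pixel is activated independently, so the regression can disentangle which voxels respond to which parts of the image.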

Separately, they adopted what they termed a multiscale reconstruction. In addition to the highest-resolution fMRI data (resolution here is measured in voxels, or volume pixels), the authors created a series of coarser, multi-voxel images (at one-, two-, and three-voxel scales) that partially overlapped one another. This extracted far more spatial information from the fMRI data than would otherwise be available, and slightly improved the performance of their image reconstruction.
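The overlapping-scales idea can be sketched like this; the grid size, patch shapes, and simple per-pixel averaging rule are assumptions for illustration, not the paper's actual reconstruction model:

```python
import numpy as np

GRID = 10  # 10 x 10 pixel images, as in the article

def multiscale_combine(scale_maps):
    """Average the per-pixel estimates contributed by overlapping
    patches decoded at several scales.

    scale_maps maps a patch size (height, width) to a dict of
    {(row, col): estimated mean contrast} for each patch anchored
    at that grid position.
    """
    total = np.zeros((GRID, GRID))
    counts = np.zeros((GRID, GRID))
    for (h, w), patches in scale_maps.items():
        for (r, c), value in patches.items():
            total[r:r + h, c:c + w] += value  # spread the estimate over the patch
            counts[r:r + h, c:c + w] += 1
    return total / np.maximum(counts, 1)  # per-pixel average of all estimates
```

With only the 1 x 1 scale this reduces to a plain per-pixel decoder; adding coarser, overlapping scales trades some spatial sharpness for robustness to noise in any single estimate.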

The results of any individual trial were somewhat mixed, and varied from person to person. But by averaging a series of trials derived from the same image (and using the individual for whom their software performed best), the authors were able to produce a striking reconstruction of the image that person was viewing.
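The benefit of averaging repeated trials follows from basic statistics: independent noise shrinks roughly with the square root of the number of trials. A toy illustration, with an arbitrary noise level and trial count:

```python
import numpy as np

rng = np.random.default_rng(1)
true_img = (rng.random((10, 10)) > 0.5).astype(float)

# Pretend each trial yields the true image buried in heavy noise.
n_trials = 20
trials = true_img + rng.normal(scale=1.0, size=(n_trials, 10, 10))

single_err = np.abs(trials[0] - true_img).mean()         # one trial alone
avg_err = np.abs(trials.mean(axis=0) - true_img).mean()  # average of all trials
```

Here averaging 20 trials should shrink the noise standard deviation by about a factor of 4.5, which is why the averaged reconstruction looks so much cleaner than any single trial.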

It's important to emphasize that this is nowhere close to the ability to reconstruct the sorts of complex, three-dimensional images our brain deals with as part of its daily activity. Still, the results confirm that we're on the right track when it comes to understanding the basics of how image processing is performed by the visual cortex.

Neuron, 2008. DOI: 10.1016/j.neuron.2008.11.004
