Can AI really recreate what we see from brain scans?

If you saw the images from Science magazine earlier this month, you were probably stunned by the claim that AI had successfully recreated images from scans of test subjects’ brains. We’ve held back on writing about this seemingly miraculous feat because we don’t want to add to AI hype, but so many people have asked us about it that it’s time to give our perspective.

The research from Osaka University used data from four people’s brain scans (so note this is a very small set of test subjects). The test subjects had viewed up to 10,000 images (each labelled with keywords to create a labelled dataset) whilst an fMRI scanner captured data about changes in blood flow to active regions of their brains. The AI model was trained on this data, then tested on scans from the same subjects that had not been used in training. In testing, the model predicted which label to apply to each scan and fed the associated keywords into a text-to-image generator, which then generated an image to fit the label.

So this isn’t a machine that can read our thoughts and construct images from our brainwaves. This is more like a clever guessing game over two stages: first, the model predicts what image the blood-flow pattern suggests the test subject was looking at; second, the keywords associated with the predicted image are descriptive enough to provide a good “prompt” to the text-to-image generator.
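To make those two stages concrete, here is a minimal sketch in Python of how such a pipeline might be wired together. Everything in it is our own illustrative assumption rather than the researchers’ actual code: the toy data stands in for preprocessed fMRI recordings, and text_to_image is a placeholder for a generator such as Stable Diffusion.

```python
# Conceptual sketch of the two-stage pipeline described above.
# Stage 1: a classifier maps an fMRI activity pattern to a keyword label.
# Stage 2: the predicted keywords become a prompt for a text-to-image model.
# The data and names here are illustrative stand-ins, not the study's code.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for preprocessed fMRI data: one feature vector per scan.
n_scans, n_voxels = 200, 500
X_train = rng.normal(size=(n_scans, n_voxels))

# Stand-in keyword labels for the images the subjects viewed.
keywords = ["teddy bear", "aeroplane", "clock tower", "snowboarder"]
y_train = rng.integers(0, len(keywords), size=n_scans)

# Stage 1: fit the scan -> keyword classifier on the training scans.
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def text_to_image(prompt: str) -> str:
    """Placeholder for a text-to-image generator such as Stable Diffusion."""
    return f"<image generated from prompt: {prompt!r}>"

# Stage 2: decode a held-out scan and turn the prediction into an image.
held_out_scan = rng.normal(size=(1, n_voxels))
predicted_keyword = keywords[classifier.predict(held_out_scan)[0]]
print(text_to_image(f"a photo of a {predicted_keyword}"))
```

Note what the sketch makes obvious: nothing here “reads” an image out of the brain. The first stage is an ordinary pattern classifier, and the second is an ordinary prompt-driven generator.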

Where next?

It is still impressive and, if the method were shown to work beyond the four people in this study, it could be helpful for people with brain injuries or speech difficulties.

Whilst AI can analyse brain scans and identify certain patterns, it cannot fully understand the nuances that are so important to the way we perceive the world around us. This is because human perception is not just based on patterns, but also on experience, context and emotion. We recognise objects and patterns based on our knowledge of the world, which allows us to make sense of complex situations. Crucially, AI lacks this contextual knowledge, which limits its ability to replicate how we perceive the world.

Are brain-computer interfaces the future?

Despite the limitations of AI in replicating human perception, AI can help us gain insights into how the brain works by analysing brain scans and identifying patterns. This can lead to significant advancements in fields such as neuroscience, psychology, and psychiatry.

Companies like Neuralink are pushing ahead with developing brain-computer interfaces (BCIs), which are devices that allow the brain to communicate directly with computers. BCIs have the potential to revolutionise healthcare by providing new treatments for conditions such as paralysis, dementia and depression.

The Osaka University research, and other AI methods, could in future be combined with BCIs to analyse brain scans and identify the patterns associated with certain conditions. This in turn may lead to alternative ways for people to communicate, regain use of their bodies, or overcome conditions by harnessing devices like exoskeletons.

We know that AI has come a long way in recent years, but it still has limitations in replicating human perception. While AI can analyse brain scans and identify patterns, it does not yet have the ability to construct images directly from our brainwaves. Beyond the hype, AI does have a key role to play in understanding the brain and developing new treatments. As AI continues to evolve, we expect to see significant advancements in these fields, but remain cautious about stories that look too good to be true.
