Your brain is a prediction machine that’s always on.

Summary: The brain acts as an always-on prediction machine, continuously comparing incoming sensory information with its internal predictions.

Source: Max Planck Institute

The findings support a recent theory about how our brain works: it is a prediction machine, constantly comparing the sensory information we pick up (like images, sounds, and language) with internal predictions.

“This theoretical idea is extremely popular in neuroscience, but the existing evidence is often indirect and limited to artificial situations,” says lead author Micha Heilbron.

“I would really like to understand precisely how it works and test it in different situations.”

Brain research on this phenomenon is usually done in an artificial setting, Heilbron explains. To evoke predictions, participants are asked to stare at a single pattern of moving dots for half an hour, or to listen to simple sound patterns like “beep beep boop, beep beep boop”.

“Studies like this do reveal that our brain can make predictions, but not that this always happens in the complexity of everyday life. We’re trying to get it out of the lab: we are studying the same type of phenomenon, how the brain processes unexpected information, but in much less predictable, natural situations.”

Hemingway and Holmes

The researchers analyzed the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. At the same time, they analyzed the texts of the books using computer models known as deep neural networks. This allowed them to calculate, for each word, how unpredictable it was.
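The paper behind this work (see the abstract below) names GPT-2 as the model used to quantify predictability. As a rough illustration only, the sketch below shows one common way to compute per-word surprisal with GPT-2 through the Hugging Face transformers library; tokenization, context handling, and alignment to the audiobook recordings are simplified here, and this is not the authors’ actual analysis pipeline.

```python
# Illustrative sketch: per-word unpredictability (surprisal) from GPT-2.
# Simplified; not the authors' pipeline.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisals(text):
    """Return (word, surprisal in bits) pairs for a short passage."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    with torch.no_grad():
        logits = model(input_ids.unsqueeze(0)).logits[0]
    # Log-probability assigned to each token given everything before it.
    log_probs = torch.log_softmax(logits[:-1], dim=-1)
    targets = input_ids[1:]
    nats = -log_probs[torch.arange(len(targets)), targets]
    bits = nats / torch.log(torch.tensor(2.0))
    # Merge sub-word tokens back into words (the very first token has no
    # left context, so it is skipped).
    words = []
    for tok_id, s in zip(targets.tolist(), bits.tolist()):
        piece = tokenizer.decode([tok_id])
        if piece.startswith(" ") or not words:
            words.append([piece.strip(), s])
        else:
            words[-1][0] += piece   # continuation of the previous word
            words[-1][1] += s       # sub-word surprisals add up
    return [(w, round(s, 2)) for w, s in words]

print(word_surprisals("It was a bright cold day in April, and the clocks were striking"))
```

Words that are unexpected in their context receive high surprisal values; likely continuations receive low ones.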

For each word or sound, the brain makes detailed statistical expectations and turns out to be extremely sensitive to the degree of unpredictability: the brain response is stronger whenever a word is unexpected in its context.

Our brain is an always-on prediction machine. Credit: AI-generated illustration via DALL-E (OpenAI) – Micha Heilbron

“In itself, this is not very surprising: after all, everyone knows that we can sometimes predict upcoming language. For example, your brain sometimes automatically ‘fills in the blanks’ and mentally completes someone else’s sentences, for instance when they speak very slowly, stutter, or can’t think of a word. But what we’ve shown here is that this happens all the time. Our brain is constantly guessing at words; the predictive machinery is always activated.”

More than software

“In fact, our brain does something comparable to voice recognition software. Voice recognition systems using artificial intelligence also make predictions all the time and let themselves be guided by their expectations, just like the autocomplete function on your phone.

“Nevertheless, we observed a big difference: brains not only predict words, but make predictions at many different levels, from abstract meaning and grammar to specific sounds.”

There are good reasons why tech companies remain interested in insights like this: they could be used, for example, to build better language and image recognition software. But such applications are not Heilbron’s primary focus.

“I would really like to understand how our predictive machinery works at a fundamental level. I am now working with the same research setup, but for visual and auditory perception, such as music.”

About this neuroscience research news

Author: Press office
Source: Max Planck Institute
Contact: Press Office – Max Planck Institute
Image: The image is credited to DALL-E (OpenAI) – Micha Heilbron

Original research: Closed access.
“A Hierarchy of Linguistic Predictions in Natural Language Understanding” by Micha Heilbron et al., PNAS

Summary

A Hierarchy of Linguistic Predictions in Natural Language Understanding

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming inputs.

However, the role of prediction in language processing remains contested, with disagreement over both the ubiquity and representational nature of predictions.

Here, we address both issues by analyzing brain recordings of participants listening to audiobooks and using a deep neural network (GPT-2) to accurately quantify contextual predictions.

First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics.

Finally, we show that high-level predictions (words) inform low-level predictions (phonemes), supporting hierarchical predictive processing.

Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts forthcoming language at multiple levels of abstraction.
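As a companion to the abstract above, here is a minimal, self-contained sketch of how per-word surprisal values could in principle be related to a continuous neural signal using a time-lagged linear regression (a simple temporal response function). The simulated data, sampling rate, and variable names are illustrative assumptions; this is not the authors’ analysis code.

```python
# Illustrative sketch: relating word-by-word surprisal to a neural signal
# with a time-lagged linear regression. Simulated data; assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 100                        # assumed sampling rate of the neural signal (Hz)
n_samples = 60 * sfreq             # one minute of simulated recording

# Stimulus regressor: an impulse at each word onset, scaled by that word's surprisal.
surprisal = np.zeros(n_samples)
onsets = rng.choice(n_samples - sfreq, size=150, replace=False)
surprisal[onsets] = rng.gamma(shape=2.0, scale=3.0, size=150)

# Simulated neural signal: surprisal convolved with a response kernel, plus noise.
t = np.arange(0, 0.5, 1 / sfreq)
kernel = t * np.exp(-t / 0.1)      # response peaking ~100 ms after word onset
neural = np.convolve(surprisal, kernel)[:n_samples] + rng.normal(0, 1, n_samples)

# Time-lagged design matrix (lags 0-500 ms) fitted by ordinary least squares.
lags = np.arange(int(0.5 * sfreq))
X = np.column_stack([np.roll(surprisal, lag) for lag in lags])
X[: lags.max()] = 0                # discard samples wrapped around by np.roll
beta, *_ = np.linalg.lstsq(X, neural, rcond=None)

print("Estimated response peaks at lag:", lags[beta.argmax()] * 1000 // sfreq, "ms")
```

Regressing the neural signal on lagged copies of the surprisal regressor recovers a response profile whose amplitude reflects how strongly unexpected words drive the signal, which is the kind of relationship the finding describes.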
