A Blog by Jonathan Low

 

Nov 20, 2017

An Algorithm That Can Tell You What You Learned Before You Fall Asleep

By measuring human brains' electrical activity - or the lack thereof. JL


Daniel Oberhaus reports in Motherboard:

Memory consolidation does indeed involve distinct patterns that can be seen in the brain’s electrical activity. An algorithm was able to effectively ‘read’ electrical activity from sleeping brains and determine what they were memorizing.
We experience a lot of shit every day, so it’s no surprise that we need a little time away from the sensory overload of daily life to process everything and save the important bits. Although most of us think of sleep as a period of rest, a mounting body of research characterizes it as an active brain state for consolidating memories.
While we sleep, our brains appear to engage in a memory consolidation process that involves reactivating the same neurons that fired when the experience first happened. What is less certain, however, is whether individual memories leave distinct traces in our neural activity as they are consolidated.

According to research presented at the Society for Neuroscience conference in Washington, DC, memory consolidation does indeed involve distinct patterns that can be seen in the brain’s electrical activity. Although these patterns may be invisible to a human observer, when the researchers fed brain activity from sleeping subjects to a machine learning algorithm, it was able to determine what each subject had learned before falling asleep.
In other words, an algorithm was able to effectively ‘read’ electrical activity from sleeping brains and determine what the subjects had been memorizing.
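
For readers curious what that kind of decoding looks like in practice, here is a minimal sketch. The article doesn't describe the team's actual features or classifier, so everything below (band-power-style feature vectors, a logistic-regression decoder built with scikit-learn, and synthetic data standing in for real sleep EEG) is an illustrative assumption, not the study's method.

```python
# Illustrative sketch only: the study's actual features and classifier are
# not described in the article, so everything here is an assumed stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Pretend data: one feature vector per sleep-EEG epoch, e.g. spectral power
# in a few frequency bands at each electrode (here 64 channels x 5 bands).
n_epochs, n_features = 400, 64 * 5
X = rng.normal(size=(n_epochs, n_features))
# Label 0 = the subject studied houses before sleep, 1 = faces.
y = rng.integers(0, 2, size=n_epochs)
# Inject a weak, distributed "faces" signature: far too subtle to spot in
# any single feature, but detectable in aggregate.
X[y == 1] += 0.05 * rng.normal(size=n_features)

# A simple linear decoder: standardize the features, then fit a
# logistic-regression classifier.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy: ~0.5 is chance; anything reliably above it means
# the sleep EEG carries a decodable trace of what was learned before sleep.
scores = cross_val_score(decoder, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On real recordings, the interesting question is simply whether accuracy like this climbs reliably above the 50 percent chance level.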
Now before we get carried away, this doesn’t mean that a computer can tell what you’re dreaming about, or determine what is being memorized outside a controlled clinical environment. But according to Monika Schönauer, a sleep scientist at the University of Tübingen in Germany and the lead researcher on the project, this is the first baby step in that direction.
Earlier this year, Schönauer and her colleagues published a series of experiments in which 32 subjects studied photos of either faces or houses shortly before falling asleep. On one night the subjects were shown a set of 100 photos of houses 30 times, and on a second night the same subjects studied 100 photos of faces. The subjects were then hooked up to an electroencephalograph (EEG), a device that measures electrical activity in the brain, and spent a full night sleeping in the laboratory.
The experiment was meant to determine whether the electrical activity in the brain contains signatures unique to previously learned information. In other words, a brain that memorized pictures of houses should show electrical patterns different from those of a brain that memorized pictures of faces. Past research has shown that humans retain information better when they go to sleep after learning rather than staying awake, so it seemed plausible that if memory consolidation left content-specific traces during sleep, these would be strongest when the content was absorbed directly before sleep.
“Others before us have already shown that entire brain regions that were active during learning are also more active during sleep,” Schönauer told me in an email. “But we are the first to show patterns of brain activity that relate to the specific content of learning.”
The content-specific activity in the brain referenced by Schönauer is essentially invisible to a human observer. This is because the changes in EEG data are very subtle and only emerge when the data is considered in aggregate across multiple subjects, something that would be nearly impossible for a human to parse.
Still, based on Schönauer’s research, it seems that memory consolidation happens during discrete moments in the night rather than being an ongoing, continuous process. As Schönauer and her colleagues discovered, these changes are content specific, and distinct enough in aggregate that a machine learning algorithm could tell whether an EEG recording came from a subject who had memorized pictures of houses or from one who had memorized pictures of faces.
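
A standard way to show that such patterns hold in aggregate across multiple subjects, rather than being quirks of one person's brain, is leave-one-subject-out cross-validation: train the decoder on every subject but one, then test it on the person it has never seen. The article doesn't say this is the evaluation Schönauer's team used, so the sketch below, with its invented subject counts and synthetic data, is just a hedged illustration of the technique.

```python
# Sketch of leave-one-subject-out evaluation; the subject counts, signal
# strength, and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subjects, epochs_per_subject, n_features = 32, 20, 320

# A content signature shared across subjects; each subject's epochs carry
# a faint copy of it whenever the "faces" condition was studied.
shared_pattern = rng.normal(size=n_features)

X, y, groups = [], [], []
for subj in range(n_subjects):
    labels = rng.integers(0, 2, size=epochs_per_subject)  # 0 = houses, 1 = faces
    data = rng.normal(size=(epochs_per_subject, n_features))
    data[labels == 1] += 0.05 * shared_pattern
    X.append(data)
    y.append(labels)
    groups.append(np.full(epochs_per_subject, subj))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Train on 31 subjects, test on the held-out one, rotating through all 32;
# above-chance accuracy here means the pattern generalizes across brains.
scores = cross_val_score(decoder, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"held-out-subject accuracy: {scores.mean():.2f}")
```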


Although the reprocessing of specific information has been demonstrated in other animals, Schönauer and her colleagues are the first to demonstrate this same phenomenon in humans.
For now, the algorithm was only able to tell whether a subject had studied houses or faces, not which particular house or face they had studied. According to Schönauer, this is because the way a brain processes a house versus a face is “significantly different,” whereas the fine-grained detail and knowledge about the brain that would be necessary to differentiate which house or face a person had memorized just aren’t there yet.
“The field is a long way away from discriminating which single faces or objects have been viewed, and that is particularly true when talking about recordings of electrical brain activity [as opposed to fMRI],” Schönauer told me. “However, in the future, I am sure that it will become possible to differentiate what someone has learned before in a more fine-grained fashion, but we are far from there yet.”
