A Blog by Jonathan Low

 

Mar 14, 2020

Why Computers May Never Be Able To Read Your Mind

Hollywood and the CIA may want it, but researchers say they are not even close. JL

Nicole Kobie reports in Wired:

Even decoding speech isn't easy. We simply don't have enough data. It's difficult to install brain implants, so it's not frequently done. Another reason this is difficult: our brains don't all respond the same. There's still fundamental knowledge of the brain that we need to have before any of this will work. Until then, we won't be able to read speech, let alone inner thoughts. "Even if we were perfectly able to distinguish words someone tries to say from brain signals, this is not even close to mind reading or thought reading."
Edward Chang can't read your thoughts. Whenever the neuroscientist's lab at the University of California, San Francisco publishes a new piece of research, there's always a familiar refrain: that he's created "mind-reading technology" or can "read your thoughts". He's not alone; it's a phrase that follows much of the research into brain-computer interfaces and speech decoding.
And no wonder, when Elon Musk's startup Neuralink claims it will eventually enable "consensual telepathy" and Facebook – one of the funders of Chang's lab – says it wants to let people send messages by just thinking the words rather than tapping them out on a phone – an example of a brain-computer interface (BCI).
But Chang isn't trying to read minds; he's decoding speech in people who otherwise can't speak. "We're not really talking about reading someone's thoughts," Chang says. "Every paper or project we've done has been focusing on understanding the basic science of how the brain controls our ability to speak and understand speech. But not what we're thinking, not inner thoughts." Such research would have significant ethical implications, but it's not really possible right now anyway – and may never be.
Even decoding speech isn't easy. His most recent paper, in Nature last year, aimed to translate brain signals produced by speech into words and sentences read aloud by a machine, with the goal of helping people with diseases such as amyotrophic lateral sclerosis (ALS) – a progressive neurodegenerative disease that affects nerve cells in the brain and spinal cord. "The paper describes the ability to take brain activity in people who are speaking normally and use that to create speech synthesis – it's not reading someone's thoughts," he says. "It's just reading the signals that are speaking."
The technology worked — to an extent. Patients with electrodes embedded in their brains were read a question and spoke an answer. Chang's system could accurately decipher what they heard 76 per cent of the time and what they said 61 per cent of the time by looking at their motor cortex to see how the brain fired up to move their mouth and tongue. But there are caveats. The potential answers were limited to a small predefined set, making the algorithm's job a bit easier. Plus, the patients were in hospital having brain scans for epilepsy, and could therefore speak normally; it's not clear how this translates to someone who can't speak at all.
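To illustrate the kind of problem such a decoder is solving – and why a limited answer set makes it easier – here is a minimal, purely hypothetical sketch in Python. It is not Chang's published pipeline: it assumes you already have one feature vector per trial (say, high-gamma power per motor-cortex electrode) recorded while a patient spoke, labelled with which of a handful of candidate answers was said, and it trains an off-the-shelf classifier to choose among them. All names and data below are stand-ins.

```python
# Purely illustrative sketch, not the published method: stand-in data in place
# of real recordings. Each trial is a flattened feature vector (e.g. high-gamma
# power per motor-cortex electrode per time bin) labelled with the answer the
# patient actually spoke, drawn from a small fixed set of options.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

ANSWERS = ["yes", "no", "fine", "cold", "tired"]   # hypothetical closed answer set
n_trials, n_electrodes, n_bins = 200, 64, 20       # hypothetical recording dimensions

X = rng.normal(size=(n_trials, n_electrodes * n_bins))   # stand-in neural features
y = rng.integers(len(ANSWERS), size=n_trials)            # which answer was spoken

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# With only a handful of possible answers, a simple linear classifier is enough
# to show the shape of the task; open-vocabulary decoding is far harder.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_train, y_train)

print("decoded-answer accuracy:", decoder.score(X_test, y_test))
```

On random stand-in data the accuracy sits near chance (20 per cent for five answers); the point is only that restricting the vocabulary turns decoding into a small classification problem, which is part of why figures like the 61 and 76 per cent above don't translate to open-ended speech, let alone thought.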
"Our goal is to translate this technology to people who are paralysed," he says. "The big challenge is understanding somebody who's not speaking. How do you train an algorithm to do that?" It's one thing to train a model using someone you can ask to read out sentences; you scan their brain signals while they read out sentences. But how do you do that if someone can't speak?
Chang's lab is currently in the middle of a clinical trial attempting to address that "formidable challenge", but it's as yet unclear how speech signals change for those unable to speak, or if different areas of the brain need to be considered. "There are these fairly substantial issues that we have to address in terms of our scientific knowledge," he says.
Decoding such signals is challenging in part because of how little we understand about how our own brains work. And while systems can be more easily trained to move a cursor left or right, speech is complicated. "The main challenges are the huge vocabulary that characterise this task, the need of a very good signal quality – achieved only by very invasive technologies – and the lack of understanding on how speech is encoded in the brain," says David Valeriani of Harvard Medical School. "This latter aspect is a challenge across many BCI fields. We need to know how the brain works before being able to use it to control other technologies, such as a BCI."
And we simply don't have enough data, says Mariska van Steensel, assistant professor at UMC Utrecht. It's difficult to install brain implants, so it's not frequently done; Chang used epilepsy patients because they were already having implants to track their seizures. While sitting around waiting for a seizure to strike, a handful of them were willing to take part in his research, partly out of boredom. "On these types of topics, the number of patients that are going to be implanted will stay low, because it is very difficult research and very time consuming," she says, noting that fewer than 30 people have been implanted with a BCI worldwide; her own work is based on two implants. "That is one of the reasons why progress is relatively slow," she adds, suggesting that a shared database of such work could help researchers pool information.
There's another reason this is difficult: our brains don't all respond the same. Van Steensel has two patients with implants, allowing them to make a mouse click with brain signals by thinking about moving their hands. In the first patient, with ALS, it worked perfectly. But it didn't in the second, a patient with a brain-stem stroke. "Her signals were different and less optimal for this to be reliable," she says. "Even a single mouse click to get reliable in all situations… is already difficult."
This work differs from that of startups such as NextMind and CTRL-Labs, which use external, non-invasive headsets to read brain signals; such headsets lack the precision of an implant. "If you stay outside a concert hall, you will hear a very distorted version of what's playing inside — this is one of the problems of non-invasive BCIs," says Ana Matran-Fernandez, artificial intelligence industry fellow at the University of Essex. "You will get an idea of the general tempo... of the piece that's being played, but you can't pinpoint specifically each of the instruments being played. This is the same with a BCI. At best, we will know which areas of the brain are the most active — playing louder, if you will — but we won't know why, and we don't necessarily know what that means for a specific person."
Still, tech industry efforts — including Neuralink and Facebook — aren't misplaced, says Chang, but they're addressing different problems. Those projects are looking at implant or headset technology, not the hard science that's required to make so-called mind reading possible. "I think it's important to have all of these things happening," he says. "My caveat is that's not the only part of making these things work. There's still fundamental knowledge of the brain that we need to have before any of this will work."
Until then, we won't be able to read speech, let alone inner thoughts. "Even if we were perfectly able to distinguish words someone tries to say from brain signals, this is not even close to mind reading or thought reading," van Steensel says. "We're only looking at the areas that are relevant for the motor aspects of speech production. We're not looking at thoughts — I don't even think that's possible."
