A Blog by Jonathan Low

 

Apr 25, 2023

Can AI-Enabled Facial Recognition Really Identify Diseases In Humans?

It is theoretically possible. The challenge is that many diseases have multiple causes as well as indicators, so AI is most likely to be a supplement or tool to be used by doctors in determining both the actual condition and the best means of remedying it. JL 

Eric Niiler reports in the Wall Street Journal:

Johns Hopkins is training an algorithm to recognize changes in patients’ features that might indicate damage to the brain from a stroke as opposed to seizures or migraines. “We could measure, then leverage analytical techniques and AI to process information and generate insights.” The most successful applications of AI in medicine are when a physician uses an AI software program that can interpret images and the physician can agree or disagree with the program; the AI acts as a backup for the doctor’s diagnosis. As AI tackles health conditions that have multiple causes, computer scientists will have to work with doctors to explain how the AI makes the decisions that lead to its diagnosis.

Patients at Johns Hopkins Hospital who are suspected of having a stroke might get an unusual request from physicians: Can we film your face? The doctors’ goal is to identify stroke patients by facial characteristics instead of waiting for brain scans or blood tests, helping speed both treatment and recovery.

The Johns Hopkins team is training a computer algorithm to recognize changes in the patients’ features, such as the paralysis of certain facial muscles or unusual eye movements, that might indicate damage to the brain from a stroke as opposed to seizures, severe migraines or anxiety disorders. 

“The face is probably one of the most sophisticated signaling systems in the universe,” says Robert David Stevens, director of precision medicine and chief of the division of informatics, integration and innovation at the Johns Hopkins School of Medicine. “Maybe we could actually measure what’s happening and then leverage advanced analytical techniques and artificial intelligence to process large amounts of information and generate new insights.”

Meanwhile, other researchers at the Massachusetts Institute of Technology are looking at facial recognition to diagnose the progression of amyotrophic lateral sclerosis, or ALS, a degenerative nerve disease that affects the muscles. And a Florida-based startup has developed a tool to help pediatricians diagnose rare genetic conditions by analyzing images of children’s facial features.

 

Some medical experts say that these technologies won’t be ready for widespread use until doctors and their patients can assess how facial recognition algorithms make decisions with patient data so that humans can better trust their outcomes.

Early research efforts point to a future in which facial scans, perhaps embedded in a smartphone camera or even a bathroom mirror, might monitor our general health while picking up signs of long-term neurological ailments such as dementia. Some researchers believe algorithms might even be used to track how well a treatment or drug is working by detecting changes in a person’s face.

“The problem is getting people to act on the data and trust the data,” says Ken Stein, chief medical officer of Boston Scientific, a biomedical firm that uses AI algorithms in its heart monitors to predict the risk of heart failure in some patients.

So far, the most successful applications of artificial intelligence in medicine are when a physician uses an AI software program that can interpret images—say X-rays or other kinds of scans—and the physician can say immediately whether they agree or disagree with the program, according to Dr. Stein. In those cases, the AI acts as a backup for the doctor’s diagnosis.

“If you do a whole bunch of them, you learn whether or not you can trust it,” Dr. Stein says about the X-ray image analysis.

As AI tackles health conditions that have multiple causes — such as heart disease, cancer or dementia — computer scientists who develop the algorithms will have to work closely with doctors to explain how the AI makes its decisions that lead to its diagnosis, he notes.  

First developed in the early 1970s, facial-recognition technology took off in the early 1990s when a team at MIT translated facial images into a series of numbers that could be understood by a computer. In recent decades, facial-recognition technology improved through Pentagon-funded research has been widely adopted by police to identify criminal suspects. However, civil-rights groups have raised concerns that some facial-recognition programs are biased because they have been less accurate in identifying individuals with darker skin, leading to false arrests. Facebook shut down its facial-recognition program in 2021, citing concerns over users’ privacy.

Dr. Robert D. Stevens of Johns Hopkins School of Medicine. Photo: Johns Hopkins University

Despite these concerns, researchers are hoping to use artificial intelligence both to identify early signs of stroke risk and other neurological conditions before an event happens, and to diagnose the event after it has occurred.

When a patient has a stroke, blood flow to the brain is blocked, which can destroy or damage areas of the brain, including those that control memory, speech and various facial muscles. 

“Can we leverage the face as a sort of decodable window on what’s happening inside the body?” says Dr. Stevens.

In the Johns Hopkins study, researchers take video of patients suspected of experiencing a stroke, whether they are already admitted to the hospital or just arriving.

The videos are uploaded to a database that is used to train the algorithm. Researchers have enrolled about 120 of a planned 400 patients in the study and hope the larger dataset will improve the stroke-detection algorithm’s accuracy. In a preliminary test of 40 patients who had already been diagnosed by a physician, the algorithm was 70 percent accurate in determining whether or not a patient had a stroke.
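For readers curious about what such an algorithm might actually compute, below is a minimal sketch of one plausible frame-level feature: how far one mouth corner droops relative to the other, a rough proxy for the facial-muscle paralysis the team describes. This is not the Johns Hopkins method; the function name, landmark-index arguments and normalization are my own assumptions, and the landmark coordinates themselves would come from an off-the-shelf detector such as MediaPipe Face Mesh or dlib.

import numpy as np

def mouth_droop_score(landmarks, left_mouth, right_mouth, left_eye, right_eye):
    """Crude facial-droop score from 2-D facial landmarks.

    `landmarks` is an (N, 2) NumPy array of (x, y) points produced by any
    face-landmark detector; the index arguments identify the two mouth
    corners and the two outer eye corners. Returns the difference in how
    far each mouth corner sits below the eye line, normalized by the
    distance between the eyes, so 0.0 means perfectly level lips.
    """
    eye_l, eye_r = landmarks[left_eye], landmarks[right_eye]
    mouth_l, mouth_r = landmarks[left_mouth], landmarks[right_mouth]

    # Unit vector along the eye line and its normal (pointing "down" the face).
    eye_axis = (eye_r - eye_l) / np.linalg.norm(eye_r - eye_l)
    normal = np.array([-eye_axis[1], eye_axis[0]])

    # How far each mouth corner sits along the facial "vertical" axis.
    drop_l = np.dot(mouth_l - eye_l, normal)
    drop_r = np.dot(mouth_r - eye_l, normal)

    # Normalize by inter-ocular distance so the score is scale-invariant.
    return abs(drop_l - drop_r) / np.linalg.norm(eye_r - eye_l)

A real stroke-screening classifier would combine many such features, including eye movements and speech, across thousands of video frames rather than rely on a single number.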


Dr. Stevens says the team is also examining patients’ vital signs, such as blood pressure and heart rate, by analyzing how a directed source of light reflects off the skin of the face, which varies slightly depending on blood flow beneath the skin’s surface.

“Everybody’s face is actually oscillating in color very imperceptibly that can be detected with a camera,” says Dr. Stevens. “Using a very simple algorithm you can infer the heart rates, you can infer how regular that heart rate is, the oxygen level in the blood, and even infer the blood pressure.” 
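What Dr. Stevens describes is known in the research literature as remote photoplethysmography: the blood-volume pulse subtly modulates skin color, and the pulse rate can be recovered from the dominant frequency of that oscillation. The sketch below is a generic, textbook-style version rather than his team’s implementation (the function name and frequency band are my assumptions); it estimates heart rate from the average green-channel brightness of a facial region across video frames.

import numpy as np
from scipy.signal import detrend

def heart_rate_bpm(green_means, fps):
    """Estimate pulse rate from subtle color oscillations in face video.

    `green_means` holds the average green-channel intensity of a skin
    region (e.g., the forehead) in each frame; `fps` is the camera frame
    rate. The strongest frequency between 0.7 and 3 Hz (42-180 beats per
    minute) is taken as the heart rate.
    """
    signal = detrend(np.asarray(green_means, dtype=float))  # remove slow lighting drift
    spectrum = np.abs(np.fft.rfft(signal))                  # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 3.0)                  # plausible pulse frequencies
    peak_freq = freqs[band][np.argmax(spectrum[band])]      # strongest oscillation in band
    return peak_freq * 60.0                                 # Hz -> beats per minute

Inferring blood pressure or blood-oxygen levels the same way, as Dr. Stevens suggests, is considerably harder and typically requires multiple color channels and careful calibration.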

Florida-based biotech firm FDNA has developed a software program that aims to use facial recognition to diagnose rare genetic conditions in young children. The Face2Gene platform allows a doctor to upload scans of a patient’s face to a smartphone app and then get a recommendation on whether the image might indicate one of 1,500 conditions or syndromes associated with facial features. The platform has 47,000 users, including geneticists, neurologists, pediatric specialists and researchers. The benefit is early detection, according to FDNA spokesman Erik Feingold.


In Boston, researchers at Massachusetts General Hospital and MIT are using facial recognition to identify and track ALS, a progressive neurodegenerative disease that affects nerve cells in the brain and spinal cord, which leads to deterioration of muscles affecting movement, speech and eventually breathing. 

The team is working with EverythingALS, a nonprofit patient group that is part of a foundation set up to speed methods of diagnosis and potential cures for the disease. The group is the brainchild of Indu Navar, a tech entrepreneur whose pursuit of a faster ALS diagnosis is a personal one.

Back in 2016, her husband Peter Cohen, a former Amazon executive, felt his ankle getting weak and had some difficulty walking. A chiropractor told him to see a neurologist, who told him to wait and see whether it went away or was the result of a viral infection.

“It took us two years to get diagnosed,” says Ms. Navar. “He continued to deteriorate, and we were told ‘let’s wait and see.’”

Mr. Cohen was finally diagnosed with ALS and died in 2019 at the age of 52, says Ms. Navar. 

In the last weeks of his life, Ms. Navar and her husband talked about how to improve ALS diagnosis through imaging technology and artificial intelligence. She even filmed him walking in hopes of understanding the disease’s progression. 

“We want to find better ways to measure symptoms and better ways to tell if a drug is working,” said Ernest Fraenkel, professor of biological engineering at MIT who is working with EverythingALS.

Dr. Fraenkel and his colleagues have developed an algorithm that analyzes video of ALS patients to track facial movements, measure the space between the lips (an early indicator of a possible diagnosis) and detect changes in speech patterns. The group has recruited 1,000 volunteers in the past 18 months. The team is trying to determine if it can tell whether new ALS drugs in clinical trials are working or not.
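As one illustration of the kind of measurement described here, the sketch below computes a per-frame lip aperture from facial landmarks and fits a trend across recording sessions, the slope of which would be one candidate marker of progression. The function names and the choice of normalization are assumptions of mine, not details of the MIT and EverythingALS pipeline.

import numpy as np

def lip_aperture(landmarks, upper_lip, lower_lip, left_eye, right_eye):
    """Distance between the lips, normalized by inter-ocular distance so
    the measure is comparable across cameras and recording distances."""
    gap = np.linalg.norm(landmarks[upper_lip] - landmarks[lower_lip])
    scale = np.linalg.norm(landmarks[left_eye] - landmarks[right_eye])
    return gap / scale

def progression_slope(session_days, mean_apertures):
    """Fit a straight line to the per-session average lip aperture; the
    slope (change per day) is one crude indicator of how quickly facial
    movement is declining."""
    slope, _intercept = np.polyfit(session_days, mean_apertures, deg=1)
    return slope

In practice such a measure would be combined with speech-rate and articulation features, and judged against clinical scales, before anyone used it to assess whether a trial drug is working.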

Despite the promising early results, the use of artificial intelligence today is more of a tool than a cure, Dr. Fraenkel says. “Early diagnosis is tough, but there is strong evidence that it is going to work eventually.”

Corrections & Amplifications

A team that has developed an algorithm to analyze video of ALS patients is trying to determine if it can tell whether new ALS drugs in clinical trials are working or not. An earlier version of the article incorrectly said the team is trying to determine if it can tell whether one of the three drugs approved for treating ALS symptoms is working or not.
