A Blog by Jonathan Low

 

Feb 17, 2020

The Downside of AI Diagnosis By Smartphone


Based on tech's history with personal information, abuse of such data driven by economic imperatives is not just possible, but probable. JL


Lois Parshley reports in Vox:

Deducing details of a person’s health through how often they text is digital phenotyping. Phenotypes are traits derived from how your genes interact with your environment. These tools may predict illnesses before they would be diagnosed. (But) insurance companies could base rates on conditions you might not be aware of. Could educational institutions access some of this behavioral health information in deciding admission? Employers (could) filter candidates based on what apps you use. “Are there repercussions they may not be aware of because their data is out there?”
It’s all too easy in these chaotic times to understand how someone with a stressful job might start feeling isolated at work, wrestle with anxiety, and develop insomnia. That’s how Katie, a young lawyer, found herself increasingly disconnected, spending her weekends in bed.
“She was just trying to get through the day,” says Caroline Ogilvy, an independent clinical social worker who met Katie when she came into her primary care office, an affiliate of Boston’s Brigham and Women’s Hospital, to get help with her depression. (Because of medical privacy laws, “Katie” is a pseudonym.) At the time, Ogilvy was recruiting patients for research the hospital was conducting with Companion MX, an app that uses data collected from cellphones to monitor patients’ mental health, and Katie’s symptoms made her eligible to participate.
Patients like Katie who used Companion MX had their location, screen time, and outgoing calls and texts tracked via their smartphones, in addition to recording mood logs through the app, which were analyzed using voice analysis. The app turned all this data into scores for mood, interest, social connections, and energy — categories that can be used to coach patients toward behavior changes.
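The article doesn't say how Companion MX turns raw signals into those scores. As a rough, hypothetical sketch of the general idea, here is one way a day of passively collected phone data might be rolled up into simple 0-to-100 behavioral scores (the signal names and baselines are illustrative assumptions, not the company's model; the mood and interest scores, which rely on the voice diaries, are omitted):

```python
# Hypothetical sketch only, not Companion MX's actual scoring: roll one day of
# passively collected phone signals up into simple 0-100 behavioral scores.
from dataclasses import dataclass

@dataclass
class DaySignals:
    outgoing_calls: int    # calls placed that day
    outgoing_texts: int    # texts sent that day
    km_traveled: float     # distance covered, estimated from location traces

def scale(value: float, typical_max: float) -> float:
    """Map a raw signal onto 0-100, capped at an assumed typical maximum."""
    return min(value / typical_max, 1.0) * 100

def daily_scores(day: DaySignals) -> dict:
    # Illustrative baselines; a real system would learn these per patient.
    social = scale(day.outgoing_calls + day.outgoing_texts, typical_max=20)
    energy = scale(day.km_traveled, typical_max=10)
    return {"social_connection": round(social), "energy": round(energy)}

print(daily_scores(DaySignals(outgoing_calls=2, outgoing_texts=5, km_traveled=3.2)))
```

A clinician coaching a patient would care less about any single day than about the trend, for example whether weekend scores keep sliding relative to weekdays.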
Deducing details of a person’s health through how often they text or when they leave home is called digital phenotyping, and it’s a rapidly growing area of research. Phenotypes, generally speaking, are the traits, like behavior and appearance, derived from how your genes interact with your environment. Today, these environmental interactions happen not just in the physical world, but online, too.
Some researchers are now even using the term as a catch-all for the data people leave behind on the internet, social media, and other technology. The smartphones, Fitbits, sleep trackers, and other connected devices that constantly surround us generate an incredible amount of rich social and behavioral data. Jukka-Pekka Onnela, a network scientist at the Harvard T.H. Chan School of Public Health who has helped pioneer the study of using cellphone data for medical purposes, explains that something as simple as a text message can reveal a lot about someone’s health. “It’s like a micro-cognitive assessment. You have to have executive function, memory, linguistic function — it’s these little things that turn out to be incredibly informative about a person’s state.”
In the future, Onnela explains, digital phenotyping could help doctors diagnose mental illnesses — such as depression and anxiety — as well as a wide variety of others, including Parkinson’s disease. Scientists have analyzed how people use their phones to predict Parkinson’s disease with 100 percent accuracy. Technology may also provide new ways of monitoring and treating the disease: Another study found that accelerometer data from cellphones and smartwatches, which measure how your devices move through space, can estimate the severity of Parkinson’s tremors, helping patients understand their disease progression.
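The studies behind those Parkinson's findings aren't described in detail here, but one common way to estimate tremor severity from accelerometer data is to measure how much of the signal's power falls in the roughly 4 to 6 Hz band where Parkinsonian rest tremor typically appears. A minimal sketch of that idea, assuming Python with NumPy and SciPy rather than any particular study's pipeline:

```python
# Sketch of one standard tremor-scoring idea (not the cited study's method):
# the fraction of accelerometer power in the 4-6 Hz Parkinsonian tremor band.
import numpy as np
from scipy.signal import welch

def tremor_band_power(accel: np.ndarray, fs: float = 50.0) -> float:
    """Fraction of power in the 4-6 Hz band for accelerometer magnitudes at fs Hz."""
    freqs, psd = welch(accel - accel.mean(), fs=fs, nperseg=min(len(accel), 256))
    band = (freqs >= 4) & (freqs <= 6)
    return float(psd[band].sum() / psd.sum())

# Simulated example: a 5 Hz tremor riding on sensor noise.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
print(f"tremor band power fraction: {tremor_band_power(signal, fs):.2f}")
```

A rising fraction over time would suggest worsening tremor, which is the kind of trend that could help patients track disease progression between clinic visits.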
Many of these tools may soon be able to predict illnesses before they would otherwise be diagnosed. Take the language used in Facebook posts, which scientists believe can predict conditions as disparate as diabetes and depression. (Some of the links can seem bizarre: One study published in the scientific journal PLOS ONE found that people who used words such as “God” or “pray” in their posts were 15 times more likely to have diabetes — which may just mean language reveals geographic and economic differences, which have huge impacts on health.) Other researchers have run studies demonstrating how smartphones’ speakers and microphones can be used to accurately detect important changes in breathing that occur before opioid overdose deaths — which might enable your phone to alert emergency services, even if you were unconscious.
But as digital phenotyping’s predictive ability and use broadens, it raises critical questions about whether it could consolidate information and power in the hands of those who already have it, and keep it from those who don’t.

Medical innovations are notoriously slow. It took more than 200 years for the thermometer to catch on. Even today, the average lag time between research discovery and adoption is 17 years.
But the public is hungry for information that was once the exclusive domain of doctors — demonstrated first by WebMD or Drugs.com, and then in a ballooning number of startups that hoped to help consumers more easily access their own health data, like ill-fated Theranos, which falsely claimed it could provide test results from tiny amounts of blood. More recent efforts include K, an app that lets you input symptoms and then uses artificial intelligence to give you a potential diagnosis, and the dystopian Hu-manity.co, which helps people sell their patient data directly to pharmaceutical companies, declaring: “Everyone has the right to legal ownership of their inherent human data as property.”
These innovations come at a time when public health crises may require rapid adaptation. In the 20 years that Companion MX’s co-founder, Carl Marci, has been a psychiatrist, rates of depression have risen dramatically. More than 13 percent of Americans age 18 to 25 have experienced major depressive episodes in the past year; suicides have also spiked in older adults, with rates in men between ages 45 and 64 alone rising 45 percent since 2000. Or take opioids: Since 1999, more than 400,000 Americans have died from opioid overdoses, about six times the number that died in the Vietnam War.

Illustration of a stethoscope attached to a cell phone, by Efi Chalikopoulou

In the midst of these health crises, Companion MX is using machine learning — computer programs that can learn patterns from datasets — to analyze voice recordings. This includes measuring factors that reveal “not what you say, but how you say it,” Marci explains, such as how fast someone’s talking, variations in their tone, and other signs of vocal stress in order to screen for depression.
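Companion MX's model isn't public, but the kinds of "how you say it" features Marci describes (speaking rate, pitch variation, loudness) can be sketched with an off-the-shelf audio library. The feature choices and the file name below are assumptions for illustration, not the company's pipeline; this assumes the librosa package:

```python
# Hypothetical voice-diary features, not Companion MX's model: pitch variability,
# loudness variability, and a crude speaking-rate proxy.
import numpy as np
import librosa

def voice_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)            # mono audio at 16 kHz
    duration = len(y) / sr

    # Pitch track; low variation can hint at flattened affect.
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    pitch_std = float(np.nanstd(f0))

    # Loudness variation from short-time energy.
    energy_std = float(np.std(librosa.feature.rms(y=y)[0]))

    # Crude speaking-rate proxy: acoustic onsets per second.
    rate = len(librosa.onset.onset_detect(y=y, sr=sr)) / duration if duration else 0.0

    return {"pitch_std_hz": pitch_std, "energy_std": energy_std, "onsets_per_sec": rate}

# print(voice_features("mood_log.wav"))  # hypothetical recording from the app
```

Features like these would then feed a model trained against clinician-rated symptom scores; any screening claim lives or dies on that training data.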
Traditionally, doctors like Marci have had to rely on patients self-reporting their mental health symptoms during office visits. A new wave of apps, however, can monitor patients in real time. For example, the startup Spire Health focuses on physical biomarkers such as pulse and activity rates, with a wearable it claims can forecast stress. Others, such as Mindstrong, a new Palo Alto-based company, track your cellphone and connect you with a therapist via text when they think you need it.
Ogilvy, for example, used Companion MX’s passive tracking of Katie’s phone to help her find ways of managing her depression. The independent record of her habits helped Katie realize the extent to which she’d been withdrawing on the weekends. Using the app, Ogilvy coached her on sleeping and eating patterns, helping her reintroduce regular exercise. “She was able to feel more in control,” Ogilvy says.
Ogilvy says this kind of personalized, data-driven health care is “the wave of the future.” But as digital phenotyping moves from proof-of-concept studies and clinical research to Silicon Valley startups, there are still big questions about its potential effects. At a time when people are growing cautious of how their personal information is used, Marci acknowledges, “Trust has been eroded. We have a challenge to convince people that we take security seriously.”
As new companies increasingly target sensitive medical information, ethical questions have arisen about who owns and profits from health care data, and what regulations should be expanded to protect privacy and ensure equal access to treatment. There’s a larger philosophical question, too: Is it crazy to be constantly quantifying one’s well-being via a device that may be making us less healthy in the first place?
“It’s wonderful for research, but it’s just such a Wild West out there,” says Mona Sobhani, director of research at the University of Southern California Center for Body Computing, and a cognitive neuroscientist who studies technology trends like digital phenotyping. Sobhani talks about our digital footprints as if they were bread crumbs — the data sprinkled through everyday online interactions are by themselves meaningless, she says, “but in aggregate, you’re looking for relationships.”
“Everything in our lives [is] so tightly linked — our mental and physical health, our environment,” Sobhani says. “What worries me is making sure the right people have the data and are using it in a non-harmful way.”

What’s not in question is the general dissatisfaction with America’s health care. “The biggest secret of medicine is how poorly we understand people’s behaviors,” Onnela says. He explains that although technology has revolutionized certain areas of medicine, such as surgery, it’s still hard to study behavioral and social traits outside of the lab. “So we’re trying to bring the technology — the ability to measure — to where people actually live and experience their lives.”
That process is made easier by the fact that most Americans now rely on technology capable of monitoring them in uncountable ways. Transistors were invented to control electrical signals in 1947; smartphones and computers would be impossible without them. Until recently, the number of transistors that could fit on a chip doubled approximately every two years, enabling smaller, cheaper digital sensors — and the era of big data. These changes are now so seamlessly woven into daily life that it’s hard to believe the iPhone is only 12 years old. Eighty-one percent of American adults now own a smartphone; as of 2014, there were more mobile phone subscriptions than people on the planet. For medical researchers, the ability to track behavior at this global scale, in natural, everyday environments, is unprecedented. For consumers, it’s also led to a revolution in understanding — or at least quantifying — our own behavior; apps that monitor everything from steps to sleep to period cycles have become incredibly popular.
Digital phenotyping can be broken into three general categories. Some researchers and companies are working on the ability to screen and diagnose patients. Since 2017, for example, Facebook has been analyzing posts to assess suicide risk, alerting law enforcement to those it deems at risk of imminent harm. Others are working on symptom monitoring; a recent study tracking schizophrenia patients’ cellphones found that their mobility patterns and social behavior changed before relapses. But the most crowded space is focused on interventions, like turning therapy into an app you can access from your phone. “Uniting all of those things is that we don’t really have an objective way of measuring mental health,” says Kit Huckvale, a research fellow in health informatics at the University of New South Wales in Sydney, who has extensively studied digital phenotyping.
Digital phenotyping might provide a new way to make that kind of measurement. For instance, Tom Insel, co-founder of Mindstrong and former director of the National Institute of Mental Health, and California’s “mental health czar,” explains that it’s possible for a computer to analyze how someone is feeling just from their quality of voice or their word choice. “It’s really important for people to understand that acquiring the data is the easy part,” Insel says. The challenge is to analyze the data in meaningful ways: parsing which signals actually matter and not “over-fitting,” or predicting connections between behaviors and illnesses where there are none.
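A toy example makes the over-fitting risk concrete: given enough random "behavioral" features, a flexible model will fit a diagnosis label almost perfectly on the data it was trained on while doing no better than chance on held-out data. A minimal sketch, assuming scikit-learn and entirely synthetic data:

```python
# Over-fitting illustration: the features are pure noise, so any "connections"
# the model finds in training cannot generalize.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 50 synthetic "behavioral" features
y = rng.integers(0, 2, size=200)      # diagnosis labels unrelated to X

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"train accuracy: {model.score(X_tr, y_tr):.2f}")  # close to 1.0
print(f"test accuracy:  {model.score(X_te, y_te):.2f}")  # close to 0.5, i.e., chance
```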
The implications may be far-reaching — for example, helping providers understand when patients leave a doctor’s office still confused. Take, for instance, a study by researchers from the University of Pennsylvania and two other institutions, which got permission from participants to combine their search histories with their medical records. One patient, after being told they had a “walnut-sized fibrous tumor,” went home and searched “How big is a walnut?” and “What is a fibrous tumor?” Overall, the study found that health-related searches doubled the week before patients visited an emergency department.
These new tools, Onnela says, “deployed at a population level,” have the potential to combine precision medicine — the trend of tailoring treatment to an individual, considering their genetics and lifestyle — with public health. Onnela is one of the researchers working on the new Apple Women’s Health Study, announced in November 2019. It will track participants’ period cycles and conduct monthly surveys in order to understand the impact of women’s behaviors and habits on reproductive health. Onnela says it has the potential to become the largest and longest-running study on women’s health. “The scale is unprecedented.” (Apple is a partner with Vox Media on The Highlight.)
But other than emphasizing that participants give informed consent, Onnela said he was legally unable to share further details about the study’s design or goals. This kind of insistent secrecy is not unusual for Apple or the tech world, but it’s a larger problem with digital phenotyping: As health care applications emerge, health data will move from clinical settings to for-profit companies, and the systems that collect and distribute this data aren’t always — or even often — transparent about how it’s being used.
Consider Google’s recent interest in health care: Last fall, Google announced a $2.1 billion deal to acquire activity- and sleep-tracking device manufacturer Fitbit. Google also secretly launched a partnership with Ascension, a nonprofit health system, to store and analyze 50 million Americans’ medical records without their knowledge or consent. “As a data scientist,” says Sobhani, “that is a beautiful data set.” Even just knowing your ZIP code allows predictions of your life expectancy. “But all of the information from all of your behaviors from your Google account, with your medical record, tied to your Fitbit? Oh my God, I could predict so many things.” A whistleblower told the Guardian that the data is not being anonymized and is being transferred with personal details like names, medical history, lab results, and diagnoses.
Though Google insists that it will only use the data to “support improvements in clinical quality and patient safety,” what it will learn from the analysis has huge value. While your Fitbit might indicate behaviors that suggest a risk of cardiovascular disease, for machine learning to work, it has to know if you actually end up with a diagnosis. Thanks to Ascension, Google can now train its AI to comb through the data and find these connections.
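That point about labels is essentially what supervised learning requires: wearable signals alone carry no outcome, so a model cannot learn until each person's record supplies a diagnosis. A minimal, hypothetical sketch of the linkage step, with invented column names and toy numbers (nothing here reflects Google's or Ascension's actual systems):

```python
# Why linked medical records matter for training: the wearable features only
# become trainable once each person has an outcome label from their record.
import pandas as pd
from sklearn.linear_model import LogisticRegression

wearable = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "resting_hr": [58, 74, 81, 66],          # average resting heart rate
    "daily_steps": [9200, 3100, 2400, 7800],
})
records = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "cvd_diagnosis": [0, 1, 1, 0],           # outcome label from the medical record
})

training = wearable.merge(records, on="person_id")   # the linkage step
model = LogisticRegression().fit(
    training[["resting_hr", "daily_steps"]], training["cvd_diagnosis"])

new_person = pd.DataFrame({"resting_hr": [70], "daily_steps": [5000]})
print(model.predict(new_person))             # predicted risk class for a new person
```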
Google has already expressed interest in being able to infer medical conditions; in 2018, the company filed a patent for a smart device that makes inferences based on what it sees in your home, sorting users into categories. It’s not a stretch to imagine that health insurers could someday raise your premiums based on your cooking habits or how much you drink. If you don’t want to share such private information, your decision may result in higher insurance rates — as Fitbit, which has sold more than 100 million devices, has already proved in lucrative deals with insurance companies. That’s one reason why Sobhani calls tech companies’ move into health “alarming.”
“These wearables are collecting health data, make no mistake, and they’re under no obligation from HIPAA or anything else to protect it,” she says. Generally, in the US, there are few regulations protecting consumers’ data. Federal laws such as HIPAA are much narrower in scope than commonly believed — they currently apply only to health care companies and systems. HIPAA also allows health care companies to share patient data with third parties, as long as the data is being used to help “carry out its health care functions,” as Google’s Ascension deal does. Nor is it just Google; Facebook is rolling out a new Preventive Health tool this month that will give users personalized reminders about health care and local places to get a screening or a flu shot. (Facebook says the data will not be shared or sold to third parties.)
“There are a number of gray areas” in what should be considered health data, says Nicole Martinez-Martin, a professor of biomedical ethics at Stanford University. It’s not just data itself that needs protecting, but also what can be gleaned from it. “What kind of inferences can be made from the data that goes beyond what was in that record itself?”

Illustration of a heart with digital lines passing through it, by Efi Chalikopoulou

This isn’t a hypothetical future concern; 23andMe has just sold its users’ DNA data to a pharmaceutical company, which plans to use it for drug discovery. Spurred by the rise of genomic testing, Sens. Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) have drafted a “Protecting Personal Health Data Act.” But even if it passes, the bill has an exception for data “derived solely from other information that is not personal health data” — a gaping loophole, as data collected from phones or internet searches is not yet considered health data. Meanwhile, there are plenty of incentives for companies to pursue health data. The digital health care market is expected to reach $206 billion by next year.

Corporations already have access to all kinds of mundane online data, including browsing data and search terms that can provide sensitive health information. Ginni Manning, a playwright in the UK who tried unsuccessfully for seven years to have a child, experienced how this data can be used. “When [a pregnancy] starts to go wrong, you’re desperate for someone, somewhere, to say it’s fine,” she says, recalling her long strings of heartbroken symptom searches. “And of course, it’s not.” It was a haunting, horrible time, made worse by the fact that when she logged into Facebook, all of the ads had to do with babies and getting pregnant. Many of her friends lived abroad, and she needed their support, she explains, so she remained on the social networking site. After each miscarriage, Manning tried going into the ad choices and blocking pregnancy-related terms, but it didn’t seem to make a difference. Manning says she wishes there were guidelines about how online ads are targeted, or ways to opt out.
Currently, there aren’t. It’s common knowledge that Google and Facebook use your internet history to sort you into ad brackets. (Facebook also tracks your movements, even if you turn off location permissions.) But more than 70 percent of smartphone apps also sell your data — which can be used not only to create a profile of your buying and spending habits, but also to expose things like whose house you spent the night at or what doctor’s office you’re visiting, revealing personal details of your behavior thousands of times a day — sometimes even when they explicitly say they don’t. British journalist Talia Shadwell found that after she forgot to enter a period in a period-tracking app, ads for baby clothes and prenatal vitamins started haunting her around the internet, “a bit like an overbearing relative,” she wrote in the Daily Mail. When Shadwell checked the app privacy settings, she found it promised not to share details “entered manually,” but the language “cleverly avoided ruling out sharing information about aggregated data.”
Given what we now know data analysis can reveal, all online data can be considered health data. Say you’re one of the 72 percent of adults who use the internet to research health concerns. Tim Libert, a computer scientist at the CyLab Security and Privacy Institute at Carnegie Mellon University who investigates privacy, explains that if you click on a web page — for instance, the Centers for Disease Control’s page on HIV — third-party requests are sent to at least four servers, including Google Analytics. These companies can correlate your visit to the HIV page with other activity from your IP address, correlations that can reveal your identity — and that you’re specifically interested in HIV. This isn’t a one-off example. Libert has written a program that tracks cookies, the code embedded in a browser that allows companies to follow individuals around the internet. In 2015, he analyzed 80,000 pages relating to common diseases and found that more than 91 percent tattled, contacting third parties in the US.
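Libert's own program drives an instrumented browser and logs every network request a page triggers; a much cruder sketch of the same idea is to fetch a page's HTML and list the outside domains it asks your browser to contact. The example URL and the "same site" heuristic below are illustrative assumptions:

```python
# Crude illustration of a third-party audit: list external hosts named in a
# page's HTML. Real audits (like Libert's) use a full browser, because many
# trackers are injected by scripts after the page loads.
import re
from urllib.parse import urlparse
from urllib.request import urlopen

def same_site(host: str, ref: str) -> bool:
    # Rough heuristic: hosts sharing their last two labels count as one site.
    return host.split(".")[-2:] == ref.split(".")[-2:]

def third_party_hosts(url: str) -> set:
    first_party = urlparse(url).hostname
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    refs = re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html)
    hosts = {urlparse(r).hostname for r in refs}
    return {h for h in hosts if h and not same_site(h, first_party)}

print(third_party_hosts("https://www.cdc.gov/hiv/"))  # example health page
```

Even this static view typically surfaces analytics and ad-network domains; the instrumented-browser approach usually finds far more.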
While this kind of tracking may be useful to medical researchers, Martinez-Martin, the Stanford biomedical ethics professor, says the risks extend far beyond commercialization. Insurance companies could base rates on your data, reacting to conditions that you might not even be aware of. Or, she asks, “Could educational institutions access some of this behavioral health information in deciding things like admission?” Employers already place job ads targeted to certain online profiles; it’s not a stretch to imagine them being able to filter potential candidates based on what apps you use and what that tells them about your maternity status. If you start excluding people from being able to even see certain opportunities, Martinez-Martin asks, “Are there repercussions for people that they may not be aware of because their data is out there?”
Luke Stark, a researcher at Microsoft studying the social, emotional, and ethical aspects of digital technologies, warns that “prognostic scores and algorithms can exert their own effects on care” — meaning what your phone might learn about you can essentially become self-fulfilling prophecies. (Stark notes his views don’t reflect the company’s.) A study out of the University of Michigan’s Stroke Program found that an incorrect calculation of stroke patients’ chances changed their odds of survival, as well as what kind of care they received. If someone is alerted that they might be at higher risk of suicide by a tracking app, for example, Martinez-Martin says, “There’s not enough research about what the psychological ramifications might be.”
Sobhani believes sensitivity will be required in the future. “If we don’t know what this data is going to tell us, what it’s going to suggest about future behaviors or health — it’s not fair to be labeled with it.”

The next phases of digital phenotyping will likely be even more sweeping, pulling from even more sources of data. “It would be very useful, especially in central nervous system disorders, to couple digital phenotyping data with genomic data,” says Onnela. “That seems like a very obvious thing to do.” This potentially means your genetic risk for disease could be compiled with your daily activities and environmental interactions into one database — which would be an incredibly powerful research tool, but one that also raises questions. Stark, for his part, thinks that technology will also be increasingly trained to understand how we’re feeling from both verbal and facial cues. But, he adds, “The more these very thorny problems are embedded in the way these technologies work, the more they’re going to start impacting ordinary people.”
“To see what’s in front of one’s nose,” George Orwell wrote, “is a struggle.” A recent poll by the Pew Research Center found that a majority of Americans report being concerned about the way their data is being used by companies. The same study found that 75 percent — with broad bipartisan support — think there should be more government regulation of how consumer digital data is used.
Researchers aren’t blind to these concerns. Onnela has taken steps to increase privacy in his studies, like adding noise to try to anonymize data, and using open-source code for additional transparency. Much of the clinical research on digital phenotyping has, in fact, taken privacy extremely seriously. But as this research starts to inform commercial products, a lack of regulations and the push for profit pose lots of questions for the future of digital phenotyping.
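One standard way to "add noise" so that aggregate results don't expose any single participant is the Laplace mechanism from differential privacy; whether Onnela's studies use exactly this technique isn't stated, so treat the sketch below as illustrative:

```python
# Illustrative Laplace-noise sketch (an assumption, not necessarily Onnela's
# method): perturb an aggregate count so one person's data changes it little.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Add Laplace noise calibrated to a sensitivity of 1 (one person can
    change the count by at most 1). Smaller epsilon means stronger privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: report how many participants left home fewer than 2 days this week.
print(round(noisy_count(true_count=37, epsilon=0.5), 1))
```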
“Every technology has value, starting from the hammer,” Onnela says. “There are always going to be bad actors out there. The one thing we can all do is educate ourselves about the capabilities of the technology — but also about its inherent limitations.”
