A Blog by Jonathan Low

 

Sep 23, 2019

The Reason No One Wants Tech Companies Involved In Their Healthcare

Virtually no one believes these companies have their best interests at heart. But that will not stop them from trying to gain access to what has been - so far - legally protected data. JL

Kirsten Ostherr reports in Slate:

Tech companies are betting that they can monetize our digital exhaust, even if it’s not directly health-related, through consumer data and artificial intelligence to create predictive, personalized health analytics. When (tech) combines non-health-related consumer data with medical data, it creates digital health profiles with no external validation of accuracy, without consumers’ consent or ability to opt out. All that data may not improve health. (And) it will increase surveillance, digital profiling, and manipulation in health care encounters. There is a lot of money to be made, with $3.65 trillion spent on health care in the US in 2018.
Could your Netflix viewing habits predict that you will develop inflammatory bowel disease? Might your use of religious language in Facebook posts signal that you have diabetes? Could Amazon’s Alexa start telling you when you are getting sick and offer to sell you medicines?
All of the big technology companies have been moving into health care recently, making investments that mobilize their vast troves of consumer data. Amazon is selling software that can mine patient records and is expanding Alexa's health and wellness capabilities. Google is developing A.I.-powered voice recognition software called "Medical Digital Assist" to help doctors dictate medical records. Alphabet, Google's parent company, has partnered Verily, its life sciences arm, with Walgreens to monitor patient medication "adherence." Apple has been steadily developing health and medical apps for its smartwatches that can integrate personal health tracking data with electronic medical record systems at partner hospitals. Microsoft is developing A.I. software for medical records through the Azure for Health cloud. Even Uber and Lyft are getting into the game with "non-emergency medical transport."
These companies are betting that they can monetize our digital exhaust, even if it’s not directly health-related, through a new approach that uses consumer data and artificial intelligence to create predictive, personalized health analytics. But all that data may not actually improve our health. Instead, it will make us vulnerable to increased surveillance, digital profiling, and manipulation in the most intimate of settings: our health care encounters.
There is a lot of money to be made in this enormous market, with $3.65 trillion spent on health care in the United States in 2018. Both patients and providers are dissatisfied with the incumbents, and tech companies claim to have a competitive edge because they understand consumer behavior and have the data to prove it. The idea is that big tech datasets may be uniquely capable of revealing new insights about how human behavior affects health because consumers willingly spend so much time revealing themselves on those platforms. For businesses in the health care industry, this sounds like a tantalizing opportunity to become more "patient-centered" by predicting individualized "risk scores" to guide personalized care (while also increasing revenue and lowering costs). But for patients, the benefits of handing over so much personal data are less clear.
The kinds of data that tech companies mine—about our purchases, our likes, what brings us pleasure, and what makes us rant—bear little resemblance to the information that medical doctors and researchers have traditionally used to diagnose disease. Unlike a blood glucose reading or an EKG, data from “metaclinical” sites like Facebook or Amazon capture what have been called the “social determinants of health.” This term is defined by the World Health Organization as “the conditions in which people are born, grow, work, live, and age, and the wider set of forces and systems shaping the conditions of daily life.” Social determinants have a stronger influence on health than clinical care or genetics. But they were not discussed much in medical settings—or in technology circles—until recently. That’s because it’s difficult to quantify, say, how lonely a person might be, and it’s even harder to develop medical or pharmacological interventions to help get them out of the house more often. So social determinants were left to medicine’s poorly funded cousins, public health and social work, to address.
But new incentives arising out of the 2008 financial collapse and the 2010 passage of Obamacare turned hospitals’ attention to social determinants of health. The Affordable Care Act established “value-based care” as the new benchmark for provider payment based on quality (i.e., patient outcomes) rather than quantity of care (goodbye fee-for-service). In order to get paid, health care providers would have to look beyond the walls of the clinic and consider how factors like neighborhood, education, food, income, discrimination, and stress might affect a patient’s health outcomes. For advocates of health care as a human right, this seemed like a promising development. For the data analytics industry, it seemed like a gold mine.
Enter artificial intelligence and machine learning for health care, the largest market for investment in the emerging A.I./ML business sector. Companies like Jvion combine data on “thousands of socioeconomic and behavioral factors” with an individual patient’s clinical history to predict and prevent illness. If you recently started buying sleeping pills, or paid a divorce attorney, or got a lot of speeding tickets, your “risk profile” would factor that in. Hospitals such as the Mayo Clinic, Intermountain Healthcare, and the Cleveland Clinic are purchasing access to “cognitive technologies” from Jvion, IBM, Google’s DeepMind Health, and others that promise to provide “prescriptive analytics for preventable harm,” manage patient risk trajectories, and recommend personalized interventions, such as monitoring emergency room “high utilizers” when they are not in the hospital to prevent them from returning to the ER, predicting opioid addiction, preventing heart attacks and stroke, and anticipating mental health–related admissions.
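To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a risk model works: a standard classifier is trained on a mix of clinical measurements and consumer-behavior signals, then emits a per-patient "risk score." The feature names, the data, and the model choice are all hypothetical assumptions for illustration, not Jvion's or any vendor's actual system.

```python
# Illustrative sketch only: a toy "risk score" model mixing clinical and
# consumer-behavior features. All feature names and data are fabricated;
# this is not any vendor's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: blood_glucose, systolic_bp (clinical);
#          sleeping_pill_purchases, speeding_tickets (from data brokers)
X_train = np.array([
    [95, 120, 0, 0],
    [140, 150, 4, 2],
    [110, 130, 1, 0],
    [160, 160, 6, 3],
    [100, 125, 0, 1],
    [150, 155, 5, 2],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = had an adverse health event

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# A new patient: unremarkable vitals, but recent pill purchases and tickets.
new_patient = np.array([[105, 128, 3, 2]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted risk score: {risk:.2f}")
```

Even in this toy version, the two consumer-behavior columns can shift the final score as readily as the clinical ones, and nothing in the pipeline tells the patient which inputs did the shifting.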
This almost sounds great, like futuristic patient-centered care where the algorithms sense and prevent impending illness before you ever actually get sick. But when social determinants of health include categories like race, housing, and financial history, the data are neither neutral nor objective, and they need to be interpreted carefully. Artificial intelligence/machine learning systems for health care that factor in race, for example, must heed the difference between “race” as a spurious medical classification and “racial discrimination,” a very real factor shaping health outcomes through structural and individual harms. Data brokers like LexisNexis and Acxiom already sell social determinants data to health care providers, including information on criminal records, online purchasing histories, retail loyalty programs, voter registration data, and more. Soon, your doctor may see your every indulgence at TGI Fridays displayed alongside your weight and blood pressure.
These same brokers played a critical role in targeting ads to Facebook users, but now Facebook appears to be looking to bring its own trove of social determinants of health data to health care. An opinion piece published in the Journal of the American Medical Association in January maps out Facebook's vision for deploying its data mining business in health care. Its lead author, Freddy Abnousi, a medical doctor and Facebook's head of health research, and his collaborators argued that health researchers need to pay more attention to social determinants of health data from social networks and combine that information with health records to improve patient outcomes.
The authors do not explicitly pitch their vision as a Facebook data-mining endeavor, but they imagine a “granular tech-influenced definition” of social determinants of health that includes “numbers of online friends” as well as “complex social biomarkers, such as timing, frequency, content, and patterns of posts and degree of integration with online communities” drawn from “millions of users.” They urge readers to imagine the “richness of possible connections that can be explored with machine learning and other evolving ‘big data’ methodologies,” providing examples of topics that Facebook has already begun to explore: suicide prevention, opioid addiction, and cardiovascular health.
Advocates of patient data privacy rights are not pleased. Facebook's earlier efforts to combine social and medical data without patients' consent (also led by Abnousi) became public at the same time as the Cambridge Analytica breach, leading to negative public perception and the halting of the project. In responses to the JAMA piece, patient communities were also critical, pointing to the erosion of trust these users feel toward the platform. Reactions to the Federal Trade Commission's recent privacy ruling against Facebook, and specifically to that ruling's failure to provide special protections for health data, have further undermined Facebook's credibility and led to a new complaint filed by the Electronic Privacy Information Center.
Facebook is not the only big tech player being criticized for dabbling in this territory. A complaint filed in June against Google and the University of Chicago Medical Center alleges that the medical center shared identifiable data from the electronic health records of thousands of patients who were treated at the hospital between 2009 and 2016. The complaint alleges that those records contained time stamps that, when combined with Google’s access to geolocation and other types of data, could easily reidentify a patient. Google researchers had already publicized this work in 2018, describing their methods for training their artificial intelligence/machine learning on “the entire [electronic health record], including free-text notes,” providing further support for the plaintiff’s complaint that privacy rules were violated when Google obtained these records.
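The reidentification risk the complaint describes is a classic linkage attack: records stripped of names can still be matched back to individuals by joining on quasi-identifiers such as precise timestamps. A minimal sketch, with entirely fabricated data and hypothetical column names, of how an admission timestamp could be joined against separately held location history:

```python
# Illustrative linkage-attack sketch with made-up data. It shows the general
# technique alleged in the complaint, not Google's actual systems or data.
import pandas as pd

# "De-identified" hospital records: no names, but precise timestamps remain.
ehr = pd.DataFrame({
    "record_id": ["r1", "r2"],
    "admitted_at": pd.to_datetime(["2015-03-02 14:37", "2015-03-05 09:12"]),
    "diagnosis": ["diabetes", "depression"],
})

# Separately held location pings near the hospital (hypothetical).
pings = pd.DataFrame({
    "user": ["alice@example.com", "bob@example.com"],
    "seen_at": pd.to_datetime(["2015-03-02 14:35", "2015-03-05 09:10"]),
    "place": ["UChicago Medical Center", "UChicago Medical Center"],
})

# Link each record to the user whose ping falls within minutes of admission.
linked = pd.merge_asof(
    ehr.sort_values("admitted_at"),
    pings.sort_values("seen_at"),
    left_on="admitted_at",
    right_on="seen_at",
    tolerance=pd.Timedelta("5min"),
    direction="nearest",
)
print(linked[["user", "diagnosis"]])
```

A join this simple is enough to attach a diagnosis to a name, which is why "we removed the identifiers" is such a weak privacy guarantee once timestamps and location data sit in the same hands.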
When Google or Facebook combines its troves of non-health-related consumer data with highly sensitive medical data, it creates digital health profiles with no external validation of accuracy, without consumers’ consent or ability to opt out. As tech companies move into health care, these digital profiles will become part of our medical records, with the potential to shape the care we receive, the resources we can access, and the bill we pay at the end. A keen human interpreter of these profiles might provide nuanced, meaningful context, for instance, of a grocery shopping history full of processed food purchases in a low-income neighborhood. But an artificial intelligence program might simply classify those data as evidence of poor adherence to nutrition guidelines, leading to an increased risk score with associated penalties.
Patients would be wise to ask their doctors directly about what kinds of data mining and digital profiling their hospital is using to make treatment decisions. Ask if you can see your own profile. And don’t be surprised if your doctor asks in return how you enjoyed those jalapeño poppers you ordered from Grubhub for your latest binge session of Diagnosis on Netflix.
