A Blog by Jonathan Low

 

May 5, 2022

The Reason the Pandemic Made Algorithms Go Haywire

Algorithms work best at pattern recognition - and patterns in health care, finance, and consumer behavior were all upended by the pandemic.

In addition, outcomes changed as both health and economic circumstances were disrupted by the pandemic. The lesson is that society is not yet ready to turn decision-making over to algorithms without human oversight.  JL

Ravi Parikh and Amol Navathe report in Re/code:

Since COVID upended our lives, more algorithms have misfired, harming millions of Americans and widening existing financial and health disparities facing marginalized groups. Often it was because COVID changed life in a way that made the algorithms malfunction. An algorithm used by dozens of US hospitals (misinterpreted) many of the variables that went into it - oxygen levels, age, comorbid conditions - (because they) were completely different during the pandemic. (And) one-third of banks reported their predictive algorithms became more inaccurate during the pandemic.

Algorithms have always had some trouble getting things right—hence the fact that ads often follow you around the internet for something you’ve already purchased.

But since COVID upended our lives, more of these algorithms have misfired, harming millions of Americans and widening existing financial and health disparities facing marginalized groups. At times, this was because we humans weren’t using the algorithms correctly. More often it was because COVID changed life in a way that made the algorithms malfunction.

Take, for instance, an algorithm used by dozens of hospitals in the U.S. to identify patients with sepsis—a life-threatening consequence of infection. It was supposed to help doctors speed up transfer to the intensive care unit. But starting in the spring of 2020, the patients who showed up to the hospital suddenly changed due to COVID. Many of the variables that went into the algorithm—oxygen levels, age, comorbid conditions—were completely different during the pandemic. So the algorithm couldn’t effectively discern sicker from healthier patients, and consequently it flagged more than twice as many patients as “sick” even though hospital capacity was 35 percent lower than normal. The result was presumably more instances of doctors and nurses being summoned to the patient bedside. It’s possible all of these alerts were necessary – after all, more patients were sick. However, it’s also possible that many of these alerts were false alarms because the types of patients showing up to the hospital were different. Either way, this threatened to overwhelm physicians and hospitals. This “alert overload” was discovered months into the pandemic and led the University of Michigan health system to shut down its use of the algorithm.

 

We saw a similar issue first-hand in the hospital where we both work: We recently published a study examining a health care machine-learning algorithm used to identify the sickest patients with cancer. Flagging them gives clinicians an opportunity to talk to them about their preferences for end-of-life care. Our data showed that, during the pandemic, this algorithm was 30 percent less likely to correctly identify a sick patient who needed such a timely conversation. Missed end-of-life conversations often translate to unnecessary treatments, hospitalizations, and worse quality of life for individuals who would have instead benefited from early hospice care.

In another example, American Express designed a complex AI algorithm to detect fraud that performed 30 percent better than its legacy algorithms. However, starting in March 2020, consumers made massive changes in spending patterns due to the pandemic, including larger purchases, more online orders, and many new customers showing up at department stores to buy items like toilet paper and hand sanitizer. Luckily, Amex did some pre-rollout testing and found that this sea change would have triggered an inordinate number of fraud alerts, so the company delayed the algorithm's rollout by nearly a year.

The banking sector was the biggest investor in AI prior to the pandemic, in part because AI may help set more accurate mortgage and interest rates. However, patterns of in-person and online banking changed dramatically during the pandemic. In a Bank of England survey, more than one-third of banks reported that their predictive algorithms became more inaccurate during the pandemic. This has translated into an expected decrease in the pace of AI investment by banks.

How is it that COVID infected our algorithms? The answers are subtle, but offer important lessons since the COVID era will likely impact algorithms for years to come.

 

First, algorithms do best at pattern recognition. They are usually designed using years of historical data to predict outcomes in the future. However, nearly every input into AI algorithms changed during COVID. In health care, for example, cancer screenings, doctor’s visits, and elective surgeries declined dramatically and still haven’t fully recovered. A pre-COVID algorithm may have predicted that individuals who didn’t see the doctor too often were healthy. But during COVID, sicker patients often avoided the hospital or doctor’s office. Sometimes they got care delivered to them in their homes by outside entities. More often they just didn’t receive care at all. Because of this decreased use of health care services, sicker patients did not have as much data to contribute to predictive algorithms. And thus, algorithms during the pandemic likely under-identified these sicker patients.
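To make the mechanism concrete, here is a toy sketch in Python. The data, variables, and model are all synthetic and made up for illustration; this is not any hospital's actual system. A classifier that learned the pre-COVID pattern "frequent visits signal sickness" loses discriminating power once sicker patients stop showing up:

```python
# Toy illustration of input drift: synthetic data, not a real clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_patients(n, visit_signal):
    """Simulate patients; 'visit_signal' controls how strongly sickness
    drives visit counts (high pre-COVID, near zero during the pandemic)."""
    sick = rng.binomial(1, 0.2, n)
    visits = rng.poisson(2 + visit_signal * sick, n)
    oxygen = rng.normal(97 - 1.5 * sick, 3, n)  # weak secondary signal
    return np.column_stack([visits, oxygen]), sick

# Train on pre-COVID-like data, where sicker patients visited far more.
X_pre, y_pre = make_patients(5000, visit_signal=6)
model = LogisticRegression().fit(X_pre, y_pre)

# During the pandemic, sick patients avoid care: the visit pattern vanishes.
X_covid, y_covid = make_patients(5000, visit_signal=0)

print("AUC on pre-COVID-like data:",
      round(roc_auc_score(y_pre, model.predict_proba(X_pre)[:, 1]), 2))
print("AUC on pandemic-like data: ",
      round(roc_auc_score(y_covid, model.predict_proba(X_covid)[:, 1]), 2))
# The second number is markedly lower: the learned pattern no longer holds.
```

The model is not "wrong" in any obvious way; the world it was trained on simply stopped existing, which is why the failure went unnoticed for months in the hospital examples above.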

Second, the outcomes that algorithms predict changed dramatically during COVID. Take, for example, an algorithm that predicts a patient’s risk of dying. While the algorithm may have been accurate at predicting death prior to COVID, the rate of death across the country increased by 40 percent between late 2019 and late 2020. The underlying relationships between risk factors and outcomes changed dramatically. So, algorithms can malfunction when the frequency of an outcome like death changes so much in such a short amount of time.
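The arithmetic here is worth spelling out. One standard correction for a shifted base rate (our illustration, not something the article describes any hospital doing) rescales a model's predicted odds by the ratio of new to old prior odds:

```python
# Prior-shift correction: rescale predicted odds when the outcome's
# base rate changes. The numbers below are illustrative, not from the article.
def adjust_for_new_base_rate(p, old_rate, new_rate):
    """Map a probability calibrated to old_rate onto new_rate
    by multiplying its odds by the ratio of prior odds."""
    odds = p / (1 - p)
    prior_shift = (new_rate / (1 - new_rate)) / (old_rate / (1 - old_rate))
    new_odds = odds * prior_shift
    return new_odds / (1 + new_odds)

# Suppose a mortality model was calibrated when the death rate was 2.0%,
# and the rate then rises 40 percent, to 2.8%. A patient scored at 10%:
print(round(adjust_for_new_base_rate(0.10, old_rate=0.020, new_rate=0.028), 3))
# ~0.136: without correction, the model quietly understates everyone's risk.
```

Without some correction like this, or full retraining, every score the model emits is miscalibrated even when the patients themselves look the same as before.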

Third, COVID’s impact on health care and spending habits was particularly stark for marginalized populations, and that has led to algorithms being more likely to misfire for poor and nonwhite individuals. Prior to COVID, nonwhite and low-income Americans were significantly more likely to pay cash in a store rather than shop online. Fast-forward to the pandemic, when all segments of the U.S. population shifted from brick-and-mortar stores to online purchasing. A fraud detection algorithm may have been more likely to flag purchases from low-income individuals and minorities who seemingly suddenly changed their purchasing patterns toward more online shopping.
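A stylized example shows why (the rule and thresholds are hypothetical, not any bank's actual system): a fraud check keyed to deviation from a customer's own history fires hardest for exactly the customers whose pre-pandemic baseline was in-store cash.

```python
# Hypothetical deviation-based fraud flag, for illustration only.
def flags_purchase(historical_online_share, recent_online_share, tolerance=0.3):
    """Flag when a customer's recent share of online purchases deviates
    sharply from their own pre-pandemic baseline."""
    return abs(recent_online_share - historical_online_share) > tolerance

# During the pandemic, nearly everyone's online share jumps to ~0.9.
for label, baseline in [("habitually-online shopper", 0.8),
                        ("habitually-cash shopper", 0.1)]:
    print(label, "flagged:", flags_purchase(baseline, 0.9))
# Only the cash-first shopper (per the article, disproportionately
# low-income and nonwhite pre-COVID) trips the flag, despite behaving
# exactly like everyone else.
```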

 

The pandemic has compromised our algorithms. But there are ways to fix this problem—and prevent it from happening again.

First, humans should exercise greater oversight over AI algorithms—at least for the time being. Any organization that uses pre-COVID AI algorithms should double-check their performance, particularly for how they are affecting marginalized groups like Black Americans and other minorities.
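In practice, that double-check can be as simple as scoring recent data and comparing performance by subgroup. A minimal sketch, assuming a hypothetical file of post-2020 predictions with score, outcome, and demographic-group columns:

```python
# Minimal subgroup audit sketch; the file and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("scored_patients_2021.csv")  # columns: score, outcome, group

print("overall AUC:", round(roc_auc_score(df["outcome"], df["score"]), 3))
for group, sub in df.groupby("group"):
    # A markedly lower AUC or an inflated alert rate within one subgroup
    # is the kind of red flag that should trigger retraining.
    print(group,
          "AUC:", round(roc_auc_score(sub["outcome"], sub["score"]), 3),
          "alert rate:", round((sub["score"] > 0.5).mean(), 3))
```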

Second, if these checks reveal any red flags, organizations should redevelop (or “retrain”) their algorithms using data from the pandemic era. This is particularly relevant for algorithms that use inputs that are still affected by COVID.

Third, we need to develop algorithms that are robust to future disruption. Novel AI techniques may be able to “self-learn” during different crises. During the pandemic, a reinforcement learning algorithm used by border control agencies in Greece successfully limited the influx of asymptomatic travelers infected with COVID-19. The algorithm was able to adjust to different phases of the pandemic, with four times greater accuracy than random surveillance testing at identifying asymptomatic carriers. Carefully designed AI may not be vulnerable to the same problems that we are currently seeing due to COVID.
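The deployed Greek system was considerably more sophisticated than anything we can show here, but the core idea of self-adjusting allocation can be sketched with a simple epsilon-greedy bandit: mostly test the traveler categories with the highest observed positivity, while reserving some tests for exploration so the policy can track the epidemic as it moves. Everything below (categories, rates, parameters) is made up for illustration.

```python
# Epsilon-greedy bandit sketch of self-adjusting test allocation.
import random

arms = ["route_A", "route_B", "route_C"]  # hypothetical traveler categories
tests = {a: 0 for a in arms}              # tests allocated so far
value = {a: 0.0 for a in arms}            # decayed positivity estimate

def pick_arm(eps=0.1):
    """Usually test the category with the highest estimated positivity,
    but explore at random 10% of the time so the policy keeps learning."""
    if random.random() < eps:
        return random.choice(arms)
    return max(arms, key=lambda a: value[a])

def record_result(arm, positive, decay=0.05):
    """Update an exponentially decayed average: recent results count more,
    so the estimate tracks a changing epidemic instead of all history."""
    tests[arm] += 1
    value[arm] += decay * (int(positive) - value[arm])

# Toy run: true positivity shifts mid-stream, and the policy follows it.
rates = {"route_A": 0.02, "route_B": 0.08, "route_C": 0.01}
for t in range(2000):
    if t == 1000:                         # the epidemic moves between routes
        rates["route_A"], rates["route_B"] = 0.10, 0.01
    arm = pick_arm()
    record_result(arm, random.random() < rates[arm])

print(tests)  # testing concentrates on whichever route is currently hot
```

The exploration term is what buys robustness: even a category that looks clean today keeps getting sampled, so a shift like the ones described above gets detected rather than baked in.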

Algorithms can improve efficiency in a variety of industries. But the pandemic has provided several examples of AI algorithms going awry without people realizing it. This is a serendipitous opportunity to develop and test ways to reduce vulnerability to similar “shocks” in the future. That way, the next pandemic, economic downturn, or other global disruption won’t incapacitate our algorithms along with it.
