A Blog by Jonathan Low


Jan 12, 2021

How Better Algorithms Could Improve the Covid Vaccine Rollout

Algorithms are created by humans, reflecting all of their foibles, biases, inclinations and blind spots.

Proceed accordingly. JL

Ravi Parikh and Amol Navathe report in the New York Times:

Those using algorithms for vaccine distribution should make sure the algorithm's outcome is what we actually care about; take the time to do a dry run, which allows developers, leaders and the public to sanity-check the algorithm; use algorithms only when needed; publicize algorithm inputs and rules in advance of the rollout, to provide not only transparency but also time for feedback; and monitor and update algorithms after deployment to evaluate whether they are missing populations, adjusting accordingly. Algorithms are adjuncts to human decision making. They shouldn't be making decisions themselves.

Stanford University’s health system has come under fire for vaccinating executives who worked from home ahead of nearly all its medical residents and fellows in intensive care units. An internal email explained that the poor prioritization was caused by an algorithm that had gone awry.

This was a dodge. After all, who created the algorithm? Bad planning — not bad algorithms — was responsible for Stanford’s disastrous rollout. But it is critical that we learn from Stanford’s mistakes. As the pool of eligible individuals grows over the next few months while vaccine supplies remain scarce, we will need algorithms to help us make quick decisions about which people or communities to target first.

The Stanford algorithm used simple, objective criteria — a good start. It primarily relied on two factors to determine priority: age and coronavirus infection rates in a staff member's department.
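A two-factor scheme like this amounts to a weighted score over each staff member. The sketch below is illustrative only — the names, fields and weights are invented, not Stanford's actual formula — but it shows how an age-dominated weighting can push a work-from-home executive ahead of an ICU resident:

```python
from dataclasses import dataclass

@dataclass
class StaffMember:
    name: str
    age: int
    dept_infection_rate: float  # recent infection rate in the member's department


def priority_score(member: StaffMember,
                   age_weight: float = 1.0,
                   rate_weight: float = 100.0) -> float:
    """Combine the two factors into one score; higher means vaccinated sooner."""
    return age_weight * member.age + rate_weight * member.dept_infection_rate


staff = [
    StaffMember("home-office executive", 62, 0.01),  # older, near-zero exposure
    StaffMember("ICU resident", 29, 0.20),           # younger, high exposure
]

# With these (hypothetical) weights, the executive scores 63 vs. the
# resident's 49, so the executive is ranked first.
ranked = sorted(staff, key=priority_score, reverse=True)
```

The point is not the specific numbers but the structure: once age dominates the weighting, no department infection rate short of an outbreak can lift a young frontline worker above an older remote employee.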

So what went wrong?

First, by prioritizing age, it focused mostly on a recipient's risk of serious illness or death from the disease while ignoring the risks of transmission. We don't know for sure whether the vaccines stop people from spreading the virus, but the Food and Drug Administration has pointed out that most vaccines that protect against viruses do. An algorithm that properly took this into account would not have downgraded young residents and fellows who work where the risk of infection is the highest.

Additionally, developers and executives failed to sanity-check the algorithm to determine whether its outputs were reasonable. Any clinical leader who looked at the results before rollout would have easily identified the glaring lack of frontline residents.

Everyone using algorithms for vaccine distribution should follow five key principles:

First, make sure the algorithm’s outcome is what we actually care about. Just over a year ago, researchers at the University of California, Berkeley, showed that an algorithm used by a large health insurer to predict a patient’s risk of being hospitalized was racially biased. Because the algorithm focused on health care costs rather than illness severity, African-American patients who were sicker but appeared low cost (because they were on Medicaid) were deprioritized.
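The proxy-outcome failure the Berkeley researchers found can be shown in miniature. The numbers below are invented for illustration: ranking by past spending instead of by illness severity puts the less sick but higher-spending patient first.

```python
# Illustrative sketch of the proxy-outcome problem; all figures are invented.
# Scoring by past health care cost instead of illness severity deprioritizes
# patients whose care was cheap for reasons unrelated to how sick they are.

patients = [
    {"id": "A", "severity": 9, "past_cost": 2_000},   # very sick, low-cost coverage
    {"id": "B", "severity": 4, "past_cost": 12_000},  # less sick, high spending
]

by_cost = sorted(patients, key=lambda p: p["past_cost"], reverse=True)
by_severity = sorted(patients, key=lambda p: p["severity"], reverse=True)

# The two rankings disagree: the cost proxy ranks the less sick patient first,
# while the outcome we actually care about (severity) ranks the sicker one first.
```

This is why the principle is phrased around the *outcome*: the algorithm faithfully optimized the number it was given; the number was simply the wrong one.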

Second, take the time to do a dry run. Doing so will allow algorithm developers, leaders and the public to sanity check the algorithm and avoid unintended catastrophes.
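A dry run can be as simple as ranking a synthetic roster and checking the output against an expectation any clinical leader could state in advance — for example, that frontline staff appear in the first allocation batch. The check below is a hedged sketch with an invented roster and an age-only scorer standing in for a flawed algorithm:

```python
# A minimal dry-run sanity check, assuming a scoring function and a roster of
# dicts with hypothetical "age" and "frontline" fields.

def dry_run_check(roster, priority_score, first_batch_size):
    """Rank the roster and fail loudly if no frontline worker makes the first batch."""
    ranked = sorted(roster, key=priority_score, reverse=True)
    first_batch = ranked[:first_batch_size]
    if not any(person["frontline"] for person in first_batch):
        raise ValueError("Sanity check failed: no frontline staff in the first batch")
    return first_batch


roster = [
    {"name": "remote administrator", "age": 60, "frontline": False},
    {"name": "ICU fellow", "age": 31, "frontline": True},
    {"name": "ER nurse", "age": 45, "frontline": True},
]

# An age-only scorer reproduces the failure mode and trips the check
# before any real doses are allocated.
try:
    dry_run_check(roster, lambda p: p["age"], first_batch_size=1)
    check_passed = True
except ValueError:
    check_passed = False
```

The value of the dry run is that this failure surfaces as a loud error on synthetic data rather than as a news story after deployment.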

Third, use algorithms only when you need to. There will be no substitute for efficient operations and coordinated outreach to places like nursing homes and assisted-living facilities. We should try to vaccinate every individual at these places as quickly as possible; we don’t need an algorithm to tell us that.

Fourth, build public trust. Health systems should publicize algorithm inputs and rules well in advance of the rollout, to provide not only transparency but also enough time for feedback.

Fifth, monitor and update your algorithm even after vaccine deployment. Health systems and public health agencies should evaluate whether algorithms are missing important populations and adjust accordingly.

The pool of vaccine-eligible individuals will expand by hundreds of millions over the next few months. We need algorithms to save as many lives as possible. But algorithms are only adjuncts to human decision making. They shouldn’t be making decisions themselves.
