A Blog by Jonathan Low


Apr 12, 2022

How Hospitals Are Using AI To Save Lives

Processing data and using algorithms to identify risks humans may not be able to see. JL 

Laura Landro reports in the Wall Street Journal:

Hospitals are making a bet that AI can help identify and treat patients at highest risk in their ERs, inpatient wards and intensive-care units, for dangers including deadly sepsis and an impending cardiac arrest or stroke. AI algorithms are processing troves of data in electronic medical records, searching for patterns to predict outcomes and recommend treatments. They are creating early-warning systems to help spot subtle changes in a patient’s condition that aren’t visible in a busy unit, and predicting which patients about to be discharged from the hospital are at highest risk of being readmitted.

An algorithm may hold the key to saving your life in the emergency room.

Hospitals are making a bet that artificial intelligence can help identify and treat patients at highest risk in their ERs, inpatient wards and intensive-care units, for dangers including the deadly infection sepsis and an impending cardiac arrest or stroke.

Artificial-intelligence algorithms are processing vast troves of data in electronic medical records, searching for patterns to predict future outcomes and recommend treatments. They are creating early-warning systems to help hospital staff spot subtle but serious changes in a patient’s condition that aren’t always visible or noticed in a busy unit, and predicting which patients about to be discharged from the hospital are at highest risk of being readmitted.

These systems are just one effort in a vast array of AI projects in healthcare—from helping detect cancer in radiology images to identifying which drugs to test on patients with different diseases. But this prediction technology holds especially significant promise to transform care and improve patient safety in ER and ICU cases—as long as the systems can be designed to avoid some of the medical, technological and ethical concerns that have emerged in mixing the science of machine learning with the art of medicine.

AI to Improve Sepsis Response

Sepsis, an extreme reaction to infection, can be deadly and is often difficult to diagnose. Artificial intelligence has had mixed results in helping to detect and predict sepsis in hospital patients so that it can be caught and treated earlier, but doctors and data scientists are refining models. Here's how one such application, Duke University's Sepsis Watch, works:

[Graphic: "The extent of the danger" and "Machine is trained to watch out for sepsis"]

In 2015, fewer than half of patients nationwide were receiving appropriate care for severe sepsis and septic shock, according to the Centers for Medicare and Medicaid Services.
Those numbers have since improved: since Duke University Hospital started using Sepsis Watch, the percentage of its patients receiving appropriate care has climbed from well below the national average to well above it.

Sepsis Watch is based on data from 42,000 inpatient encounters, with 21.3% of those patients having experienced sepsis. The data includes 25 million vital-sign measurements, 5.2 million lab results and 2 million medication administrations.


All this data is used by Sepsis Watch as a starting point in diagnosing or predicting sepsis as it constantly monitors patients' vital signs, medications and lab results.

Sepsis comes on and progresses quickly, often within hours of a patient entering the Emergency Department. So Duke focused on detecting and predicting sepsis as quickly as possible for incoming patients and improving the speed of treatment.

How Sepsis Watch tracks patients

A rapid-response nurse monitors Sepsis Watch as it analyzes every patient entering the Emergency Department.




If patients have systemic inflammatory response syndrome (SIRS), including high temperature, heart rate and respiratory rate, and damage to internal organs, they are flagged by Sepsis Watch as meeting sepsis criteria. If the criteria are not met, Sepsis Watch flags the patient with color-coded cards as high, medium or low risk of sepsis. The cards are updated every five minutes with fresh data from the patients.
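The flagging logic described above can be sketched roughly as follows. The SIRS cutoffs and risk-score thresholds are illustrative assumptions, not Duke's actual criteria, and `triage_card` is a hypothetical helper name.

```python
# Hypothetical sketch of Sepsis Watch-style triage: patients meeting a
# SIRS-plus-organ-damage screen are flagged as sepsis; everyone else gets
# a color-coded risk card. All thresholds here are illustrative.

def triage_card(temp_c, heart_rate, resp_rate, organ_damage, risk_score):
    """Return a card label for a Sepsis Watch-style dashboard."""
    # Simplified SIRS screen: two or more of abnormal temperature,
    # tachycardia, tachypnea.
    sirs = sum([
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
    ]) >= 2
    if sirs and organ_damage:
        return "sepsis"        # meets sepsis criteria -> confirmed flag
    if risk_score >= 0.6:
        return "high"          # red card
    if risk_score >= 0.3:
        return "medium"
    return "low"
```

In a real deployment this evaluation would rerun every five minutes as fresh vitals and labs arrive, so a patient's card can move between risk tiers over a shift.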




The rapid-response nurse confers with the attending physician about patients flagged as having sepsis or being at high risk of it. The physician independently reviews the medical record and evaluates the patient before deciding to treat for sepsis. Treatment is handled in two stages: a three-hour bundle of interventions, followed by a six-hour bundle that includes antibiotics.

Sources: Duke University Hospital; Centers for Medicare and Medicaid Services

“Clinicians still have to be in the driver’s seat, but artificial intelligence and predictive models provide us with a way to put the most insights gleaned from voluminous amounts of data at their fingertips, so at the right moment of care it can improve patient outcomes,” says Vincent X. Liu, a researcher and intensive-care specialist at Oakland, Calif.-based hospital and nonprofit health plan Kaiser Permanente.


Here’s a look at some of the efforts under way.

Avoiding a ‘Code Blue’

Once a patient’s deteriorating condition triggers an emergency like a Code Blue—which hastens a team to the bedside—it is often too late to prevent the patient from needing life-support therapy or intensive care.

By using data analytics to predict a patient’s downward spiral up to 12 hours in advance, such emergencies could be prevented, and patients could either avoid the ICU or be in better shape when they got there, according to Dr. Liu.

Kaiser Permanente developed a predictive model called Advance Alert Monitor that can identify about half of patients who will deteriorate. It scans patient data continuously, assigning scores that predict the risk of transfer to the ICU or death. The time horizon allows staffers to reach patients when they are still relatively stable and may just need enhanced screening or monitoring. "It's searching for the needles in the haystack, so it has to sift through all the patients to try to find those at highest risk," Dr. Liu says.

To minimize “alert fatigue,” the results aren’t shown directly to hospital staff, but rather are monitored remotely by specially trained nurses so bedside nurses can focus on seeing patients. If a patient’s score reaches a certain threshold, the remote nurse contacts the rapid-response nurse on the ward who in turn launches a formal assessment and contacts the patient’s physician, who can initiate a rescue program that could include a transfer to the ICU.
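The remote-monitoring workflow might look something like this in miniature. The `ALERT_THRESHOLD` value, the score scale, and the `escalate` function are assumptions for illustration, not Kaiser Permanente's actual parameters.

```python
# Illustrative sketch of the remote-monitoring workflow: risk scores are
# reviewed away from the bedside, and only threshold crossings trigger an
# escalation to the ward, limiting alert fatigue for bedside nurses.

ALERT_THRESHOLD = 0.08  # assumed cutoff for 12-hour deterioration risk

def escalate(patient_scores, threshold=ALERT_THRESHOLD):
    """Return patients the remote nurse should escalate, worst first.

    patient_scores: dict mapping patient id -> predicted risk of ICU
    transfer or death within the prediction horizon.
    """
    flagged = [(pid, s) for pid, s in patient_scores.items() if s >= threshold]
    # The remote nurse works the highest-risk patients first, contacting
    # the ward's rapid-response nurse for each one.
    return sorted(flagged, key=lambda item: -item[1])
```

Keeping the threshold check and triage ordering on the remote side is what lets the system surface only actionable alerts to the ward, rather than streaming raw scores to every bedside screen.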

In a study at 19 of its hospitals over nearly three years, published last November in the New England Journal of Medicine, Kaiser Permanente reported that the predictive model was associated with lower hospital mortality, a lower incidence of ICU admission and a shorter length of stay, compared with hospitals that didn’t use the system. Kaiser now uses the program in 21 hospitals, with nurses handling more than 16,000 alerts a year.

Watching for sepsis

One of the most dangerous risks to patients is sepsis, which happens when an existing infection triggers a life-threatening chain reaction in the body, leading to organ failure and death if not treated promptly. Nearly one in three patients who die in a hospital has sepsis, which starts developing before they arrive in 87% of cases, according to the Centers for Disease Control and Prevention.

The majority of cases could be prevented with rapid diagnosis and treatment, but studies have shown many sepsis patients may not receive care consistent with guidelines. There is no gold standard for how to diagnose it, and symptoms of sepsis such as fever and a rapid heart rate can also accompany other illnesses, so it can be hard to determine who has it and who doesn’t. Some hospitals have found that algorithms designed by outside vendors and developers are based on data that is not relevant to their own patients, causing false alarms and concern about errors.

After finding that one model commonly used to detect sepsis was firing off false alarms, Duke University Hospital decided to create its own machine-learning model to predict sepsis quickly and accurately with data from its own patient records, according to Cara O’Brien, a hospital-medicine physician and assistant professor at Duke University School of Medicine.

Dr. O’Brien led a team that included doctors and nurses to train the model with more than 32 million data points such as vital-sign measurements, lab reports and medication administrations from more than 42,000 inpatient encounters analyzed over 14 months, of which 21.3% had a sepsis diagnosis. It culls data from a patient’s vital signs, medications and lab measurements every five minutes, analyzes 86 different variables, samples them multiple times and detects relationships that could signal the onset of sepsis.
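A minimal sketch of the kind of rolling-feature pipeline described here: vitals and labs sampled every five minutes, summarized over a recent window, and fed to a trained classifier. The variable names, the summary statistics, and the `window_features` helper are illustrative; Duke's actual 86-variable model is not public.

```python
# Hedged sketch: summarize a window of 5-minute vital-sign samples into
# features (latest value, mean, trend) that a sepsis classifier could use.

from statistics import mean

def window_features(samples):
    """Summarize a list of 5-minute sample dicts into model features.

    samples: list of dicts like {"heart_rate": 92, "temp_c": 38.1, ...},
    oldest first. Returns latest value, window mean, and trend (last
    minus first) for each variable.
    """
    feats = {}
    for var in samples[0]:
        series = [s[var] for s in samples]
        feats[f"{var}_last"] = series[-1]
        feats[f"{var}_mean"] = mean(series)
        feats[f"{var}_trend"] = series[-1] - series[0]
    return feats
```

Trend features of this sort are one common way a model can pick up a deterioration that no single reading would flag, which matches the article's point about relationships across variables signaling sepsis onset.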

The Sepsis Watch dashboard includes four color-coded lists of patients for triage, with those at high risk for sepsis in red. A single rapid-response-team nurse on a 12-hour shift monitors the dashboard on an iPad, and calls emergency physicians to discuss every patient with sepsis or at risk for it. No patient is put on treatment without a doctor’s OK.

Hospitals must publicly report compliance with so-called sepsis bundles—treatment guidelines that have been shown to improve outcomes and include actions like ordering antibiotics and performing certain lab tests within specific time windows from a patient’s arrival in the ER. Duke increased its compliance to 64% in the 15 months after starting Sepsis Watch, versus 31% in the 18 months prior. Mark Sendak, a physician and clinical-data scientist at Duke who co-led the project, says a final analysis is under way, but mortality appears to be down, and the algorithm is now used for every patient coming into the emergency department.
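Bundle compliance of the kind reported above amounts to checking that each required intervention happened within its deadline measured from ER arrival. The specific actions and windows below are illustrative examples of a three-hour bundle, and `bundle_compliant` is a hypothetical helper, not the official CMS measure logic.

```python
# Hedged sketch of sepsis-bundle compliance checking: every required
# action must be completed within its time window (in hours) from the
# patient's arrival in the ER. Actions and deadlines are illustrative.

from datetime import datetime, timedelta

THREE_HOUR_BUNDLE = {"lactate_drawn": 3, "blood_cultures": 3, "antibiotics": 3}

def bundle_compliant(arrival, events, bundle=THREE_HOUR_BUNDLE):
    """True if every bundle action happened within its deadline.

    events: dict mapping action name -> datetime it was completed,
    or missing if it never happened.
    """
    for action, hours in bundle.items():
        done_at = events.get(action)
        if done_at is None or done_at - arrival > timedelta(hours=hours):
            return False  # missed or late action -> non-compliant case
    return True
```

A hospital's reported compliance rate is then just the fraction of sepsis cases for which a check like this returns true.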

One of the largest hospital chains, HCA Healthcare, developed its own predictive algorithm called Spot—for Sepsis Prediction and Optimization of Therapy. Before its development, nurses would manually review patient data to check for sepsis primarily during shift changes, or if patients were transferred between units. In contrast, the algorithm was designed to continuously monitor vital signs, lab results, nursing reports and other data, firing an alert directly to nurses at the moment signals converge that indicate impending sepsis.

The alerts are presented to clinicians not just as answers, but as trigger factors for their clinical judgment about whether a patient has sepsis.

The hospital chain found that Spot detects sepsis six hours earlier and more accurately than clinicians; early recognition and treatment have reduced sepsis mortality across 160 hospitals by almost 30%.

HCA’s chief data scientist, Edmund Jackson, and his team used the Spot platform to develop a broader program called Nate, for Next-Gen Analytics for Treatment and Efficiency, using machine learning to more quickly detect other critical or life-threatening conditions such as shock in trauma patients, complications after surgery and early signs of deterioration in all patients.

In designing new algorithms, data scientists are collaborating with clinical staff to determine what predictive models can be most useful to them and how they can fit into the patient-care flow. One effort is focused on how care teams in labor and delivery units can best incorporate a predictive model that uses data from fetal heart monitors to help manage the risk of fetal distress proactively.

“We have a dedicated innovation team that goes into hospitals and works with caregivers at the bedside—we don’t show up one day and say, ‘Here’s an AI that’s been trained to do X for you,’ ” says Michael Schlosser, a neurosurgeon and senior vice president of care transformation and innovation at HCA.

During the Covid-19 pandemic, teams were able to use the Nate platform to develop algorithms specific to issues in those patients, such as one alerting intensive-care doctors, nurses and respiratory therapists to patients on mechanical ventilators whose treatment might need adjustment based on their condition.

Algorithms also hold promise for natural disasters, such as quickly assessing which patients can be safely evacuated before a hurricane, whereas in the past staffers had to rely on “using sticky notes on white boards in war rooms,” says Dr. Schlosser.

Readmission challenges

Hospitals are also using machine learning to solve one of their most persistent problems—how to identify which patients are at greatest risk for being readmitted to the hospital within 30 days of discharge.

Hospitals commonly use standard readmission risk-assessment scores that rely on a limited amount of data, such as how long the patient was in the hospital, how sick they were at admission, other diseases and conditions they have, and whether they visited the ER within six months before their admission. But the standard scores don’t take into account an individual hospital’s data from its own patient records.

For example, in a study at three hospitals published in 2019, researchers at the University of Maryland found that, compared with the commonly used readmission scores, a machine-learning score that used data on readmissions at the individual hospitals was able to better determine which patients needed more interventions to avoid coming back to the hospital.
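The contrast the Maryland study draws — a score learned from a hospital's own records versus a fixed standard score — can be sketched with a tiny logistic regression. The feature names, the synthetic data, and the hand-rolled training loop are all assumptions for illustration, not the study's method.

```python
# Illustrative sketch: fit a readmission-risk score on a hospital's own
# (here, synthetic) discharge records using a minimal hand-rolled
# logistic regression trained by stochastic gradient descent.

import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit logistic regression; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))      # predicted readmission prob.
            g = p - yi                       # gradient of log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Predicted probability of 30-day readmission for one patient."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical features per stay:
# [length_of_stay_days / 10, prior_ER_visit, n_comorbidities / 5]
X = [[0.2, 1, 0.6], [0.1, 0, 0.2], [0.8, 1, 0.8], [0.3, 0, 0.2]]
y = [1, 0, 1, 0]  # readmitted within 30 days?
w, b = train_logreg(X, y)
```

The point of training on local data is that the learned weights reflect that hospital's own patient mix, which is exactly what a fixed standard score cannot do.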

But predicting readmissions is only one step toward preventing them, according to the study. Interventions to keep patients from returning are often costly and labor intensive, including referring them to discharge clinics, transitional care and telemonitoring. And they don’t always recognize so-called social determinants of health that stand in the way.

David Vawdrey, chief data informatics officer at Geisinger Health System, with 10 hospitals in Pennsylvania, says the lack of resources to help patients at highest risk of readmission is a major issue, but predictive algorithms hold promise for measures that can keep patients out of the hospital in the first place and make sure they have preventive screenings for serious disease.

For example, Geisinger worked with a company called Medial EarlySign to identify patients overdue for a colorectal-cancer screening, and used a machine-learning algorithm to flag those at higher risk. Patients were then called by a care manager who informed them of their risk and offered to schedule a colonoscopy. They were able to schedule one for 68.1% of the patients flagged, and approximately 70% had a significant finding, according to a recent report in NEJM Catalyst.

“The power of AI to prioritize that list allows us to reach out more intensively and take an extra step for those at highest risk that says, ‘It’s really time, you need to come in,’ ” says Dr. Vawdrey.

Finding the flaws

As artificial-intelligence systems take on more of a role in hospitals, researchers are also looking for ways to better identify when they don’t work, and why. Algorithms use statistical methods to learn key patterns from clinical data and predict future outcomes, but a number of factors can cause a mismatch between the data used to build the algorithm and its real-world use. Undetected, flaws like this could cause an algorithm to fail to diagnose severely ill patients or recommend harmful treatments.

Karandeep Singh, assistant professor of learning health sciences and internal medicine at the University of Michigan who chairs Michigan Medicine’s clinical-intelligence committee, says developers may take a model trained in one health system and start using it in a different one with different patient demographics or run a model in a hospital over time without updating it with new data.

For example, when Covid-19 began surging in hospitals around the country, a commonly used AI sepsis algorithm had no way to differentiate bacterial sepsis from Covid; the symptoms are similar but the treatment completely different. After nurses reported excessive sepsis alerts, the University of Michigan temporarily disabled the algorithm from April to July 2020. The university is now working on an alternative model.

Researchers are identifying other common causes of potential failure and finding ways to mitigate them. For example, Dr. Singh says, predictive models primarily trained on white populations often fail to perform well on patients from other racial or ethnic groups, but it is possible to retrain or redesign them with more inclusive data sets and use specialized algorithms.
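The subgroup check Dr. Singh describes comes down to measuring a model's performance separately per demographic group so a gap surfaces before deployment. The group labels, data, and `accuracy_by_group` helper below are illustrative assumptions.

```python
# Sketch of a per-subgroup evaluation: compute accuracy separately for
# each group over parallel lists of predictions, labels, and group tags,
# so a performance gap between populations is visible before deployment.

def accuracy_by_group(preds, labels, groups):
    """Return {group: accuracy} over parallel prediction/label lists."""
    stats = {}  # group -> (correct count, total count)
    for p, l, g in zip(preds, labels, groups):
        hit, n = stats.get(g, (0, 0))
        stats[g] = (hit + (p == l), n + 1)
    return {g: hit / n for g, (hit, n) in stats.items()}
```

A large accuracy gap between groups is the signal to retrain or redesign the model with more inclusive data before it reaches patients.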

“Right now, hospitals are overwhelmed by the number of AI models available to them,” Dr. Singh says. To safely use the tools in the future, they have to “understand when AI is not working as intended, and prioritize problems based on whether they are solvable rather than simply what AI tools are available.”

