A Blog by Jonathan Low

 

Nov 1, 2017

How Do You Regulate a Self-Improving Algorithm?

Whether it's self-driving cars, artificially intelligent medical devices and pharmaceuticals, or Facebook ads, machine learning is changing the way society manages, improves, and regards itself.

The challenge is figuring out whether anyone, or anything, can and should review all these innovations, and, if the polity eventually decides such review is needed, as seems likely, how to actually accomplish it. JL


Jonathan Kay reports in The Atlantic:

Algorithms based on the AI principle of machine learning now can outperform dermatologists at recognizing skin cancers. They can beat cardiologists in detecting arrhythmias in EKGs. Algorithms can identify obscure subcategories of adult-onset brain cancer, estimate the survival rates of breast-cancer patients, and reduce unnecessary thyroid surgeries. Meta-algorithms spit out new products every time fresh data is added, producing a potentially infinite number of newly minted “medical devices” every day.
At a large technology conference in Toronto this fall, Anna Goldenberg, a star in the field of computer science and genetics, described how artificial intelligence is revolutionizing medicine. Algorithms based on the AI principle of machine learning now can outperform dermatologists at recognizing skin cancers in blemish photos. They can beat cardiologists in detecting arrhythmias in EKGs. In Goldenberg’s own lab, algorithms can be used to identify hitherto obscure subcategories of adult-onset brain cancer, estimate the survival rates of breast-cancer patients, and reduce unnecessary thyroid surgeries.
It was a stunning taste of what’s to come. According to the McKinsey Global Institute, large tech companies poured as much as $30 billion into AI in 2016, with another $9 billion going into AI start-ups. Many people already are familiar with how machine learning—the process by which computers automatically refine an analytical model as new data comes in, teasing out new trends and linkages to optimize predictive power—allows Facebook to recognize the faces of friends and relatives, and Google to know where you want to eat lunch. These are useful features—but they pale in comparison to the new ways in which machine learning will change health care in coming years.
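To make that definition concrete, here is a minimal sketch of the refine-as-data-arrives idea, using scikit-learn's SGDClassifier on invented data (the batch sizes, features, and hidden rule are all illustrative, not drawn from any real system):

```python
# Minimal sketch of machine learning as incremental refinement: each new
# batch of data nudges the model's parameters toward better predictions,
# with no retraining from scratch. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # a linear classifier that supports online updates

for batch in range(10):
    X = rng.normal(size=(100, 5))            # 100 fresh observations, 5 features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden pattern the model must learn
    model.partial_fit(X, y, classes=[0, 1])  # refine the model with the new data

X_test = rng.normal(size=(1000, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print(f"accuracy after 10 batches: {model.score(X_test, y_test):.2f}")
```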
The science is unstoppable, and so is the flow of funding. But at least one roadblock stands in the way: a big, bureaucratic Cold War–era regulatory apparatus that could prove to be fundamentally incompatible with the very nature of artificial intelligence.
* * *
Every professional subculture has its heroes. At the Food and Drug Administration, the greatest hero is Frances Oldham Kelsey, who in the 1960s stubbornly refused to license Kevadon, a sedative that alleviated symptoms of morning sickness in pregnant women. As mothers in other countries would learn, the drug—better known by its generic name, thalidomide—could cause horrible birth defects. Kelsey’s vigilance in the face of heavy corporate pressure helped inspire the rigorous evaluation model that the FDA now applies to everything from pharmaceuticals to hospital equipment to medical software.
At the very core of this model is the assumption that any product may be clinically tested, produced, marketed, and used in a defined, unchanging form. That’s why the blood-pressure machines many people use in pharmacies look a lot like the ones they used a decade ago. Deviation from an old FDA-approved model often requires an entirely new approvals process, with all the attendant costs and delays.
But that build-and-freeze model isn’t the way AI software development typically works—especially when it comes to machine-learning processes. These systems are essentially meta-algorithms that spit out new operational products every time fresh data is added—producing, in effect, a potentially infinite number of newly minted “medical devices” every day. (A nonmedical example would be the speech-recognition programs that gradually teach themselves how to better understand a user’s voice.) This phenomenon is creating a culture gap between the small, nimble medical-software boutiques creating these technologies, and the legacy regulatory system that developed to serve large corporate manufacturers.
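A rough way to see the tension: if you fingerprint the model after each update, every batch of fresh data yields what is, formally, a new artifact. The sketch below is my own illustration (the hashing scheme and data are invented, not any FDA or vendor practice):

```python
# Illustration of why continuous learning strains a build-and-freeze
# regime: every update produces a distinct model, made visible here by
# hashing its serialized state. Scheme and data are purely illustrative.
import hashlib
import pickle

import numpy as np
from sklearn.linear_model import SGDClassifier

def model_fingerprint(model) -> str:
    """Hash the serialized model: a frozen, auditable version identifier."""
    return hashlib.sha256(pickle.dumps(model)).hexdigest()[:12]

rng = np.random.default_rng(1)
model = SGDClassifier()

for day in range(3):
    X = rng.normal(size=(50, 4))
    y = (X.sum(axis=1) > 0).astype(int)
    model.partial_fit(X, y, classes=[0, 1])
    # Under a build-and-freeze regime, each fingerprint below would mark
    # a new "device" needing its own approval.
    print(f"day {day}: model version {model_fingerprint(model)}")
```

Each run prints three distinct fingerprints; a regulator that approves only frozen artifacts would, in principle, have to treat each one as a separate submission.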
Consider, for instance, Cloud DX: This Canadian company uses AI technology to scrutinize the audio waveform of a human cough, which allows it to detect asthma, tuberculosis, pneumonia, and other lung diseases. In April, the California-based XPRIZE foundation named Cloud DX its “Bold Epic Innovator” in its Star Trek–inspired Qualcomm Tricorder competition, whereby participants were asked to create a single device that an untrained person could use to measure their vital signs. The company received a $100,000 prize and lots of great publicity—but doesn’t yet have FDA approval to market this product for clinical applications. And getting such approval may prove difficult.
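Cloud DX has not published its pipeline, but a common pattern for this kind of audio analysis is to reduce the waveform to spectral features and hand them to a trained classifier. A hypothetical sketch using librosa (the feature choice and function name are assumptions for illustration only):

```python
# Hypothetical sketch of cough classification: summarize the audio
# waveform as MFCC features, then score with a trained classifier.
# Cloud DX's actual pipeline is not public; this is only illustrative.
import numpy as np
import librosa

def cough_features(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Reduce a raw cough recording to a fixed-length feature vector."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Example with a synthetic one-second "recording" (real input: a cough).
# A trained model, like the classifier sketched earlier, would consume
# these 26-dimensional vectors to flag patterns associated with disease.
fake_cough = np.random.default_rng(0).normal(size=16000).astype(np.float32)
print(cough_features(fake_cough, sample_rate=16000).shape)  # (26,)
```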
Which helps explain why many health-software innovators are finding other, creative ways to get their ideas to market. “There’s a reason that tech companies like Google haven’t been going the FDA route [of clinical trials aimed at diagnostic certification],” says Robert Kaul, the founder and CEO of Cloud DX. “It can be a bureaucratic nightmare, and they aren’t used to working at this level of scrutiny and slowness.” He notes that just getting a basic ISO 13485 certification, which acts as a baseline for the FDA’s device standards, can cost two years and seven figures. “How many investors are going to give you that amount of money just so you can get to the starting line?”
“Twenty percent of my company’s head count is devoted exclusively to regulatory issues,” says Vic Gundotra, a former Google executive who now runs a medical company that detects heart issues early. “At Google, sometimes we’d decide on something, and we’d ship it six weeks later. So when I got here, and we had a breakthrough, I’d say, ‘How fast can we ship this out?’ And they’d say, ‘Two years.’ That digital creed of ‘Move fast and break things’ just doesn’t work.”
Kaul is hopeful, because he believes that the FDA contacts he’s garnered through the XPRIZE will help Cloud DX navigate the system. And like everyone I spoke to for this article, he recognizes that the FDA has a necessary role in protecting patients from false claims and dangerous products. He even sees an upside to the agency’s dilatory processes. “For those few companies who do make it through, they have an enormous competitive advantage,” he says. “We won’t have to worry about the usual scenario: Two guys from Stanford in a garage inventing some app that instantly takes away all our business. We only have to worry about the big players—who might just buy us out instead of competing with us.”
* * *
Complaints about the FDA’s lengthy processes are an old story. Five years before becoming Donald Trump’s FDA Commissioner, for instance, Scott Gottlieb slammed the agency for needless delays in the assessment of lifesaving drugs for children afflicted with Hunter syndrome. But the need for reform has become more acute, as software algorithms have become a more critical component of health systems.
WinterLight Labs, a Canadian start-up, is developing machine-learning software that can detect various forms of cognitive impairment, including early-stage Alzheimer’s disease, by analyzing snippets of a patient’s speech. The technology is currently being tested at assisted-care facilities. But Liam Kaufman, the company’s CEO, is unsure whether or when his technology will be ready for FDA approval—in part because it is still unclear whether such approval would require that he freeze his product in a defined state. His alternative plan is to market the product as a screening tool, which does not purport to diagnose the presence of a medical condition, but merely provides guidance about when users should consult a doctor.
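In software terms, the screening-versus-diagnosis distinction can come down to what the product says with a model's output. A hypothetical sketch (the threshold and wording are invented, not WinterLight's):

```python
# Sketch of the "screening tool" framing: instead of emitting a diagnosis,
# the software maps a model's risk score to advice about seeing a doctor.
def screening_advice(risk_score: float, threshold: float = 0.7) -> str:
    """Turn a model's risk score into non-diagnostic guidance."""
    if risk_score >= threshold:
        return "Patterns in your speech suggest you may wish to consult a doctor."
    return "No follow-up indicated at this time."
```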
The larger risk, many entrepreneurs in the field told me, is not that new AI-enabled health technologies will go completely untapped, but that they will be shunted into the far less regulated sphere of general “wellness,” where they will be marketed as lifestyle products. An example Kaul cites in this regard is the Muse brain-sensing headband, a technology that could be adapted to all sorts of important medical applications, but which currently is being marketed as a gadget to help “elevate your meditation experience.”
Bakul Patel, the new associate center director for digital health at the FDA, has recently launched a pilot program, “FDA Pre-Cert,” which could eventually allow agency officials to focus their inspections on “the software developer or digital-health technology developer, rather than primarily at the product,” according to an announcement. (The nine corporate participants selected for the initial program include Apple, Fitbit, and Samsung, as well as several much smaller companies.) Official public statements seem to imply that these pre-certified companies might one day be permitted to optimize their software products without seeking FDA approval upon every iteration—though Patel, who comes to his job with a strong background in business development and technology, is studiously noncommittal on this point.
“We are evolving that space,” he says. “The legacy model is the one we know works. But the model that works continuously—we don’t yet have something to validate that. So the question is [as much] scientific as regulatory: How do you reconcile real-time learning [with] people having the same level of trust and confidence they had yesterday?”
In the meantime, Patel is “hiring like crazy” in an effort to ramp up the FDA’s digital bench strength, according to Christina Farr, a CNBC reporter who covers medical regulation. But attracting the right people has proven difficult, because the field is so hot. As The New York Times reported this week, AI specialists with even minimal experience now are attracting compensation packages of more than $300,000 per year at large tech companies—far more than the FDA can afford to pay.
“Yes, it’s hard to recruit people in AI right now,” Patel acknowledges. “We have some understanding of these technologies. But we need more people. This is going to be a challenge.”
