A Blog by Jonathan Low

 

Jun 22, 2023

How AI Like ChatGPT Could Be Used To Launch the Next Pandemic

The genie cannot be stuffed back in the bottle. More comprehensive and stronger controls on AI generally, and generative AI specifically, are going to be essential. JL 

Kelsey Piper reports in Vox:

When first released, AI systems like ChatGPT gave detailed, correct instructions about how to carry out a biological weapons attack or build a bomb. OpenAI has corrected this. But a class exercise at MIT found it was easy for undergraduates without background in biology to get detailed suggestions for biological weaponry out of AI systems. “In one hour, the chatbots suggested four pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified protocols and how to troubleshoot them.” “We need better controls.”

Here’s an important and arguably underappreciated ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit an act of terrorism. The first several pages of results for a Google search on how to build a bomb, or how to commit a murder, or how to unleash a biological or chemical weapon, won’t actually tell you much about how to do it.

It’s not impossible to learn these things off the internet. People have successfully built working bombs from publicly available information, and scientists have warned against publishing the blueprints for deadly viruses out of similar fears. But while the information is surely out there on the internet, it’s not straightforward to learn how to kill lots of people, thanks to a concerted effort by Google and other search engines to keep such results buried.

 

How many lives does that save? That’s a hard question to answer. It’s not as if we could responsibly run a controlled experiment where sometimes instructions about how to commit great atrocities are easy to look up and sometimes they aren’t.

But it turns out we might be irresponsibly running an uncontrolled experiment in just that, thanks to rapid advances in large language models (LLMs).

Security through obscurity

When first released, AI systems like ChatGPT were generally willing to give detailed, correct instructions about how to carry out a biological weapons attack or build a bomb. Over time, OpenAI has corrected this tendency, for the most part. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that it was easy for groups of undergraduates without relevant background in biology to get detailed suggestions for biological weaponry out of AI systems.

“In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization,” the paper, whose lead authors include MIT biorisk expert Kevin Esvelt, says.

To be clear, building bioweapons requires lots of detailed work and academic skill, and ChatGPT’s instructions are probably far too incomplete to actually enable non-virologists to do it — so far. But it seems worth considering: Is security through obscurity a sustainable approach to preventing mass atrocities, in a future where information may be easier to access?

In almost every respect, more access to information, detailed supportive coaching, personally tailored advice, and other benefits we expect to see from language models are great news. But when a chipper personal coach is advising users on committing acts of terror, it’s not so great news.

It seems to me, though, that the problem can be attacked from two angles.

Controlling information in an AI world

“We need better controls at all the chokepoints,” Jaime Yassif at the Nuclear Threat Initiative told Science. It should be harder to induce AI systems to give detailed instructions on building bioweapons. But also, many of the security flaws that the AI systems inadvertently revealed — like the fact that users can turn to DNA synthesis companies that don’t screen orders, and so are more likely to fill a request to synthesize a dangerous virus — are fixable!

We could require all DNA synthesis companies to do screening in all cases. We could also remove papers about dangerous viruses from the training data for powerful AI systems — a solution favored by Esvelt. And we could be more careful in the future about publishing papers that give detailed recipes for building deadly viruses.
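Screening is, at its core, a sequence-matching problem, which makes it tractable in software. Here is a minimal sketch in Python of what an order-screening step might look like: it indexes hypothetical “sequences of concern” as k-mers and flags any order that matches enough of them for human review. The blocklist, k-mer length, and threshold here are illustrative assumptions, not any company’s actual pipeline; real screening programs (such as those following the International Gene Synthesis Consortium’s protocol) rely on curated databases and alignment tools.

```python
# Toy sketch of sequence-of-concern screening for DNA synthesis orders.
# The blocklist is a hypothetical stand-in; real screening uses curated
# databases, alignment tools, and checks of reverse complements and
# translated reading frames, none of which are modeled here.

K = 31  # k-mer length; long enough that chance matches are rare


def kmers(seq, k=K):
    """Yield every k-length substring of a DNA sequence."""
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        yield seq[i : i + k]


def build_blocklist(concern_sequences):
    """Index sequences of concern as a set of k-mers for fast lookup."""
    index = set()
    for seq in concern_sequences:
        index.update(kmers(seq))
    return index


def screen_order(order_seq, blocklist, threshold=5):
    """Flag an order if enough of its k-mers match the concern index."""
    hits = sum(1 for kmer in kmers(order_seq) if kmer in blocklist)
    return hits >= threshold  # True -> hold the order for human review


# Hypothetical usage; a real concern database would be curated, not hard-coded.
concern_db = ["ATGC" * 20]  # placeholder sequence, not a real pathogen
blocklist = build_blocklist(concern_db)
if screen_order("ATGC" * 20, blocklist):
    print("Order flagged for manual biosecurity review")
```

The point of the sketch is that the control sits with the vendor, not the customer: whatever a chatbot tells a user, an order that trips the match still lands on a human reviewer’s desk.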

The good news is that major players in the biotech world are beginning to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can detect engineered DNA at scale, providing investigators with the means to fingerprint an artificially generated germ. That alliance demonstrates the ways that cutting-edge technology can protect the world against the malign effects of ... cutting-edge technology.
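To give a feel for what “detecting engineered DNA” means at the simplest level, here is a toy illustration — emphatically not Ginkgo’s actual method, which is not public in detail. It scans a sequenced sample, on both strands, for short motifs that are common in lab constructs but rare in nature; a production system would rely on statistical models over far richer signals than literal string matches.

```python
# Toy illustration of flagging possible engineering signatures in a
# sequenced sample. The motifs are real, commonly used lab elements,
# but treating their mere presence as proof of engineering is a
# simplification made for the sake of the example.

ENGINEERING_MOTIFS = {
    "T7 promoter": "TAATACGACTCACTATAG",
    "loxP recombination site": "ATAACTTCGTATAATGTATGCTATACGAAGTTAT",
}


def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    table = str.maketrans("ACGT", "TGCA")
    return seq.translate(table)[::-1]


def find_signatures(sample):
    """Return the names of known lab motifs found on either strand."""
    sample = sample.upper()
    hits = []
    for name, motif in ENGINEERING_MOTIFS.items():
        if motif in sample or reverse_complement(motif) in sample:
            hits.append(name)
    return hits


# Hypothetical usage with a made-up fragment containing a T7 promoter.
sample = "AAAA" + "TAATACGACTCACTATAG" + "GGGG"
print(find_signatures(sample))  # ['T7 promoter']
```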

AI and biotech both have the potential to be tremendous forces for good in the world. And managing risks from one can also help with risks from the other — for example, making it harder to synthesize deadly plagues protects against some forms of AI catastrophe just like it protects against human-mediated catastrophe. The important thing is that, rather than letting detailed instructions for bioterror get online as a natural experiment, we stay proactive and ensure that printing biological weapons is hard enough that no one can trivially do it, whether ChatGPT-aided or not.
