A Blog by Jonathan Low


Feb 23, 2018

Artificial Intelligence Is Getting Cheaper, Easier To Use...And To Manipulate

Co-evolution. JL

Cade Metz reports in the New York Times:

Rapidly evolving and increasingly affordable A.I. technologies could be used for malicious purposes. “Drones have captured the imagination. What is harder to anticipate is all the less tangible ways that A.I. is being integrated into our lives.” A computer-vision system can be fooled into seeing things that are not there. That sounds like an endless cat-and-mouse game between A.I. systems trying to create fake content and those trying to identify it. “We need to assume that there will be advances on both sides.”
A Silicon Valley start-up recently unveiled a drone that can set a course entirely on its own. A handy smartphone app allows the user to tell the airborne drone to follow someone. Once the drone starts tracking, its subject will find it remarkably hard to shake.
The drone is meant to be a fun gadget — sort of a flying selfie stick. But it is not unreasonable to find this automated bloodhound a little unnerving.
A group of artificial intelligence researchers and policymakers from prominent labs and think tanks in both the United States and Britain released a report that described how rapidly evolving and increasingly affordable A.I. technologies could be used for malicious purposes. They proposed preventive measures including being careful with how research is shared: Don’t spread it widely until you have a good understanding of its risks.
A.I. experts and pundits have discussed the threats created by the technology for years, but this is among the first efforts to tackle the issue head-on. And the little tracking drone helps explain what they are worried about.
The drone, made by a company called Skydio and announced this month, costs $2,499. It was made with technological building blocks that are available to anyone: ordinary cameras, open-source software and low-cost computer chips.
In time, putting these pieces together — researchers call them dual-use technologies — will become increasingly easy and inexpensive. How hard would it be to make a similar but dangerous device?
“This stuff is getting more available in every sense,” said one of Skydio’s founders, Adam Bry. These same technologies are bringing a new level of autonomy to cars, warehouse robots, security cameras and a wide range of internet services.
But at times, new A.I. systems also exhibit strange and unexpected behavior because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation; today’s computer vision algorithms, for example, can be fooled into seeing things that are not there.
“This becomes a problem as these systems are widely deployed,” said Miles Brundage, a research fellow at the University of Oxford’s Future of Humanity Institute and one of the report’s primary authors. “It is something the community needs to get ahead of.”
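For readers curious what “fooling” a vision system actually looks like, here is a minimal sketch of the idea behind adversarial examples, in the spirit of the fast-gradient-sign method of Goodfellow et al. The classifier below is a made-up toy linear model, and every number and name in it is an illustrative assumption, not a detail from the report or the article.

    # Toy illustration of an adversarial example: a tiny, carefully chosen
    # perturbation flips a classifier's decision. The "classifier" is a
    # made-up linear model, not any real vision system.
    import numpy as np

    rng = np.random.default_rng(0)

    # A linear classifier over a 1,000-"pixel" image:
    # score > 0 means the model "sees" an object; score <= 0 means it does not.
    weights = rng.normal(size=1000)

    def score(x):
        return float(weights @ x)

    # Start from an image the model correctly scores as "no object"
    # (adjusted so its score is exactly -5).
    image = rng.normal(size=1000)
    image -= (score(image) + 5.0) / (weights @ weights) * weights

    # Fast-gradient-sign-style attack: nudge every pixel a tiny step in the
    # direction that raises the score (for a linear model, that direction
    # is simply the sign of each weight).
    epsilon = 0.02                                # imperceptible per pixel
    adversarial = image + epsilon * np.sign(weights)

    print(f"clean score:       {score(image):+.2f}")        # negative
    print(f"adversarial score: {score(adversarial):+.2f}")  # positive
    # Each pixel moved by only 0.02, but the changes all push the same way,
    # shifting the score by epsilon * sum(|weights|), roughly +16 here,
    # so the model now "sees" an object that is not there.

Real attacks apply the same principle to deep networks, where the direction of the nudge is computed by backpropagation rather than read directly off the weights.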
[Caption: Advances in artificial intelligence have made it easier to fake audio and video by grafting a person's face onto another person's body.]
The report warns against the misuse of drones and other autonomous robots. But there may be bigger concerns in less obvious places, said Paul Scharre, another author of the report, who had helped set policy involving autonomous systems and emerging weapons technologies at the Defense Department and is now a senior fellow at the Center for a New American Security.
“Drones have really captured the imagination,” he said. “But what is harder to anticipate — and wrap our heads around — is all the less tangible ways that A.I. is being integrated into our lives.”
The rapid evolution of A.I. is creating new security holes. If a computer-vision system can be fooled into seeing things that are not there, for example, miscreants can circumvent security cameras or compromise a driverless car.
Researchers are also developing A.I. systems that can find and exploit security holes in all sorts of other systems, Mr. Scharre said. These systems can be used for both defense and offense.
Automated techniques will make it easier to carry out attacks that now require extensive human labor, including “spear phishing,” which involves gathering and exploiting victims’ personal data. In the years to come, the report said, machines will be more adept at collecting and deploying this data on their own.
A.I. systems are increasingly adept at generating believable audio and video on their own. This will accelerate the progress of virtual reality, online games and movie animation. It will also make it easier for bad actors to spread misinformation online, the report said.
This is already beginning to happen through a technology called “Deepfakes,” which provides a simple way of grafting anyone’s head onto a pornographic video, or putting words into the mouth of the president.
Some believe concerns over the progress of A.I. are overblown. Alex Dalyac, chief executive and co-founder of a computer vision start-up called Tractable, acknowledged that machine learning will soon produce fake audio and video that humans cannot distinguish from the real thing. But he believes other systems will also get better at identifying misinformation. Ultimately, he said, the detection systems will win the day.
To others, that sounds like an endless cat-and-mouse game between A.I. systems trying to create the fake content and those trying to identify it.
“We need to assume that there will be advances on both sides,” Mr. Scharre said.
