A Blog by Jonathan Low


Apr 16, 2019

Internet and AI Regulation Are On the Rise

The question is whether they can - or will - be implemented. JL

Jamie Condliffe reports in the New York Times:

There’s a growing pipeline of internet regulation, along with existing laws like the EU’s General Data Protection Regulation. “We’re entering a new phase of hyper regulation.” Much of the material these rules would police is abhorrent, and social media’s rapid rise has caught lawmakers off guard; now the public wants something done. But many proposed regulations lack clear plans for implementation. A new set of A.I. ethics guidelines from the European Commission contains requirements that A.I. systems should meet to be deemed trustworthy. The guidelines join ethical principles to firm recommendations that companies will adopt and test, so that they can be improved.
Web regulators are getting into their groove. But are they going too quickly?
This past week, the British government proposed new powers to issue fines and make individual executives legally liable for harmful content on their platforms. My colleague Adam Satariano said it “would be one of the world’s most aggressive actions to rein in the most corrosive online content.”
Days earlier, Australia passed legislation that threatens fines for social media companies that fail to rapidly remove violent material. And there’s a growing pipeline of other internet regulation, along with existing laws like the European Union’s sweeping General Data Protection Regulation.
“We’re entering a new phase of hyper regulation,” said Paul Fehlinger, the deputy executive director of the Internet and Jurisdiction Policy Network, an organization established to understand how national laws affect the internet.

This flurry of content rules is understandable. Much of the material they would police is abhorrent, and social media’s rapid rise has caught lawmakers off guard; now the public wants something done.
But the regulations could have unintended consequences.
Difficulties in defining “harmful” mean governments will develop different standards. In turn, the web could easily look different depending on your location, a big shift from its founding principles. (This is already happening: The Chicago Tribune’s website, for example, doesn’t comply with the General Data Protection Regulation, so it isn’t accessible from Europe.)
There may be less visible effects. If regulation required differences at a hardware level, that could fragment the infrastructure, said Konstantinos Komaitis, a senior director at the nonprofit Internet Society, which promotes the open development and use of the internet. That could make the internet less resilient to outages and attacks.
And bigger, richer companies will find it easier to comply with sprawling regulation, which could reinforce the power of Big Tech.
“There is a major risk that we end up in a situation where short-term political pressure trumps long-term vision,” Mr. Fehlinger said.
Mr. Komaitis said avoiding unintended consequences was “very simple, yet very difficult.”
“It is all about collaboration,” he added. The idea: lawmakers work together across borders to ensure rules are more consistent.
The challenge is that collaboration could slow the pace of regulation that lawmakers desire. But Mr. Komaitis said many proposed regulations lack clear plans for implementation, and envisions snags when governments come to apply them. If they struggle, he said, collaboration and sharing of expertise may be the only way to make their plans work.
Artificial intelligence could make our lives easier and more efficient. But, like any powerful technology, it’s more complicated than that. A.I. can be used for surveillance or to control autonomous weapons. It can be biased. It could erode jobs. The list goes on.
None of those are reasons to reject A.I. outright. But they underscore how its development must be approached with care.
Big Tech has struggled to publicly demonstrate that care. Amazon, Google and Microsoft have all drawn criticism for their A.I. work with military and government agencies. Just this month, Google’s plan to create an A.I. ethics board ended disastrously when backlash about board members led to its dissolution. Missteps should be called out, especially when they’re made by such powerful corporations. But in an emergent field, mistakes also serve as lessons. And a new set of A.I. ethics guidelines from the European Commission is a good example of how trial and error will be a fundamental part of ethical A.I. development.
The guidelines, developed by 52 experts, contain seven requirements that A.I. systems should meet to be deemed trustworthy. What stands out about them for Charlotte Stix, a policy officer at the Leverhulme Center for the Future of Intelligence at Cambridge University, is that they’re designed to be carried out.
Unlike other A.I. ethics guidelines, they attempt to join ethical principles to firm recommendations, a step that has divided opinion among those working in the field. That’s why the European Commission hopes companies will adopt and test them between now and 2020, so that they can be improved.
Frank Buytendijk, a vice president in Gartner’s data and analytics group, said the guidelines sent a message to big tech companies that may have struggled with A.I. ethics in the past: “Here’s your chance to do the right thing.”
