A Blog by Jonathan Low


Feb 26, 2020

The Reason There Are Serious Problems With Europe's Vision For AI

It wants to safeguard its population from the deleterious effects of AI but also wants to be taken seriously as a player in the field.

The primary problem is that those two desires may well be mutually exclusive. JL

Javier Espinoza and Madhumita Murgia report in the Financial Times:

Europe has unveiled a set of strict rules for the development and use of AI, as it tries to make an ethical approach into a competitive advantage over China and the US. (But) EU red tape will strangle start-ups: all “high-risk” AI applications will be subject to a compulsory assessment before entering the market. The commission introduced obligations for data quality, and suggested European AI algorithms be based on European data. “European data is not unique or accurate (or) technically robust, (and) is not sufficiently representative.”
Europe has unveiled a set of strict rules and safeguards for the development and use of artificial intelligence, as it tries to make an ethical approach to the new technology into a competitive advantage over China and the US. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, the president of the European Commission.

But experts on both sides of the debate pointed to a range of problems with the new AI strategy, with some arguing that the rules will stifle innovation and others suggesting the framework should do more to protect the public from invasive technology such as facial recognition cameras. Here are four of the main issues that have caused concern:

1. EU red tape will strangle start-ups

The EU said that all “high-risk” AI applications will be subject to a compulsory assessment before entering the market. Artificial intelligence systems could also be subjected to liability and certification checks of the underlying algorithms and the data used in the development of the technology, under the new plans.

But the tech industry said the approach focuses too heavily on the risks of AI, and will send a “chilling” message to AI researchers and developers. “Europe should focus less on the potential harms of using AI” if it wants to lead the way, argued Guido Lobrano, vice-president of the ITI lobby group, which represents the likes of Apple, Google and Microsoft.

Christian Borggreen, vice-president of computer industry group CCIA Europe, said many applications could be seen as high-risk and face unnecessary hurdles. “For example, an AI application that detects the spreading of the coronavirus might have to wait months before it could be used in Europe,” he warned.

Others said the definition of “high-risk” is too broad and only large tech companies will be able to afford the cost of compliance. Eline Chivot, senior policy analyst at the Center for Data Innovation think-tank, said “poorly defined” categories would “deter or delay investment” for services, some of which are already restricted by the EU’s data privacy laws.

2. The need for ‘European’ AI

The commission’s white paper introduced new obligations for data quality, and suggested that European AI algorithms should be based on European data. “This raises two issues,” said Ms Chivot. “First, European data is not unique or necessarily highly accurate and technically robust. Second, European data is not sufficiently representative, and using it as a benchmark would be at odds with the objective of achieving fairness and diversity.”

The cost of retraining algorithms created elsewhere in the world on EU data may again be prohibitive for smaller companies, and could also drive away talent, others warned. Karina Stan, a lobbyist at the Developers Alliance, said: “What the EU should always have in mind is that the digital economy is global, and the inventors of tomorrow will go to where the opportunities are the best.”

3. How to accurately assess risk?

Some campaigners said that while the EU is correct to focus on high-risk sectors, such as healthcare, it is worryingly unconcerned about the spread of AI throughout the economy. “What I am specifically worried about is what about high-risk applications in low-risk sectors? For example, the use of AI systems by online employment firms like LinkedIn, which we know can sometimes structurally exclude women from seeing job postings,” said Corinne Cath, a digital anthropologist and PhD student at the Oxford Internet Institute, who focuses on the politics of AI governance.

She added that while the strategy looks closely at the private sector, it “largely excluded” the public sector from high-risk categories. “We know . . . that these AI systems can have really detrimental effects on the marginalised, so the fact that it was largely encouraging of these uses and [the risks] weren’t mentioned was really disappointing.”

4. The proposals were watered down

Earlier drafts of the EU’s strategy suggested that technologies that pose a risk to privacy, in particular the use of facial recognition in public places, should be carefully assessed and even banned until more is known about their usefulness and their impact on society. But the authors of the strategy toned down these recommendations, even as the technologies become widely commercially available.

“In earlier versions, it was more daring. There were more explicit examples in there of how Europe could really make sure the use of AI systems would be according to European values, like the face recognition moratorium. I feel they ceded a lot of ground in this paper both to industry and member states,” said Ms Cath.

But not everyone thinks the AI plans are lacking. Andreas Schwab, a German MEP and longtime Google critic, said citizens will welcome the new EU proposals. “The principle is that in Europe it is still the state that decides and not the big companies. Most Europeans will be happy about this.”

The new proposals are now undergoing a 12-week open consultation process.
