Recently, the European Union issued its long-anticipated white paper on artificial intelligence, a precursor to new legislation and regulations governing the technology that are likely to have global consequences. That’s because, as with Europe’s privacy law, GDPR, any new AI rules are likely to apply to anyone who sells to an EU customer, processes the data of an EU citizen, or has a European employee. And, as with GDPR, any rules Europe enacts may serve as a model for other nations—or even individual US states—looking to regulate AI.
The paper says that the 27-nation bloc should have strict legal requirements for “high-risk” uses of the technology.
However, experts on both sides of the debate pointed to a range of problems with the new AI strategy, with some arguing that the rules will stifle innovation and others suggesting the framework should do more to protect the public from invasive technology such as facial recognition cameras.
According to a Financial Times report, here are the main issues that have caused concern:
The EU said that all “high-risk” AI applications will be subject to a compulsory assessment before entering the market. Under the new plans, artificial intelligence systems could also be subjected to liability and certification checks of the underlying algorithms and the data used in the development of the technology. But the tech industry said the approach focuses too heavily on the risks of AI and will send a “chilling” message to AI researchers and developers. Guido Lobrano, vice-president of the ITI lobby group, which represents the likes of Apple, Google and Microsoft, argued that “Europe should focus less on the potential harms of using AI” if it wants to lead the way.
Christian Borggreen, vice-president of computer industry group CCIA Europe, said many applications could be seen as high-risk and face unnecessary hurdles. “For example, an AI application that detects the spreading of the coronavirus might have to wait months before it could be used in Europe,” he warned.
Others said the definition of “high-risk” is too broad and only large tech companies will be able to afford the cost of compliance.
Eline Chivot, a senior policy analyst at the Center for Data Innovation think-tank, said “poorly defined” categories would “deter or delay investment” in services, some of which are already restricted by the EU’s data privacy laws.
Moreover, the commission’s white paper introduced new obligations for data quality and suggested that European AI algorithms should be based on European data.
“This raises two issues,” said Ms Chivot. “First, European data is not unique or necessarily highly accurate and technically robust. Second, European data is not sufficiently representative, and using it as a benchmark would be at odds with the objective of achieving fairness and diversity.”
The cost of retraining algorithms created elsewhere in the world on EU data may again be prohibitive for smaller companies, and could also drive away talent, others warned.
Karina Stan, a lobbyist at the Developers Alliance, said: “What the EU should always have in mind is that the digital economy is global, and the inventors of tomorrow will go to where the opportunities are the best.”
Furthermore, some campaigners said that while the EU is correct to focus on high-risk sectors, such as healthcare, it is worryingly unconcerned about the spread of AI throughout the economy.
“What I am specifically worried about is what about high-risk applications in low-risk sectors? For example, the use of AI systems by online employment firms like LinkedIn, which we know can sometimes structurally exclude women from seeing job postings,” said Corinne Cath, a digital anthropologist and Ph.D. student at the Oxford Internet Institute, who focuses on the politics of AI governance.
“This question of defining high-risk applications in low-risk sectors will be responded to by many people.”
She added that while the strategy looks closely at the private sector, it “largely excluded” the public sector from high-risk categories. “We know… that these AI systems can have really detrimental effects on the marginalized, so the fact that it was largely encouraging of these uses and [the risks] weren’t mentioned was really disappointing.”
In addition, earlier drafts of the EU’s strategy suggested technologies that pose a risk to privacy, in particular the use of facial recognition in public places, should be carefully assessed and even banned until more is known about their usefulness and their impact on society.
But the authors of the strategy toned down these recommendations, even as the technologies become widely available commercially.
“In earlier versions, it was more daring. There were more explicit examples in there of how Europe could really make sure the use of AI systems would be according to European values, like the face recognition moratorium. I feel they ceded a lot of ground in this paper both to industry and member states,” said Ms Cath.
But not everyone thinks the AI plans are lacking. Andreas Schwab, a German MEP and longtime Google critic, said citizens will welcome the new EU proposals. “The principle is that in Europe it is still the state that decides and not the big companies.”