Europe eyes strict rules for artificial intelligence

Authored by politico.eu and submitted by lughnasadh

The rules carve out an exception allowing authorities to use the tech if they're fighting serious crime | Image via iStock

No HAL 9000s or Ultrons on this continent, thank you very much.

The European Union wants to avoid the worst of what artificial intelligence can do — think creepy facial recognition tech and many, many Black Mirror episodes — while still trying to boost its potential for the economy in general.

According to a draft of its upcoming rules, obtained by POLITICO, the European Commission would ban certain uses of "high-risk" artificial intelligence systems altogether, and limit others from entering the bloc if they don't meet its standards. Companies that don't comply could be fined up to €20 million or 4 percent of their turnover. The Commission will unveil its final regulation on April 21.

The rules are the first of their kind to regulate artificial intelligence, and the EU is keen to highlight its unique approach. It doesn't want to leave powerful tech companies to their own devices like the U.S., nor does it want to go the way of China in harnessing the tech to fashion a surveillance state. Instead, the bloc says it wants a "human-centric" approach that boosts the tech while keeping it from threatening the bloc's strict privacy laws.

That means AI systems that streamline manufacturing, model climate change, or make the energy grid more efficient would be welcome. But many technologies in use in Europe today, such as algorithms used to scan CVs, make creditworthiness assessments, decide on social security benefits or asylum and visa applications, or help judges make decisions, would be labeled "high risk" and subject to extra scrutiny.

Social scoring systems, such as those launched in China that track the trustworthiness of people and businesses, are classified as "contravening the Union values" and are going to be banned.

The proposal also wants to prohibit AI systems that cause harm to people by manipulating their behavior, opinions or decisions; that exploit or target people's vulnerabilities; or that are used for mass surveillance.

But the rules carve out an exception allowing authorities to use the tech if they're fighting serious crime. The use of facial recognition technology in public places, for example, could be allowed if its use is limited in time and geography. The Commission said it would allow for exceptional cases in which law enforcement officers could use facial recognition technology from CCTV cameras to find terrorists, for example.

The exception is likely designed to appease countries like France, which is keen to integrate AI into its security apparatus, but is opposed by privacy hawks and digital rights activists who have lobbied hard for these uses to be banned outright.

“Giving discretion to national authorities to decide which use cases to permit or not simply recreates the loopholes and grey areas that we already have under current legislation and which have led to widespread harm and abuse,” said Ella Jakubowska of digital rights group EDRi.

The EU is also keen to avoid issues of racial and gender bias, which have plagued the development of the technology from its inception. One of the Commission's requirements in the draft is that data sets do not "incorporate any intentional or unintentional biases" which may lead to discrimination.

The draft also proposes creating a European Artificial Intelligence Board, comprising one representative per EU country, the EU's data protection authority, and a European Commission representative. The board will supervise the law’s application and share best practices.

Industry groups are likely to take issue with the stringent rules, which they say would make the EU market less appealing and encourage European innovators to launch elsewhere.

The strict rules could also put the EU in the crosshairs of its allies. The U.S., which is far more concerned about countering China, and whose companies are likely to be subject to the regulation, would likely have preferred a looser bill. Surveillance remains one of Europe's key sticking points in transatlantic collaboration with the U.S.

In an interview with POLITICO in March, Eric Schmidt, Google's former chief and chair of the U.S. National Security Commission on Artificial Intelligence (NSCAI), said Europe's strategy won't be successful, as it is "simply not big enough" to compete.

"Europe will need to partner with the United States on these key platforms,” Schmidt said, referring to American big tech companies which dominate the development of AI technologies.

PlankLengthIsNull on April 15th, 2021 at 09:15 UTC »

How long until Facebook threatens to stop offering its service in the EU because this will hamper its ability to serve you personalized ads built from your data?

NoxFortuna on April 15th, 2021 at 08:53 UTC »

Perhaps Black Mirror should remain television, and not reality.

There is apparently a fine line to tread between the progress of technological business problem solving and the exploitation of human rights.

Utoko on April 15th, 2021 at 07:36 UTC »

"AI systems that manipulate human behavior"

Isn't that already happening at Facebook and co.? Using AI to figure out what to show people, and how to get them to spend the highest possible amount of time on Facebook.

Also in the ad space: figuring out what type of ad to show people on Facebook manipulates them into buying the thing.

Or Amazon using AI for pricing and product placement.