EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation

A group of 14 EU nations has set out its position on the future regulation of Artificial Intelligence, pressing the European Commission to adopt a 'soft law approach.'


Fourteen EU countries have set out their position on the future regulation of Artificial Intelligence, urging the European Commission to adopt a “soft law approach”.

In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers.

“We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices as well as robust standardisation process as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.

“Soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology,” it continued.

Along with Denmark, the paper has also been signed by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.

High-risk Artificial Intelligence to be 'certified, tested and controlled,' Commission says

Artificial Intelligence technologies carrying a high risk of abuse that could potentially lead to an erosion of fundamental rights will be subject to a series of new requirements, the European Commission announced on Wednesday (19 February).

High-risk AI

The 14-country-strong call comes after the Commission published its Artificial Intelligence White Paper in February, which set out plans for a future regulatory framework for next-generation technologies.

Of particular note in the executive’s plans, a series of ‘high-risk’ technologies was earmarked for future oversight, including those used in ‘critical sectors’ and those deemed to be of ‘critical use.’

Those under the critical sectors remit include healthcare, transport, the police, recruitment, and the legal system, while technologies of critical use include those posing a risk of death, damage or injury, or carrying legal ramifications. Sanctions could be imposed should certain technologies fail to meet the new requirements.

At the time, the Commission had also floated the idea of introducing a ‘voluntary labelling scheme’ for AI technology not considered to be particularly high-risk.

This option, in particular, appears palatable to the 14 advocates of a lighter-touch approach to AI regulation, who would, however, prefer to see a voluntary labelling scheme applied to next-generation AI technologies across the board.

Such an instrument, the policy paper states, would “make it visible for potential users – such as citizens, businesses as well as public administrations – which applications are based on secure, responsible and ethical AI and data and therefore which applications to trust.”

Germany calls for tightened AI regulation at EU level

Four months after the European Commission presented its ‘white paper’ on Artificial Intelligence (AI), the German government said it broadly agrees with Brussels but sees a need to tighten up on security. The government is particularly concerned by the fact that only AI applications with “high risk” have to meet special requirements. EURACTIV Germany reports.

Potential German opposition 

However, the softer policy approach advocated by the 14 nations could come into conflict with positions adopted by other EU countries.

Germany, which currently holds the rotating presidency of the EU Council, is concerned that the Commission only wants to apply restrictions to AI applications deemed high-risk, and would prefer a much broader scope of technologies to be subject to new rules.

Berlin is also concerned that the Commission’s current plans would lead to a situation in which “certain high-risk uses would not be covered from the outset if they did not fall under certain sectors.”

Moreover, Germany’s June position also made clear reference to the risks to civil liberties posed by remote biometric identification technology, noting how it could encroach on fundamental rights.

Owing to concerns raised by German civil society, Interior Minister Horst Seehofer has previously had to rein in Germany’s plans to roll out facial recognition systems across the country, due to fears over the right to privacy and potential breaches of the EU’s General Data Protection Regulation, Article 4(14) of which defines biometric data.

Biometric identification

The position paper makes no mention of the use of AI in biometric identification, nor does it address the thorny issue of regulating the use of facial recognition technologies in public spaces – an area in which the Commission had previously been weighing up whether or not to introduce new rules.

For its part, however, the EU executive has not entirely ruled out the option of introducing rules for facial recognition technology.

Speaking to MEPs on the European Parliament’s Internal Market Committee in early September, Kilian Gross of the Commission’s DG Connect said that all options were still on the table.

Responding to a question from Pirate MEP Marcel Kolaja on whether a potential ban is still in the offing, Gross said that “we will not exclude any option, we will look into all options and will carefully analyse existing legislation.”

A follow-up to the Commission’s February White Paper on Artificial Intelligence, in which new measures could potentially be introduced, is currently slated for early 2021.

Commission will ‘not exclude’ potential ban on facial recognition technology

The European Commission has not ruled out a future ban on the use of facial recognition technology in Europe, as the EU executive mulls the findings of a recent public consultation on Artificial Intelligence.

[Edited by Sam Morgan]
