
European approach to Artificial Intelligence by Margrethe VESTAGER

Remarks by the Executive Vice-President Margrethe VESTAGER, Commissioner for Competition


We adopt a landmark proposal of this Commission: our first-ever legal framework on Artificial Intelligence. It is part of a wider package, which also includes a new coordinated plan with Member States and a new Machinery Regulation. This package presents our European approach to Artificial Intelligence.

It is the result of three years of work. In 2018, we launched the European AI Strategy. We brought together an “AI Alliance” of 4,000 stakeholders to debate the societal implications of Artificial Intelligence. And our High-Level Expert Group on AI developed Guidelines for Trustworthy Artificial Intelligence. This led to the publication of our White Paper on Artificial Intelligence in February 2020. The White Paper triggered a wide public debate and received more than 1,200 direct contributions from all parts of our society, from large and small tech businesses to NGOs and universities.

Today, we aim to make Europe world-class in the development and use of secure, trustworthy and human-centred Artificial Intelligence.

As we put it in our White Paper on AI, an ecosystem of trust goes hand in hand with an ecosystem of excellence. On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.

Already today, Artificial Intelligence is bringing great benefits to our lives, and we’ve only seen the beginning of it. As we speak, it is saving lives thanks to better diagnoses. It is helping to deliver on our Green Deal by enabling farmers to optimise their resources. It is even supporting engineers in rebuilding the Notre-Dame cathedral in Paris.

Those examples all illustrate what we want AI in Europe to be: a force for progress. But to make that possible, we need to build trust in AI systems. We can only reap the full benefits of AI’s societal and economic potential if we trust we can mitigate the associated risks.

To do so, our proposed legal framework doesn’t look at AI technology itself. Instead, it looks at how AI is used, and what for.

It takes a proportionate, risk-based approach, grounded in one simple logic: the higher the risk that a specific use of AI poses to our lives, the stricter the rules.

On that basis, we classify the use of AI into four different categories.

As in a pyramid, at the broad base we find the great majority of AI applications, which represent minimal or no risk at all. For instance, filters that recognise spam messages and block them from entering our inbox. Or systems used in a factory to minimise waste and optimise the use of resources. Our legal framework allows the free use of such applications, without any restrictions on top of the legal rules that already exist to protect consumers.

A bit higher up the pyramid are the limited-risk uses of AI, such as a chatbot that helps us book a ticket or find the closest store to buy something we want. Those are also allowed but subject to transparency obligations. The aim is to make it crystal-clear to users that they are interacting with a machine.

Getting closer to the top of the pyramid, we find the “high-risk” uses of AI. This is the main focus of our framework. These uses are considered high-risk because they interfere with important aspects of our lives. We are, for example, talking about Artificial Intelligence systems that filter candidates’ CVs for education and job applications. Or systems that will assess whether someone is creditworthy enough to get a mortgage from the bank. Or software used in self-driving cars or medical devices, which might bring new risks to our safety and health.

Those AI systems will be subject to a new set of five strict obligations because they could potentially have a huge impact on our lives.

  • 1/ AI providers are required to feed their systems with high-quality data, to make sure the results do not come out biased or discriminatory.
  • 2/ They also need to provide detailed documentation about how their AI systems work, so that authorities can assess their compliance.
  • 3/ They must share substantial information with users to help them understand and properly use AI systems.
  • 4/ They have to ensure an appropriate level of human oversight, both in the design and in the implementation of the Artificial Intelligence.
  • 5/ And finally, they must respect the highest standards of cybersecurity and accuracy.

At the top of the pyramid, we find those – limited – uses of AI that we prohibit altogether, because we simply consider them unacceptable. These are AI systems that use subliminal techniques to cause physical or psychological harm to someone. For example, a toy that uses voice assistance to manipulate a child into doing something dangerous. Such uses have no place in Europe. We therefore propose to ban them.

And the same prohibition applies to AI applications that go against our fundamental values. For instance, a social scoring system that would rank people based on their social behaviour. A citizen who violated traffic rules or paid rent late would get a poor social score, which would then influence how the authorities interact with them, or how banks treat their credit requests.
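For readers who think in code, here is a purely illustrative sketch of the four-tier logic described above. The tier names, the obligation strings and the obligations_for helper are our own shorthand for the speech’s wording, not the text of the proposal:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative paraphrase of the proposal's four categories."""
    MINIMAL = 1       # e.g. spam filters: free use under existing consumer law
    LIMITED = 2       # e.g. chatbots: transparency obligations apply
    HIGH = 3          # e.g. CV screening, credit scoring: five strict obligations
    UNACCEPTABLE = 4  # e.g. social scoring, subliminal manipulation: banned

# Mapping from tier to obligations, condensed from the speech (not legal text).
OBLIGATIONS = {
    RiskTier.MINIMAL: ["existing consumer-protection rules only"],
    RiskTier.LIMITED: ["disclose to users that they are interacting with a machine"],
    RiskTier.HIGH: [
        "high-quality training data",
        "detailed documentation for authorities",
        "substantial information for users",
        "appropriate human oversight",
        "highest standards of cybersecurity and accuracy",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations attached to a risk tier in this sketch."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # The stricter-with-risk logic: higher tiers carry heavier obligations.
    for tier in RiskTier:
        print(tier.name, "->", obligations_for(tier))
```

The point of the sketch is simply that the framework attaches obligations to the use of a system, not to the AI technology itself, and that the obligations grow with the risk tier.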

It only makes sense to propose rules that can actually be enforced. So the proposal also contains provisions on how compliance with these requirements will be ensured.

National authorities will be responsible for assessing whether AI systems meet their obligations, each within its own area of competence. It will be for Member States to identify which national authority is best placed. For example, obligations concerning privacy would be assessed by national data protection authorities, whereas unsafe products would fall within the remit of market surveillance authorities, which would decide whether to remove such products from the market.

Sanctions will apply in cases of persistent non-compliance. For instance, an AI provider that does not comply with the prohibition of an Artificial Intelligence practice can be fined up to 6 per cent of its yearly global turnover.

One specific use of AI fits into both the high-risk and the prohibited categories: remote biometric identification. Biometric identification can be used for many purposes. Some of them are not problematic, for instance when it is used at border controls by customs authorities, or whenever we are asked to sign in with our fingerprint or by facial recognition.

In our proposal, we focus on remote biometric identification, where many people are being screened simultaneously. We treat any use of it as highly risky from a fundamental rights point of view. That’s why we subject remote biometric identification to even stricter rules than other high-risk use cases.

But there is one situation where that may not be enough: when remote biometric identification is used in real time by law enforcement authorities in public places. There is no room for mass surveillance in our society. That’s why, in our proposal, the use of remote biometric identification in public places is prohibited in principle. We propose very narrow exceptions that are strictly defined, limited and regulated. Those are extreme cases, such as when police authorities need it in the search for a missing child.

Altogether, this legal framework shapes the trust we must build if we want people and businesses to embrace AI solutions.

As defined in our strategy on Europe’s digital future, an ecosystem of trust goes together with an ecosystem of excellence. For Europe to become a global leader in trustworthy AI, we need to give businesses access to the best conditions to build advanced AI systems.

This is the idea behind our revised coordinated plan on Artificial Intelligence. It coordinates investments across Member States to ensure that money from the Digital Europe and Horizon Europe programmes is spent where we need it most. For instance, in high-performance computing, or to create facilities to test and improve AI systems.

We also identify high-impact sectors where the European Commission, Member States and private actors join their efforts to accelerate the development of critical AI. The green economy is one example. Today, we are working to improve smart sensors that help farmers produce more food, more sustainably, while saving money at the same time. They can do so by growing the right crops in the right place and at the right time, using dedicated algorithms.

Finally, a new Regulation for Machinery Products will be put forward to adapt our safety rules to this new generation of AI products and services.

To conclude – as always when we propose regulation, time matters. Today is no exception. Europe may not have been the leader in the last wave of digitalisation, but it has everything it takes to lead the next one. That wave will be built on industrial data. And it will bring Artificial Intelligence into industries like manufacturing, clean energy and healthcare – where Europe excels. So we must make sure to get it right, as soon as possible, so that no matter how fast technologies evolve, AI is always aligned with who we are and what we stand for.
