A proposal to improve the European AI Act

***

Europe is getting ready to regulate artificial intelligence. The European Commission presented its proposal for regulation in April 2021 and the European Parliament published a first interim report in April 2022. The discussions are ongoing, and the stakes are enormous. There is no doubt that Europe will be the first continent to regulate AI. But we still need to find an approach that responds to the problems created by this technology while preserving innovation in Europe. In this video, I want first to explain the logic of the AI Act, and second, explore how to improve it.

The logic

As it stands, there is every reason to believe the AI Act will resemble the General Data Protection Regulation (“GDPR”). The GDPR has been the subject of numerous empirical studies that have found a negative impact on innovation in Europe, particularly for startups. The latest study, “GDPR and the Lost Generation of Innovative Apps” (NBER, May 2022), estimates that one-third of mobile apps exited the market when the GDPR entered into force.

Like the GDPR, the AI Act will apply to a large number of companies in Europe. Two overinclusive conditions bring a company within the scope of the regulation. First, the company must use an AI system as defined by the European Commission. That definition is very, very broad: AI includes machine learning systems, but also expert systems, “logic and knowledge-based approaches,” and statistical calculations. Second, the company must operate in a risky sector as defined in the regulation. The riskier the sector, the heavier the regulation. For example, companies operating in “high-risk” areas such as health or education will have to submit their AI system to a national agency for validation before the system is released, whenever it is modified, and every five years.

A proposal

Now, how can the AI Act be improved so that it addresses these issues while preserving European innovation? I propose that the AI Act combine the sectoral approach the European Commission is promoting with a technical approach that looks at how AI works.

Here’s why. Not all AI systems present the same degree of risk. Supervised learning systems allow tasks to be automated while keeping the result under control. A company can train a supervised system to differentiate between cats and dogs. Once the training is completed, the system will assign every new picture it is shown to one of those two predefined categories. So… supervised systems are very unlikely to drift.
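To make this concrete, here is a minimal sketch in Python. It is not from the article: the features (ear length, weight) and the choice of a logistic regression are my own illustrative assumptions. What it shows is that the human-chosen labels fix the categories in advance, so the trained model can only ever answer “cat” or “dog”.

```python
# Minimal supervised-learning sketch (hypothetical toy data).
# The labels below are chosen by humans, so the set of possible
# outputs -- "cat" or "dog" -- is fixed before training begins.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [ear_length_cm, weight_kg] per animal.
X_train = [
    [4.0, 4.5], [3.5, 5.0], [4.2, 3.8],      # cats
    [9.0, 25.0], [11.0, 30.0], [8.5, 22.0],  # dogs
]
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = LogisticRegression()
model.fit(X_train, y_train)

# Every new example is forced into one of the predefined categories;
# the system cannot invent a third one.
print(model.predict([[4.1, 4.0]]))    # -> ['cat']
print(model.predict([[10.0, 27.0]]))  # -> ['dog']
```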

The risk is much greater for unsupervised learning systems, which let the machine sort the data into categories that are not predefined. To come back to my example, such a system could classify photos of cats and dogs according to fur color, size, and so on. These classifications can be useful, but they can also be inappropriate and lead to all kinds of discrimination. Unsupervised learning is therefore in itself much riskier than supervised learning.
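By contrast, here is a hedged sketch of the unsupervised case under the same toy assumptions (the features and values are again invented for illustration). No labels are given, so k-means clustering invents its own grouping; with these numbers it splits the animals by body size rather than by species.

```python
# Minimal unsupervised-learning sketch (hypothetical toy data).
# No labels are provided: the algorithm must invent its own categories.
from sklearn.cluster import KMeans

# Hypothetical unlabelled data: [fur_darkness (0 to 1), body_size_kg].
X = [
    [0.9, 4.5], [0.1, 5.0],    # two cats: one dark, one light
    [0.9, 25.0], [0.1, 30.0],  # two dogs: one dark, one light
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The discovered grouping follows body size, not the cat/dog split a
# human would expect -- the categories were never predefined.
print(kmeans.labels_)  # e.g. [0 0 1 1]
```

Because the system draws the dividing lines itself, it can just as easily land on a sensitive attribute (here fur color; in other settings, skin color or gender) as on a harmless one.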

That explains why I propose European institutions combine their sectoral approach with a more technical approach. More specifically, I propose that only companies that operate in a high-risk sector and use a “creative” AI system (one that, like unsupervised learning, generates categories that are not predefined) should be subject to the strictest obligations. There is no need to put such a heavy burden on other companies. By adopting this double test, AI risks would be reduced just as effectively, and innovation would be better safeguarded.
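As a rough illustration of how this double test would operate, here is a hypothetical sketch. The sector list, tier names, and function are my own stand-ins, not drawn from the AI Act’s actual annexes:

```python
# Hypothetical sketch of the proposed double test: the strictest
# obligations apply only when BOTH conditions are met.
HIGH_RISK_SECTORS = {"health", "education", "law enforcement"}

def regulatory_tier(sector: str, creative_system: bool) -> str:
    """Return the obligations tier under the proposed combined approach."""
    if sector in HIGH_RISK_SECTORS and creative_system:
        return "strictest obligations (ex ante validation and re-validation)"
    if sector in HIGH_RISK_SECTORS:
        return "lighter obligations (high-risk sector, controllable system)"
    return "baseline obligations"

print(regulatory_tier("health", True))   # strictest tier
print(regulatory_tier("health", False))  # lighter tier
print(regulatory_tier("retail", True))   # baseline tier
```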

Thibault Schrepel
@ProfSchrepel

Suggested citation:
Thibault Schrepel, A proposal to improve the European AI Act, Network Law Review (May 30, 2022)
