09.11.2023

"KI-Flash": Outlook on the legal requirements of the European AI Regulation

Having already highlighted the data protection requirements for training an AI in our last AI Flash, we would like to continue providing you with legal insights at regular intervals. As time is a scarce commodity, our "AI Flash" gets straight to the point and summarizes the legal challenges briefly and concisely:

Today's topic: Outlook on the legal requirements of the European AI Regulation

In this article, we want to provide an outlook on the legal requirements of the European AI Regulation ("AI Regulation").

Current status of the legislation

The AI Regulation, which is based on a proposal by the EU Commission dated April 21, 2021, is currently being negotiated in the so-called trilogue procedure. This means that the text of the AI Regulation has not yet been finalized, and it is currently unclear when the final text will be adopted by the relevant bodies. According to current press reports, the trilogue procedure is expected to be completed at the end of 2023 / beginning of 2024, meaning that the AI Regulation could be adopted in 2024 and then published in the Official Journal of the EU. Publication is likely to be followed by a transitional period before the AI Regulation ultimately becomes directly applicable law with which companies must comply.

Risk-based approach as a central element

A central element will be the risk-based approach to the use of AI applications. AI models that could pose a significant risk to people (e.g. to health, safety and/or fundamental rights) are to be subject to strict requirements, in particular with regard to risk management and data governance.

To make this risk-based approach more manageable in practice, the EU Commission may publish a list containing examples of high-risk AI applications as well as examples of lower-risk AI applications.

Transparency and provision of information for users

Another important point is transparency for users and the provision of meaningful information. High-risk AI applications should be designed and developed in such a way that their operation is sufficiently transparent: an average user should be able to interpret and use the results of the AI application (i.e. the output) appropriately. This also means that the information provided must be understandable not only for people with specialist knowledge. Instructions for use should be provided for this purpose.

Supervision by a human being

High-risk AI applications must, at the very least, be developed in such a way that a human can effectively supervise them and, if necessary, intervene in the processing. This requirement is roughly comparable to Article 22 (3) GDPR, under which a human must, in certain cases, be able to review the result of automated decision-making and, if necessary, make a different decision.

Monitoring and prohibitions

It is not yet clear which authority or authorities will be responsible for monitoring AI applications; most likely, the individual member states will adopt differing arrangements. It is also unclear whether the AI Regulation will contain specific prohibitions, i.e. whether the legislator ultimately wants to ban certain AI applications from the outset.

Current market developments are faster than the trilogue procedure

Overall, AI applications already available on the market are developing rapidly while the trilogue procedure is still ongoing. The AI Regulation may therefore contain provisions that apply to all providers of AI applications and aim to ensure that users can use an AI application with (greater) legal certainty. This would allow the AI Regulation to respond more flexibly to current and future developments than a rigid general classification of AI applications would.

Final assessment

We assume that the AI Regulation will be adopted. The stakeholders will certainly still work out details in the trilogue. However, we also assume that requirements for transparency and the legality of the use of an AI application, at least comparable to those in the EU Commission's original draft, will ultimately become applicable law, probably in 2025 / 2026.

Our next AI Flash will go into more detail on the risk-based approach of the European AI Regulation.

Authors

Marius Drabiniok, Associate

Dr. Oliver Hornung, Partner

Dr. Stefan Peintinger, Partner