04/24/2025

AI Flash: Consultation of the AI Office on the preparation of guidelines for GPAIM

Following on from our last AI Flash, in which we reported on legal issues relating to the use of AI tools, we would like to continue providing you with legal updates at regular intervals.

 

Today's topic: Consultation of the AI Office on the preparation of guidelines for GPAIM

 

On 22 April 2025, the AI Office of the European Commission launched a consultation on the preparation of guidelines for GPAIM (see the official press release here). The consultation is based on the provisions of Art. 51 et seq. AI Act, which govern general-purpose AI models (GPAIM) and will apply from 2 August 2025.

The aim of the consultation is to involve stakeholders with relevant specialist knowledge and expertise (e.g. industry associations and GPAIM providers) in the process of developing the guidelines. The consultation will run until 22 May 2025, while publication of the finalised guidelines is planned for May or June 2025. The guidelines are intended to supplement the code of practice (see Art. 56 AI Act), which is also currently under consultation, and to provide further assistance for practitioners.

Although the AI Office's current working documents are naturally not yet final, and a binding interpretation of the AI Act remains the prerogative of the European Court of Justice (ECJ), some of the AI Office's legal assessments can already be derived from them; these are presented in this AI Flash.

 

When is an AI model a GPAIM?

Whether an AI model is to be considered a GPAIM depends primarily on whether it ‘displays significant generality and is capable of competently performing a wide range of distinct tasks’. Clarifying these requirements is of fundamental importance, as only AI models categorised as GPAIM are subject to the corresponding requirements of the AI Act.

The AI Office currently assumes that an AI model that can generate text and/or images is to be regarded as a GPAIM if its training compute exceeds 10^22 FLOPs (floating-point operations). According to Art. 3 No. 67 AI Act, a floating-point operation is

‘any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base;’

AI models that generate neither text nor images can be categorised as GPAIM if they have a degree of generality comparable to that of the text- and image-generating models primarily considered by the AI Office.

The AI Office's working documents contain various calculation options and associated examples that can be used to estimate the number of FLOPs. In particular, a distinction is made between a hardware-based approach and an architecture-based approach. In principle, providers of AI models should be able to choose freely between the two calculation methods, although further requirements apply to the manner and timing of the calculation.
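
To give a feel for the two approaches, the following sketch estimates training compute and compares it against the 10^22 FLOPs presumption threshold. The formulas are common rules of thumb (roughly 6 FLOPs per parameter per training token for the architecture-based view; installed peak compute times utilisation for the hardware-based view), not the AI Office's official method, and all figures are hypothetical.

```python
# Illustrative sketch only: the exact formulas are set out in the AI Office's
# working documents; the two estimates below are common rules of thumb and
# all figures are hypothetical.

GPAIM_THRESHOLD_FLOPS = 1e22  # presumption threshold discussed by the AI Office


def architecture_based_estimate(num_parameters: float, num_training_tokens: float) -> float:
    """Rough architecture-based estimate: ~6 FLOPs per parameter per token,
    a widely used approximation for training dense transformer models."""
    return 6.0 * num_parameters * num_training_tokens


def hardware_based_estimate(num_gpus: int, training_hours: float,
                            peak_flops_per_gpu: float, utilisation: float) -> float:
    """Rough hardware-based estimate: installed peak compute multiplied by
    the fraction of that peak actually achieved during training."""
    return num_gpus * training_hours * 3600.0 * peak_flops_per_gpu * utilisation


# Hypothetical model: 7 billion parameters trained on 2 trillion tokens.
arch = architecture_based_estimate(7e9, 2e12)      # ~8.4e22 FLOPs
# Hypothetical cluster: 256 GPUs at ~1e15 FLOP/s peak, 40% utilisation, ~10 days.
hw = hardware_based_estimate(256, 230, 1e15, 0.4)  # ~8.5e22 FLOPs

for name, flops in [("architecture-based", arch), ("hardware-based", hw)]:
    print(f"{name}: {flops:.2e} FLOPs -> GPAIM presumed: {flops > GPAIM_THRESHOLD_FLOPS}")
```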

It is important to note that the presumption based on the above threshold is expressly rebuttable. If the training compute reaches the threshold, it is initially assumed that the AI model has sufficient generality to be categorised as a GPAIM; however, this only applies if there are no indications to the contrary. According to the AI Office, whether an AI model has sufficient generality and is able to perform a wide range of distinct tasks competently depends not only on the training compute, but also on the modality and other characteristics of the data used for training. For example, an AI model that is only suitable for transcribing speech should, according to the AI Office, not be considered a GPAIM even if its training compute reaches the threshold.

 

Differentiation between AI model and model version

Since, according to recital 97 AI Act, GPAIMs can be ‘further modified or fine-tuned into new models’, the question arises as to where exactly the boundary to the development of a (new) independent GPAIM lies, particularly in the case of fine-tuning. This question has already been the subject of numerous discussions, with various criteria proposed for drawing the line.

The AI Office currently assumes that changes to an AI model only constitute an independent development if they require more than one third of the compute needed to categorise a model as a GPAIM. The compute used for fine-tuning would therefore have to exceed roughly 3 * 10^21 FLOPs (one third of 10^22 FLOPs) to justify classifying the modified AI model as a (new) GPAIM. Modifications below this threshold should instead be categorised merely as a new version of the existing model.
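
Expressed as a simple decision rule, the proposed boundary could be sketched as follows; the thresholds are those cited above, the fine-tuning figures are hypothetical, and this is not an official calculation method.

```python
# Sketch of the one-third rule described in the AI Office's working documents;
# thresholds as cited above, inputs hypothetical.

GPAIM_THRESHOLD_FLOPS = 1e22
NEW_MODEL_THRESHOLD_FLOPS = GPAIM_THRESHOLD_FLOPS / 3  # roughly 3e21 FLOPs


def classify_modification(fine_tuning_flops: float) -> str:
    """Classify a modification of an existing GPAIM under the proposed rule."""
    if fine_tuning_flops > NEW_MODEL_THRESHOLD_FLOPS:
        return "independent development (new GPAIM)"
    return "new version of the existing model"


print(classify_modification(5e21))  # -> independent development (new GPAIM)
print(classify_modification(1e21))  # -> new version of the existing model
```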

Whether a change constitutes an independent development of a GPAIM or merely the creation of a new model version also plays a decisive role in determining the relevant obligations. According to recital 109 AI Act, ‘the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation.’

The AI Office's approach to drawing the boundary is very ‘technical’, but it is consistent. Its working documents acknowledge that training compute is only an imperfect indicator for identifying GPAIM, but that it currently offers the greatest degree of legal certainty. They also expressly note that the thresholds used and the way they are calculated may need to be adjusted in future.

 

Who is the provider of the GPAIM?

From a practical point of view, the key question is who qualifies as a provider of a GPAIM and must therefore implement the obligations under Art. 51 et seq. AI Act.

A company is only to be regarded as a provider of a GPAIM if it places the GPAIM on the market. According to Art. 3 No. 9 AI Act, this means the first making available of the GPAIM on the Union market, whereby the GPAIM must be ‘supplied’ in return for payment or free of charge in the course of a commercial activity. Placing on the market therefore primarily concerns the provision of the GPAIM to external third parties - from the provider's perspective - so that purely internal use of AI models is, at least primarily, not covered. Recital 97 AI Act, however, states verbatim:

‘This Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market. When the provider of a general-purpose AI model integrates an own model into its own AI system that is made available on the market or put into service, that model should be considered to be placed on the market and, therefore, the obligations in this Regulation for models should continue to apply in addition to those for AI systems. The obligations laid down for models should in any case not apply when an own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected. Considering their potential significantly negative effects, the general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation.’

This system of exceptions and counter-exceptions must therefore be examined in each individual case. Only in this way can it be determined with certainty whether provider status may also arise from purely internal use of a GPAIM. The AI Office's current working papers do not yet contain details on this point, which is why further developments should be monitored.

However, the AI Office has already developed a number of examples where it should be assumed that the GPAIM has been placed on the market:

  • Provision of the GPAIM via a programming library
  • Provision of the GPAIM via a programming interface (API)
  • Provision of the GPAIM for direct download
  • Provision of a physical copy of the GPAIM or upload of the GPAIM to a third party's own infrastructure
  • Integration of the GPAIM into a chatbot that can be accessed on a public website or in an app
  • Integration of the GPAIM into a product or service offered on the market

 

Exceptions for open source

Recital 102 AI Act states that ‘The providers of general-purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general-purpose AI models, unless they can be considered to present a systemic risk’. The AI Act therefore provides for exemptions for certain providers of GPAIM - which do not pose a systemic risk - when determining the relevant obligations.

According to the AI Office, GPAIM providers must fulfil the following conditions in order to benefit from exemptions:

  • The GPAIM is published under a free and open source licence that allows access, use, modification and distribution of the AI model;
  • The parameters, including the weights, the information on the model architecture and the information on the use of the AI model are made publicly available;
  • The GPAIM does not pose a systemic risk.

The AI Office working papers already contain further explanations of all the requirements mentioned.

 

Importance of codes of practice and the role of the AI Office

In its working paper, the AI Office also briefly discusses the importance of codes of practice and its own role as a supervisory authority.

The AI Office is responsible for monitoring compliance with the requirements for providers of GPAIM (see Art. 88 AI Act). The same applies to providers of AI systems that are technically based on a GPAIM, provided the same provider is involved in both cases (see Art. 75 para. 1 AI Act). The AI Office itself states that it intends to pursue a cooperative and proportionate approach when enforcing the AI Act. It remains to be seen how this will play out in practice.

Pursuant to Art. 53 para. 4 and Art. 55 para. 2 AI Act, compliance with approved codes of practice is in any case a suitable means of demonstrating compliance with the requirements of the AI Act. Signing a corresponding code of practice is therefore intended in particular to simplify the burden of proof. The AI Office expressly points out that companies that sign a code of practice should be able to rely on regulatory reviews being limited to compliance with that code. By contrast, providers that do not sign a code of practice must demonstrate by other appropriate, effective and proportionate means that they implement the requirements of the AI Act.

 

Practical note

Artificial intelligence is becoming increasingly important. From a data protection perspective, supervisory authorities have already published numerous statements dealing with both the development and the use of AI. The European Data Protection Board also refers to AI several times in its activity report for 2024 (published on 23 April 2025). Due to the staggered applicability of the AI Act, further regulatory requirements are now also gathering pace.

Even if GPAIM - and AI development in general - is often regarded as the preserve of the tech giants, there are many practical scenarios in which SMEs can also assume the role of an AI provider. In particular, fine-tuning AI models and certain forms of AI use may constitute ‘development’ and ‘placing on the market’ within the meaning of the AI Act.

Our recommendation can therefore only be that companies address the regulatory requirements as early as possible and put a concept in place for the development and use of AI. The 2 August 2025 deadline for GPAIM is fast approaching, so the basic requirements should be understood now - notwithstanding certain transitional and grandfathering provisions.

 

Feel free to contact us if you have any questions about developing or using AI!
