Lawyers
Dr. Oliver Hornung advises national and international IT service providers and users in the legal structuring and negotiation of IT, project, and outsourcing contracts, as well as in matters of copyright and licensing. He is also regularly involved in distressed projects (dispute management) and advises clients in conciliation and arbitration proceedings and, where necessary, in litigation.
Due to his industry focus on digital business, Dr. Oliver Hornung advises his clients on all legal matters dealing with cloud computing, Big Data, Industry 4.0, and FinTech.
Another focus of his practice is data protection and IT compliance. Dr. Oliver Hornung advises cloud providers and users in developing national and international data protection concepts and on IT compliance. He currently advises on numerous projects to implement the EU General Data Protection Regulation.
Finally, Dr. Oliver Hornung advises start-ups on all questions relating to IT law and data protection law. In addition to his extensive practical work, Dr. Oliver Hornung is also a frequently requested lecturer in IT law and data protection law.
Norbert Klingner specializes in national and international movie/TV and advertising film production, financing, insurance, and distribution. He represents well-known producers, distributors, global distributors, and movie financing entities. His expertise ranges from negotiating and drafting contracts at the earliest stage of material development, through all matters of production and financing, to the strategically sound exploitation and licensing of the finished work. A selection of the film productions in which Mr. Klingner was involved can be found on the Internet Movie Database IMDb.
Margret Knitter advises her clients in all matters of intellectual property and competition law. This includes not only strategic advice, but also legal disputes. Her practice focuses on the development and defense of trademark and design portfolios, border seizure proceedings and advice on developing marketing campaigns. She advises on labelling obligations, packaging design, marketing strategies and regulatory questions, in particular for cosmetics, detergents, toys, foodstuffs and cannabis. She represents her clients vis-à-vis authorities, courts and the public prosecutor's office.
In the field of media and entertainment, she mainly advises on questions of advertising law, in particular product placement, branded entertainment and influencer marketing. She is a member of the board of the Branded Content Marketing Association (BCMA) for the DACH region and member of the INTA Non-Traditional Marks Committee.
News
KI Flash: Consultation of the AI Office on the preparation of guidelines for GPAIM
Following on from our last AI Flash, in which we reported on legal issues relating to the use of AI tools, we would like to continue to provide you with legal advice at regular intervals.
Today's topic: Consultation of the AI Office on the preparation of guidelines for GPAIM
On 22 April 2025, the AI Office of the European Commission launched a consultation on the preparation of guidelines for GPAIM (see the official press release here). The background to the consultation is the provisions of Art. 51 et seq. of the AI Act, which regulate the development of general purpose AI models (GPAIM) and will apply from 2 August 2025.
The aim of the consultation is to involve stakeholders with relevant specialist knowledge and expertise (e.g. industry associations and GPAIM providers) in the process of developing the guidelines. The consultation will run until 22 May 2025, while publication of the finalised guidelines is planned for May or June 2025. The guidelines are intended to supplement the code of practice (see Art. 56 AI Act), which is also currently being consulted on, and provide further assistance for practitioners.
Even though the AI Office's current working documents have naturally not yet been finalised and a binding interpretation of the AI Act is ultimately the responsibility of the European Court of Justice (ECJ), some of the AI Office's legal assessments can already be derived from them; these are presented in this AI Flash.
When is an AI model a GPAIM?
The question of whether an AI model is to be considered a GPAIM depends primarily on whether it ‘displays significant generality and is capable of competently performing a wide range of distinct tasks’. The clarification of these requirements is of fundamental importance, as only AI models that are categorised as GPAIM are subject to the corresponding requirements of the AI Act.
The AI Office currently assumes that an AI model that can generate text and/or images is to be regarded as a GPAIM if its training compute exceeds 10^22 FLOPs (= floating-point operations). According to Art. 3 No. 67 AI Act, floating-point operations are
‘any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base;’
AI models that generate neither text nor images can be categorised as GPAIM if they have a degree of generality comparable to the AI models primarily considered by the AI Office for generating images and/or text.
The AI Office's working documents contain various calculation options and associated examples that can be used to estimate the number of FLOPs. In particular, a distinction is made between a hardware-based approach and an architecture-based approach. In principle, providers of AI models should be able to choose freely between the two calculation methods, whereby further requirements are set for the type and timing of the calculation.
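To make these orders of magnitude tangible, the following Python sketch compares an estimated training compute figure against the 10^22 FLOPs presumption threshold. Note that the '6ND' rule of thumb and all figures used below are common assumptions from the technical literature, not the AI Office's own calculation rules, which are set out in the working documents themselves.

```python
# Illustrative only: rough estimate of training compute against the
# 10^22 FLOPs presumption threshold reported by the AI Office.
# The "6ND" approximation and all figures below are assumptions.

GPAIM_THRESHOLD_FLOPS = 1e22  # presumption threshold reported above

def architecture_based_estimate(n_parameters: float, n_tokens: float) -> float:
    """Common '6ND' approximation: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_tokens

def hardware_based_estimate(gpu_count: int, hours: float,
                            peak_flops_per_gpu: float, utilization: float) -> float:
    """Cluster peak throughput scaled by an assumed average utilization."""
    return gpu_count * hours * 3600.0 * peak_flops_per_gpu * utilization

# Hypothetical model: 7 billion parameters trained on 2 trillion tokens
compute = architecture_based_estimate(7e9, 2e12)
print(f"Architecture-based estimate: {compute:.2e} FLOPs")
print("presumed GPAIM" if compute > GPAIM_THRESHOLD_FLOPS else "below threshold")

# Hypothetical cluster: 1024 GPUs for 30 days at 40% utilization
hw = hardware_based_estimate(1024, 720.0, 3e14, 0.4)
print(f"Hardware-based estimate: {hw:.2e} FLOPs")
```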
It is important to note that the presumption based on the above threshold is explicitly rebuttable. If the training compute reaches the above-mentioned threshold, it is therefore initially assumed that the AI model has sufficient generality to be categorised as a GPAIM. However, this only applies if there are no indications to the contrary. According to the AI Office, whether an AI model has sufficient generality and is able to perform a wide range of different tasks competently depends not only on the training compute, but also on the modality and other characteristics of the data used for training. For example, according to the AI Office, an AI model that is only suitable for transcribing speech should not be considered a GPAIM, even if its training compute reaches the above-mentioned threshold.
Differentiation between AI model and model version
Since, according to recital 97 AI Act, GPAIMs can be ‘further modified or fine-tuned into new models’, the question arises as to where exactly the boundary to the development of a (new) independent GPAIM lies, particularly in the case of fine-tuning. The question has already been the subject of numerous discussions, with different characteristics being used to draw the line.
The AI Office currently assumes that changes to an AI model are only to be regarded as an independent development if the changes require more than one third of the computing power required to categorise the model as a GPAIM. This means that the computing power for fine-tuning would have to exceed the value 3 * 10^21 FLOPs in order to justify classifying the modified AI model as a (new) GPAIM. In contrast, further developments that are below the aforementioned threshold should only be categorised as a new model version.
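Purely as an illustration of this threshold logic, the following sketch applies the figure reported above; the value of 3 * 10^21 FLOPs is taken from the working documents as reported here and may well change.

```python
# Sketch of the version-vs-new-model distinction described above:
# fine-tuning compute above roughly one third of the GPAIM threshold
# (3 * 10^21 FLOPs, as reported in the working documents) would point
# to an independent development, i.e. a (new) GPAIM.

FINE_TUNING_THRESHOLD_FLOPS = 3e21

def classify_modification(fine_tuning_flops: float) -> str:
    if fine_tuning_flops > FINE_TUNING_THRESHOLD_FLOPS:
        return "independent development: (new) GPAIM"
    return "new version of the existing model"

print(classify_modification(5e21))  # independent development: (new) GPAIM
print(classify_modification(1e20))  # new version of the existing model
```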
The question of whether it is an independent development of a GPAIM or merely the creation of a new model version also plays a decisive role in determining the relevant obligations. According to recital 109 AI Act, ‘the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation.’
The AI Office's approach to drawing the boundary is very ‘technical’, but the result is consistent. The working documents expressly acknowledge that training compute is only an imperfect indicator for identifying a GPAIM, but note that it currently offers the greatest degree of legal certainty. At the same time, the AI Office points out that the thresholds used and the way they are calculated may (have to) be adjusted in future.
Who is the provider of the GPAIM?
From a practical point of view, the key question is who qualifies as a provider of a GPAIM and must therefore implement the obligations of Art. 51 et seq. AI Act.
In order to determine whether a company is to be regarded as a provider of a GPAIM, the respective GPAIM must be placed on the market by the company. According to Art. 3 No. 9 AI Act, this is the first making available of the GPAIM on the Union market, whereby the GPAIM must be ‘supplied’ in return for payment or free of charge in the course of a business activity. Placing on the market therefore primarily focuses on the provision of the GPAIM to external third parties - from the provider's perspective - so that the purely internal use of AI models is at least not primarily covered. Recital 97 AI Act, however, states verbatim:
‘This Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market. When the provider of a general-purpose AI model integrates an own model into its own AI system that is made available on the market or put into service, that model should be considered to be placed on the market and, therefore, the obligations in this Regulation for models should continue to apply in addition to those for AI systems. The obligations laid down for models should in any case not apply when an own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected. Considering their potential significantly negative effects, the general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation.’
This system of exceptions and counter-exceptions must therefore be examined in each individual case. Only in this way can it be determined with certainty whether provider status may also arise from purely internal use of the GPAIM. The AI Office's current working papers do not yet contain details on this, which is why further developments should be monitored.
However, the AI Office has already developed a number of examples where it should be assumed that the GPAIM has been placed on the market:
- Provision of the GPAIM via a programming library
- Provision of the GPAIM via a programming interface (API)
- Provision of the GPAIM for direct download
- Provision of a physical copy of the GPAIM or upload of the GPAIM to a third party's own infrastructure
- Integration of the GPAIM into a chatbot that can be accessed on a public website or in an app
- Integration of the GPAIM into a product or service offered on the market
Exceptions for open source
Recital 102 AI Act states that ‘The providers of general-purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general-purpose AI models, unless they can be considered to present a systemic risk’. The AI Act therefore provides for exemptions for certain providers of GPAIM - which do not pose a systemic risk - when determining the relevant obligations.
According to the AI Office, GPAIM providers must fulfil the following conditions in order to benefit from exemptions:
- The GPAIM is published under a free and open source licence that allows access, use, modification and distribution of the AI model;
- The parameters, including the weights, the information on the model architecture and the information on the use of the AI model are made publicly available;
- The GPAIM is not subject to systemic risk.
The AI Office working papers already contain further explanations of all the requirements mentioned.
Importance of codes of practice and the position of the AI Office
In its working paper, the AI Office also briefly discusses the importance of codes of practice and its own position as a supervisory authority.
The AI Office is responsible for monitoring compliance with the requirements for providers of GPAIM (see Art. 88 AI Act). The same applies to providers of AI systems that are technically based on a GPAIM, provided that the same provider is involved in both cases (see Art. 75 para. 1 AI Act). The AI Office itself states that it wishes to pursue the most cooperative and proportionate approach possible when enforcing the AI Act. It remains to be seen how this will play out in practice.
Pursuant to Art. 53 para. 4 and Art. 55 para. 2 AI Act, compliance with approved codes of practice is in any case a suitable means of demonstrating compliance with the requirements of the AI Act. Signing a corresponding code of practice is therefore intended in particular to serve as simplified proof. The AI Office expressly points out that companies that sign a code of practice should be able to rely on regulatory audits being limited to compliance with that code. By contrast, providers that do not sign a corresponding code of practice must demonstrate by other appropriate, effective and proportionate means that they implement the requirements of the AI Act.
Practical note
Artificial intelligence is becoming increasingly important. From a data protection perspective, numerous statements have already been published by data protection supervisory authorities dealing with both the development and the use of AI. The European Data Protection Board also refers to AI several times in its current activity report for 2024 (published on 23 April 2025). Due to the staggered applicability of the AI Act, the (further) regulatory requirements are now also picking up speed.
Even if the topic of GPAIM - and the development of AI in general - is often seen as the preserve of the tech giants, there are many practical constellations in which SMEs can also take on the role of AI provider. In particular, when fine-tuning AI models and depending on how AI is used, ‘development’ and ‘placing on the market’ within the meaning of the AI Act may come into play.
Our recommendation can therefore only be that companies address the regulatory requirements as early as possible and have a concept in place for the development and use of AI. The 2 August 2025 deadline for GPAIM is drawing closer, so the basic requirements should be known now - despite some existing transitional and grandfathering provisions.
Feel free to contact us if you have any questions about developing or using AI!
SKW Schwarz supports Forest Stewardship Council with its sustainability statements
Munich/Bonn, April 15, 2025
SKW Schwarz has supported Forest Stewardship Council (FSC®), one of the world's largest certification systems for sustainable forest management, in implementing the requirements of the EU directive on empowering consumers for the green transition for the FSC labelling scheme.
A team of experts from SKW Schwarz with expertise in the areas of sustainability communication, competition law, and green claims has reviewed FSC's established certification system in detail with regard to compliance with the Empowering Consumers Directive (EmpCo; Directive (EU) 2024/825). This ensures that FSC can continue to enable its customers to label and advertise forestry-based products made from materials such as wood, paper and rubber, which originate from sustainable forest management in accordance with FSC's high standards, with transparent, clear sustainability labels and environmental claims in a legally compliant manner.
The EmpCo Directive, which will also apply at a national level from 2026, will increase the legal requirements for environmental and sustainability statements in corporate communications throughout the EU. Generic environmental claims without explanation or specification will no longer be possible; (specific) environmental claims must be explained in detail. Sustainability labels must be based on a certification scheme that is open, transparent and non-discriminatory and whose requirements are monitored by independent third parties.
FSC already uses a comprehensive system of numerous publicly accessible standards and checks to ensure that its labels are only applied to products that demonstrably comply with the principles of responsible forest management.
Nevertheless, it was necessary to have the processes for awarding the label by FSC reviewed to ensure that they also meet the new requirements of the EmpCo Directive.
“In selecting a legal partner, FSC sought a firm with not only the technical expertise but also a deep understanding of our industry; qualities that SKW Schwarz clearly demonstrated. Their ability to navigate the complexities of our certification system, combined with their tailored guidance, has been invaluable,” says Ana-Maria Băban, Commercial Director of FSC.
Dr. Daniel Kendziur, Partner at SKW Schwarz, explains: “As one of many certification schemes that are important for the environment, FSC addressed the important issues associated with EmpCo at an early stage. We are pleased that we were able to support FSC in strategically aligning the certification system with the future legal framework so that FSC and its customers can continue to successfully promote sustainability and forest management.”
About SKW Schwarz
SKW Schwarz is an independent law firm with around 120 lawyers, four offices and a common claim: We think ahead. As a member of TerraLex, the law firm is globally networked and advises in all relevant areas of commercial law. Including in an area that is particularly important for companies: the future. We analyze, create clarity and advise today in the key legal areas of tomorrow.
About the Forest Stewardship Council™ (FSC®)
FSC is a non-profit organization that provides a proven sustainable forest management solution. Currently, over 150 million hectares of forest worldwide are certified according to FSC standards. It is widely regarded as the most rigorous forest certification system among NGOs, consumers, and businesses alike to tackle today’s deforestation, climate, and biodiversity challenges. The FSC forest management standard is based on ten core principles designed to address a broad range of environmental, social and economic factors. FSC’s “check tree” label is found on millions of forest-based products and verifies that they are sustainably sourced, from forest to consumer. For more information, visit www.fsc.org.
Digital Decade Update - What's next on the EU's digital regulation agenda?
The EU's “Digital Decade” is the European Union's central strategy for making Europe digitally competitive by 2030 and driving forward the digital transformation in a targeted manner. It focuses on four key areas: promoting digital skills and skilled workers, expanding secure and sustainable digital infrastructures, supporting the digital transformation in companies and digitizing the public sector.
In order to achieve these ambitious goals, the EU has adopted a series of far-reaching legislative initiatives. We have set up a landing page for the “Digital Decade” on our website, where you can find an overview of the individual legislative initiatives and we provide regular updates on their practical impact. The following overview article summarizes the current status of the most important initiatives.
Data Act
What is it about?
The Data Act (DA) regulates access to and the use of data generated when using networked products and associated services. The aim is to ensure fair access to usage data, break up data monopolies and promote innovation in the internal market. The regulation particularly affects manufacturers of connected devices, providers of digital services and users of these products. Other aims of the Data Act are to simplify the switching of cloud services and improve the interoperability of data.
What is the current status and what is next on the agenda?
- The regulation came into force on January 11, 2024 and will be fully applicable from September 12, 2025.
- Sector-specific implementation guidelines (by the Commission or standardization organizations) are currently being developed.
Data Governance Act
What is it about?
The Data Governance Act (DGA) regulates access to and use of personal and non-personal public sector data. In addition, the DGA contains regulations on the activities of data sharing services (so-called “data intermediaries”) and provisions to promote “data altruism”, i.e. the voluntary sharing of data by third parties.
What is the current status and what is next on the agenda?
The regulation came into force on June 23, 2022 and has been fully applicable since September 24, 2023.
Regulation on Artificial Intelligence
What is it about?
The Artificial Intelligence Regulation (AI Act) is intended to create comprehensive rules for artificial intelligence systems in Europe. The use of certain AI systems will be completely banned, while others may be used subject to strict compliance requirements and safety measures (high-risk AI systems). The AI Act establishes transparency and information obligations for certain low-risk AI systems. Through ethical and technical standards, the AI Act is intended to create a legal basis for the responsible use of AI in the EU.
What is the current status and what is next on the agenda?
- The regulation came into force on August 1, 2024 and will be fully applicable from August 2, 2026. Certain parts will already be applicable before this date:
- The rules on prohibited practices (prohibited AI systems) provided for in the AI Act and the AI literacy requirements for providers and operators of AI systems have already applied since February 2, 2025.
- The regulations on general purpose AI models (GPAI) will apply from August 2, 2025.
- Various guidelines are to be issued by the Commission, in particular on the implementation and interpretation of the regulation. The guidelines on prohibited practices (Art. 5 AI Act) and the guideline on the definition of an AI system pursuant to Art. 3(1) AI Act have already been available since February 2, 2025.
Accessibility Strengthening Act
What is it about?
The Barrierefreiheitsstärkungsgesetz (BFSG) brings far-reaching changes for the accessibility of products and services. It obliges a large number of economic actors, including manufacturers, importers, retailers and service providers, to meet specific accessibility requirements. The regulations on e-commerce services in particular will affect numerous website and online store operators.
What is the current status and what is next on the agenda?
The BFSG comes into force on June 28, 2025 and is fully applicable from this date.
Digital Services Act
What is it about?
The Digital Services Act (DSA) modernizes the foundations of the E-Commerce Directive, which was issued in 2000, and creates new rules for the Internet, which particularly affect online platforms. The aim is to promote greater transparency, security and European values on the Internet.
What is the current status and what is next on the agenda?
The Digital Services Act came into force on November 16, 2022 and has been fully applicable since February 17, 2024.
Markets in Crypto-Assets Regulation
What is it about?
The Markets in Crypto-Assets Regulation (MiCAR) is intended to create a uniform and harmonized regulatory framework for crypto-assets in the European Union. The aim is to make the market for crypto-assets more transparent, secure and efficient. The regulation includes rules for asset-referenced tokens, e-money tokens and utility tokens.
What is the current status and what is next on the agenda?
The regulation was adopted in April 2023 and came into force in June 2023. All parts of the regulation have been fully applicable since December 30, 2024.
Digital Operational Resilience Act
What is it about?
The Digital Operational Resilience Act (DORA) places comprehensive requirements on IT security in the financial sector. The aim is to strengthen the digital resilience of the affected companies and thus increase the overall security of the financial sector.
What is the current status and what is next on the agenda?
DORA came into force on January 16, 2023 and has been fully applicable since January 17, 2025.
NIS 2 Directive
What is it about?
The second EU Network and Information Security Directive (NIS-2 Directive) provides for comprehensive cybersecurity requirements for companies in various sectors such as energy, transport, health and digital infrastructure.
SKW Schwarz's NIS 2 tool allows companies to check the extent to which they are affected by the directive.
What is the current status and what is next on the agenda?
- The directive has been in force since the beginning of 2023. The transposition deadline expired on October 17, 2024. The government bill adopted last year (NIS2 Implementation and Cybersecurity Strengthening Act) lapsed due to the principle of discontinuity and must be reintroduced by the new government and submitted to the Bundestag.
- The law is currently not expected to be passed until summer 2025 at the earliest, and it can be assumed that changes will be made to the previous draft bill.
Cyber Resilience Act
What is it about?
The Cyber Resilience Act (CRA) contains requirements for the cyber security of products with digital elements. These include networked hardware and software products as well as essential remote data processing solutions. The requirements of the CRA affect product manufacturers in particular, while importers and retailers must fulfill certain control obligations.
What is the current status and what is next on the agenda?
- The CRA came into force on December 10, 2024. From June 11, 2026, conformity assessment bodies will be able to verify compliance with the security requirements.
- From September 11, 2026, manufacturers must report actively exploited vulnerabilities of affected products.
- From December 11, 2027, the CRA will be fully applicable.
---
We are familiar with the legal issues, risks and opportunities associated with the new EU legislative initiatives. Please contact us if we can support you with the implementation.
New competition for better data protection - German Federal Court of Justice confirms the right of action of competitors and consumer protection associations regarding GDPR violations!
On March 27, 2025, the Federal Court of Justice (“BGH”) published three rulings with particular significance. In the three groundbreaking decisions, the BGH implements the requirements of the European Court of Justice (“ECJ”) and opens the door to competition law claims by competitors and consumer protection associations in the event of data protection violations. Is this the beginning of a new wave of warning letters?
I. Background
The General Data Protection Regulation (“GDPR”) primarily protects data subjects. If a controller violates GDPR provisions, the data subject is entitled to legal remedies, such as the right to data erasure or, depending on the case, claims for damages. In addition, data protection authorities can take various measures to enforce the GDPR, such as issuing a prohibition order or imposing fines on the controller.
In contrast, it was unclear for a long time whether a competitor could also bring claims against the controller for removal and injunctive relief under the Unfair Competition Act (“UWG”) due to a data protection breach by the controller. Specifically, the question was whether the sanctions regime of the GDPR must be regarded as exhaustive or whether Sections 3 (1), 3a UWG (“Vorsprung durch Rechtsbruch”, “advantage through breach of law”) can be applied in addition. In the past, German regional courts have ruled on this in different ways, and the feared wave of warning letters following the introduction of the GDPR in 2018 has not materialized so far.
For a long time, it was also unclear under which conditions consumer protection associations could invoke a right of action in connection with GDPR infringements.
As expected, these unresolved issues finally reached the BGH. In two parallel proceedings (Ref.: I ZR 222/19 and I ZR 223/19), the BGH had to clarify the entitlement of competing pharmacists to bring claims for GDPR infringements. A third case (Ref.: I ZR 186/17) concerned the right of action of the Federation of German Consumer Organizations (“vzbv”) in a legal dispute against the operator of a social media platform due to breaches of data protection and fair-competition information obligations.
After the BGH had referred all three proceedings to the ECJ for a preliminary ruling, which has since ruled in favor of the applicability of competition law (see our press release dated October 8, 2024 here), the final decisions of the BGH are now also out:
II. BGH, judgment of March 27, 2025, case no. I ZR 186/17 – right of action of consumer associations
In the first case, the vzbv brought an action against the operator of a social media platform.
On the social media platform, users were offered free online games in an “app center”. In November 2012, certain notices were displayed in some of these games under the “Play now” button:
“By clicking “Play Game” above, this application will receive: Your general information (?), Your e-mail address, About you, Your status messages. This application may post on your behalf, including your score and more.” [translated by the authors]
One of the games also stated that the application was allowed to “post status messages, photos and more on your behalf”. The vzbv saw this as a violation of the data protection requirements of the GDPR, as users were not sufficiently informed about the collection and use of their personal data and no required effective consent was obtained.
The ECJ had already ruled in 2022, following a referral from the BGH in this case, that consumer protection associations can also challenge violations of the GDPR on the basis of consumer and competition law. Upon further referral by the BGH, the ECJ specified on 11 July 2024 that violations of the obligation to provide information pursuant to Art. 12 et seq. GDPR and Section 5a UWG may be sufficient to justify a consumer association's right of action.
In its ruling of 27 March 2025, the Federal Court of Justice confirmed that the infringement of the GDPR can be challenged under competition law. The BGH thus dismissed the appeal by the operator of the social media platform. The defendant's conduct constituted a breach of the data protection information obligation under Art. 12 para. 1 sentence 1, Art. 13 para. 1 lit. c), lit. e) GDPR. At the beginning of the game, the users were not sufficiently informed about the type, scope and purpose of the collection and the legal basis for the processing of their data. On the one hand, this constitutes a breach of competition law in terms of withholding material information pursuant to Section 5a (1) UWG. At the same time, the wording “This application may post status messages, photos and more in your name” did not sufficiently fulfill the information obligations under data protection law and was to be classified as an invalid clause pursuant to Section 3 (1) sentence 1 no. 1 of the Act on Injunctions for Consumer Rights and Other Infringements (“UKlaG”). Such clauses can be prohibited in accordance with Section 1 UKlaG.
Thus, the core element of the ruling is the finding that breaches of data protection information obligations can also constitute breaches of competition law. Pursuant to Section 8 (3) No. 3 UWG and Section 3 (1) No. 1 UKlaG, these can be challenged by consumer protection associations before the civil courts.
III. BGH, judgments of March 27, 2025, Ref. I ZR 222/19 and I ZR 223/19 - Competitors' right of action
In the second and third cases, which were heard in parallel, two pharmacists had brought actions against a competitor. The latter had sold medicines via an online marketplace and processed the personal data of its customers, including customer names and information on the medicines sold. The customers' explicit consent was not obtained for this.
The two plaintiff pharmacists saw this as a violation of Art. 9 GDPR. According to this, explicit consent pursuant to Art. 9 para. 2 lit. a) GDPR was required, which was not given.
According to the BGH, such an infringement can be pursued by competitors by means of an action under unfair competition law. The ECJ recently ruled on this matter in its highly regarded judgment of October 4, 2024, Ref.: C-21/23 (“Lindenapotheke”), as a result of a question referred by the BGH. We have already published a comprehensive article on this judgment in GRUR-Prax (GRUR-Prax 2025, 171).
As a result, the highest court already decided in October last year that the GDPR, in principle, does not preclude a claim for removal and injunctive relief in respect of a data protection breach pursuant to Sections 8 (1), (3) No. 1 in conjunction with Sections 3 (1), 3a UWG.
It was (and still is) unclear which provisions of the GDPR are actually to be regarded as market conduct rules within the meaning of Section 3a UWG. For Art. 9 GDPR, this has now been affirmed by the BGH in the two judgments of March 27, 2025. Art. 9 GDPR not only protects the data subjects' right to informational self-determination, but also serves to protect them as market participants.
IV. Practical consequences
In future, in addition to data subjects and data protection supervisory authorities, there will be two further potential claimants with regard to possible GDPR infringements: competitors and consumer protection associations. The UWG and the UKlaG serve as the instruments for asserting such claims.
Is there now a threat of a new wave of warning letters? This is doubtful. First, reimbursement of the warning party's expenses is excluded pursuant to Section 13 (4) No. 2 UWG if the warned party generally employs fewer than 250 employees. Second, in the case of a first-time infringement, the possibility of agreeing a contractual penalty in accordance with Section 13a (2) UWG is excluded if the warned party generally employs fewer than 100 employees. It is therefore likely to be difficult for warning law firms (“Abmahnkanzleien”) to turn data protection violations into a business model.
Now that the German Federal Court of Justice has already classified Art. 9 GDPR as a market conduct rule within the meaning of Section 3a UWG, it remains to be seen which other GDPR provisions will be classified as such by the courts. This will be particularly exciting with regard to Art. 25, 32 GDPR (privacy-by-design, privacy-by-default and technical and/or organizational measures for the protection of personal data).
In any case, companies should take the three decisions of the Federal Court of Justice from March 27, 2025 as an opportunity to thoroughly examine and safeguard their business models both from a data protection law perspective and from the perspective of unfairness. Companies should pay more attention to ensuring that their privacy notices are up to date.
KI-Flash: DeepSeek AI - Navigating Legal & Cross-Cultural Challenges
Just published in the New York Law Journal: William A. Tanenbaum and Dr. Matthias Orthwein, LL.M. (Boston) examine the Chinese AI model DeepSeek's dual-edged implications for international business.
Their analysis covers:
- Impact of the Thomson Reuters fair use ruling on AI training data
- DeepSeek as insight tool for Chinese business perspectives
- Data confidentiality risks and corporate espionage concerns
- GDPR compliance challenges facing European operations
- Actionable guidance for corporate AI governance
Essential reading for cross-border businesses and legal professionals navigating the evolving AI landscape.
Read the full article here.
We recycle ourselves! - OLG Frankfurt a.M. on misleading advertising of an ecological cleaning agent
The lower courts are now implementing the decision of the Federal Court of Justice (BGH) in the ‘climate-neutral’ case on the competition law requirements for advertising with ambiguous, environmentally-related terms. After the Higher Regional Court of Cologne ruled on CO2-neutral travel, the Higher Regional Court of Frankfurt a.M. has now also held ambiguous advertising statements on the environmental compatibility of recycling material from the yellow bag, on the defendant's own recycling efforts and on climate neutrality to be misleading, and has applied the same standards to web links to further information (Higher Regional Court of Frankfurt a.M. (6th Civil Senate), judgment of 12/19/2024 - 6 U 33/24).
Advertising statements on pollutant content and carbon footprint
The dispute between two manufacturers of ecological laundry detergents and cleaning products before the Frankfurt Higher Regional Court centred on advertising statements made by the defendant about its dishwasher detergent bottles. Specifically, it concerned the statements that these were the ‘first recycled bottles’ from the defendant itself, that it would recycle them itself, and that ‘recycled PE’ from the yellow bag could always contain residues of synthetic fragrances, heavy metals, pesticides, etc.
In addition, the defendant used the ‘climate neutral’ logo of the company ClimatePartner on its homepage; clicking on the logo led to a website with further information. The plaintiff considered the advertising claims and the presentation of the information on ‘climate neutrality’ to be misleading.
Ambiguous environmental term ‘recycled PE’ not sufficiently explained
The Higher Regional Court of Frankfurt a.M. has now largely ruled in favour of the plaintiff. The term ‘recycled PE’ was ambiguous in the specific context: it could refer either to already recycled polyethylene or to the source material, i.e. plastic (‘PE’) from the yellow bag. If, as the defendant had to accept, the statement was understood as referring to the finished recyclate, it would be misleading, because that material would not carry a higher risk of containing heavy metals and/or pesticides if, as is customary for the plaintiff and in line with the state of the art, the starting material was sufficiently processed. The defendant should have clearly and unambiguously resolved this ambiguity in the advertising itself.
Environmental contribution must be more than just symbolic
The court also considers the defendant's statement that it recycles itself to be misleading, because consumers would expect a significant return system to already exist and the specific product depicted to have outer packaging consisting at least to a significant extent of recycled material. However, the defendant had set up no more than 150 return boxes in small and very small organic markets, and the proportion of self-recycled bottles was therefore well below 1%. In addition, a large number of the new bottles advertised were made entirely from virgin plastic.
Clear references to links to further information necessary
The court also commented, albeit not decisively for the outcome, on the specific presentation of further information on climate neutrality via a link behind the climate neutral logo. Although such a link is permissible in principle, it must be sufficiently clear and unambiguous. The public, however, would neither expect nor regularly discover that a corresponding link was hidden behind the logo. A clear reference to the link is therefore necessary.
Conclusion: Requirements for environmental claims further specified
The Higher Regional Court of Frankfurt a.M. continues the case law of the Federal Court of Justice on ‘climate neutral’ and shows that environmental terms, in this case ‘recycled PE’, fundamentally risk carrying more than one meaning. This ambiguity must therefore be taken into account in environmental advertising and clarified directly in the advertising itself. Anyone using a link to further information must refer to it sufficiently clearly.
KI-Flash: European Commission guidelines on the concept of an AI system
After reporting on the market surveillance of AI in our last AI Flash, we would like to continue to provide you with legal insights at regular intervals.
Today's topic: European Commission guidelines on the concept of an AI system
On February 6, 2025, the European Commission published guidelines on the definition of AI systems. At the same time, it published guidelines on prohibited AI practices under Art. 5 of the AI Regulation, which will be dealt with in a separate AI Flash in the near future.
As expressly stated in the accompanying press release from the European Commission, the guidelines are not binding, but are intended to facilitate the effective application of the provisions of the AI Regulation. A final classification is therefore naturally reserved for a judicial decision by the European Court of Justice (ECJ), which is why further developments should always be closely monitored.
The guidelines are based on Art. 96 para. 1 lit. f) of the AI Regulation, according to which the European Commission is obliged to specify the practical application of the definition of the term “AI system”. Even though the guidelines have so far only been approved but not formally adopted, the document contains a large number of welcome examples and practical aids.
The following AI Flash is intended to highlight the key messages of the publication and conclude with some practical tips for providers and operators of AI systems.
The definition of an AI system
According to Art. 3 para. 1 of the AI Regulation, an AI system is
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The European Commission correctly points out that the definition is based on a total of 7 components, some of which overlap or are mutually dependent. An AI system is classified according to the following characteristics:
- Machine-based system (1)
- Degree of autonomy (2)
- Adaptability (3)
- Goals of the AI system (4)
- Ability to draw conclusions (5)
- Generation of specific outputs (6)
- Interaction with the environment (7)
These 7 components are further specified in the European Commission's guidelines and provided with corresponding examples. In the following, the individual characteristics of the definition will be examined in detail and also provided with further examples.
(1) Machine-based system
The term “machine-based” refers to the fact that AI systems are developed with machines and operated on them. The term “machine” covers both the hardware and software components that enable the AI system to function in the first place. The hardware components are the physical elements of the machine, such as processing units, memory, storage devices, network units and input and output interfaces, which provide the infrastructure for computation. The software components, on the other hand, comprise computer code, instructions, programs, operating systems and applications that control how the hardware processes data and executes tasks.
The requirement “machine-based” therefore ultimately only underlines the fact that AI systems must be computer-based and built on machine operations. The term AI system therefore - unsurprisingly - covers a wide range of computing systems.
(2) Degree of autonomy
The second element of the definition states that AI systems must work with varying degrees of autonomy. This means that they function independently of human intervention to a certain degree. The degree of autonomy can vary from manual operation to complete independence from human intervention. According to the European Commission, “human involvement” and “human intervention” are at the heart of the concept of autonomy.
If a system is designed in such a way that it is able to work completely independently, without any human involvement or intervention, it is a fully autonomous (AI) system. Systems that only perform manually operated functions, on the other hand, are excluded from the definition of an AI system. Human involvement can take place both directly and indirectly, for example through manual controls or automated monitoring. In contrast, a system that independently generates output based on manually entered data is considered partially autonomous.
(3) Adaptability
The third element of the definition in Art. 3 para. 1 of the AI Regulation states that an AI system “may” exhibit adaptability after its introduction. Autonomy and adaptability are different but closely related concepts. Adaptability refers to the self-learning capabilities of a system that enable it to change its behavior during use. This allows the system to produce different results, even with the same inputs. The ability of a system to learn automatically and discover new patterns is an optional, but not a mandatory, prerequisite for classification as an AI system.
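To illustrate adaptability, here is a minimal, purely illustrative sketch of a system that keeps learning after deployment; it assumes the scikit-learn library, and the inline data points are invented for the example.

```python
# Minimal sketch of "adaptability": an online classifier that continues to
# learn during use, so identical inputs can yield different outputs over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

# Initial training before deployment
model.partial_fit(np.array([[0.0], [1.0]]), np.array([0, 1]), classes=classes)
print(model.predict([[0.5]]))

# Feedback arriving during use shifts the decision boundary
model.partial_fit(np.array([[0.5], [0.6]]), np.array([0, 0]))
print(model.predict([[0.5]]))  # may now differ from the earlier prediction
```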
(4) Goals of the AI system
The fourth element of the definition refers to the goals of the AI system, which can be defined either explicitly or implicitly. Explicit goals are clearly formulated goals that are coded directly in the system by the developer, such as the optimization of a cost function or a probability. Implicit goals, on the other hand, result from the behavior of the system or the assumptions underlying the system and can be derived from the training data or the interaction of the AI system with its environment.
Recital 12 of the AI Regulation clarifies that the objectives of an AI system may differ from its intended purpose. The objectives of an AI system are internal to the system and relate to the tasks and their results that the system is intended to perform. The intended purpose, on the other hand, is external and includes the context in which the system is used and its operation. According to the European Commission, a virtual AI assistance system in a company can have the goal of answering user questions precisely, for example, while the intended purpose is to support a specific department in its tasks.
(5) Ability to draw conclusions
The fifth - and probably most difficult to pin down - element of an AI system is that it must be able to infer from the respective inputs how to generate outputs. An essential feature of AI systems is therefore their ability to draw conclusions, which distinguishes them significantly from conventional software systems. Recital 12 of the AI Regulation expressly clarifies that AI systems differ from conventional software systems and that the definition should not cover systems based exclusively on rules defined by natural persons for the automated execution of processes.
Cases in which the classification as an AI system must be affirmed:
Techniques that enable reasoning and therefore fall under the definition of an AI system include machine learning approaches such as supervised, unsupervised, self-supervised and reinforcement learning.
- In supervised learning, the system is trained with labeled data in order to recognize patterns and classify new data.
Example: An AI-supported email spam detection system that is trained with labeled data (“spam” or “not spam”) in order to classify new emails accordingly.
- In unsupervised learning, the system discovers patterns in previously unlabeled data, identifying structures or relationships using techniques such as clustering, dimensionality reduction or anomaly detection.
Example: AI systems in pharmaceutical companies that are used for drug discovery.
- Self-supervised learning is a subcategory of unsupervised learning in which the system uses unlabeled data, labels it independently and then learns from it in a supervised manner.
Example: An image recognition system that learns to recognize objects by predicting missing pixels in an image.
- Reinforcement learning is based on a “reward function”: the system learns through trial and error and refines its strategy in the process.
Example: An AI-supported robotic arm that learns to perform tasks such as grasping objects.
- Deep learning is a sub-area of machine learning that uses artificial neural networks (ANN) to learn features from raw data, which regularly requires very large amounts of training data.
Example: Complex AI systems that are capable of recognizing and reproducing speech and communicating with natural persons, such as ChatGPT, Microsoft Copilot, etc.
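To make the supervised-learning example above concrete, here is a minimal sketch of a spam classifier trained on labeled data; it assumes the scikit-learn library, and the tiny inline dataset is purely illustrative.

```python
# Minimal supervised-learning sketch: a text classifier trained on
# labeled examples ("spam" / "not spam") that then classifies new emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now",                  # spam
    "Cheap pills, limited offer",            # spam
    "Meeting agenda for Monday",             # not spam
    "Please review the attached contract",   # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# Vectorize the text and fit a Naive Bayes classifier in one pipeline
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a "new" email based on patterns learned from the labeled data
print(model.predict(["Free offer: win now"])[0])  # likely "spam"
```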
In addition to machine learning approaches, there are also logic- and knowledge-based approaches that are based on coded knowledge or symbolic representations. These systems learn from knowledge that has been encoded by human experts and draw logical conclusions using deductive or inductive methods. Examples include expert systems based on predefined rules and ontologies, and classical language processing models that use grammatical knowledge and logical semantics to extract meaning from texts.
Cases in which the classification as an AI system is to be denied:
Recital 12 of the AI Regulation clarifies that the definition of AI system should not include systems that are based exclusively on rules established by natural persons for the automatic execution of operations. Some systems that are able to draw conclusions do not fall under the definition of AI systems, as they are only able to analyze patterns and adapt their output independently to a limited extent.
Systems for improving mathematical optimization or for accelerating established optimization methods, such as linear or logistic regression, do not fall under the AI definition, as they only perform basic data processing. Examples are systems that use machine learning to improve optimization algorithms or physics-based systems that use machine learning to accelerate physical simulations.
Simple data processing systems that follow predefined instructions and do not use AI techniques such as machine learning or logic-based inference also do not fall under the AI definition. This includes database management systems that sort or filter data, as well as software for descriptive analysis, hypothesis testing and visualization.
Systems based on classic heuristics that find solutions to problems using rule-based approaches or pattern recognition also do not fall under the definition of AI. One example is a chess program that uses heuristic evaluation functions without learning from data. Simple prediction systems that operate through basic statistical learning rules, such as predicting stock prices through historical averages, are also not classified as AI systems. These systems provide basic predictions, but their performance does not match the complexity of modern machine learning models.
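As a counter-example, the following sketch implements the historical-average price prediction mentioned above: a fixed statistical rule without any data-driven learning, which would therefore not qualify as an AI system under this definition.

```python
# Counter-example: a fixed statistical rule with no data-driven learning.
# The prediction is simply the mean of the most recent prices.

def predict_next_price(history: list[float], window: int = 5) -> float:
    """Predict the next price as the arithmetic mean of the last `window` prices."""
    recent = history[-window:]
    return sum(recent) / len(recent)

prices = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5]
print(f"Predicted next price: {predict_next_price(prices):.2f}")
```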
In summary, systems that only use basic data processing or fixed rules without data-driven “learning” are therefore not covered by the definition of AI systems under the AI Regulation.
(6) Generating specific outputs
The sixth element of the definition of an AI system states that the system must be able to produce outputs such as predictions, content, recommendations or decisions.
- Predictions are estimates of unknown values from known inputs and require the least human intervention. AI systems that use machine learning can uncover complex patterns in data and make accurate predictions in dynamic environments. Examples include self-driving cars that make real-time predictions and adapt decisions in complex traffic situations, or systems for estimating energy consumption based on data analysis.
- Content refers to the generation of new material such as text, images or videos, often based on machine learning models such as GPT technologies.
- Recommendations are suggestions for specific actions or products based on user preferences and behavioral patterns. AI-powered recommendation systems can process large amounts of data, adapt in real time and provide personalized recommendations, which in turn distinguishes them from static, rule-based systems.
- Decisions refer to automated conclusions or actions traditionally made by human judgment. An AI system that makes decisions automates this process and achieves results without human intervention.
To summarize, AI systems are differentiated from non-AI systems by their ability to process complex relationships and patterns in data and generate differentiated results such as predictions, content, recommendations and decisions. This ability enables them to “think” in a more differentiated way and to act in structured environments.
(7) Interaction with the environment
The seventh element of the definition of an AI system is that the results of the system influence the physical or virtual environment. This emphasizes that AI systems actively influence their environment instead of being purely passive. The influence of an AI system can affect tangible, physical objects such as a robotic arm, as well as virtual environments, including digital spaces, data streams and software ecosystems.
Practical advice
Even if some technical nuances remain open despite reading the guidelines, the European Commission's elaborations are very welcome. Since the question of whether a system is an AI system or not has a decisive impact on the applicability of the AI Regulation, companies must examine very carefully for each system whether the respective characteristics of the definition of an AI system are fulfilled. As this cannot be done in detail (solely) through a legal review, the IT managers must always be involved in the process. In our view, it is also advisable to draw up a detailed audit template that can be used to document the results of the audit accordingly.
In cases of doubt, it will probably be advisable to affirm the existence of an AI system. The legal risk is generally considered to be significantly greater if mandatory requirements are not implemented than in the exact opposite case. Of course, this only applies if the non-existence of an AI system cannot be justified with reasonable arguments from a technical point of view.
We will be happy to support you with a corresponding audit and the subsequent implementation of all requirements for legally compliant AI compliance!
Podcast "ILTA Voices" zum Barrierefreiheitsstärkungsgesetz
Is your firm or business ready for the European Accessibility Act (EAA)? In the latest podcast episode of the International Legal Technology Association (ILTA), Yves Heuser and Johannes Schäufele break down this game-changing regulation, exploring who needs to comply, key deadlines, and how to meet accessibility standards. From practical steps to enforcement measures, the episode covers everything you need to know to stay ahead of the 2025 requirements.
You can listen to the audio of the recording here.
SKW Schwarz: 18 Lawyers Recognized as "World's Leading Practitioners Germany 2025" in Six Categories
The prestigious Lexology Index (formerly Who's Who Legal) Germany 2025 has recognized 18 experts from SKW Schwarz as leading lawyers in Germany. These accolades span six key legal areas.
The honored experts are as follows:
Commercial Mediation:
- Dr. Alexander Steinbrecher
Data:
- Nikolaus Bertermann
- Oliver M. Bühr
- Dr. Matthias Nordmann
- Dr. Matthias Orthwein
- Dr. Andreas Peschel-Mehner
- Stefan C. Schicker
- Prof. Dr. Mathias Schwarz
- Martin Schweinoch
IP-Trademarks:
- Dr. Dorothee Altenburg
- Dr. Magnus Hirsch
- Margret Knitter
- Sandra Sophia Redeker
Life Sciences:
- Dr. Matthias Nordmann
- Dr. Tatjana Schroeder
- Markus von Fuchs
Product Liability Defence:
- Arndt Tetzlaff
Sports & Entertainment:
- Dr. Johann Heyde
- Götz Schneider-Rothhaar
- Prof. Dr. Mathias Schwarz
The inclusion of these lawyers underscores SKW Schwarz's leading position in Germany. The Lexology Index is considered one of the foremost international accolades in the legal market, based on comprehensive analyses and evaluations by clients and industry experts.