
04.04.2024

Publication by the state media authorities on the use of AI in the media

Just a few days after the adoption of the AI Act by the European Parliament (see here), the state media authorities have published a paper on the use of AI in the media. The state media authorities, which monitor compliance with media regulation in Germany, set out their position on the challenges posed by AI and possible solutions for dealing with it. The full text of the paper is available here.

Protecting diversity of opinion in the use of AI systems

One focus of the paper is the protection of diversity of opinion in the use of AI systems, an aspect the AI Act has so far largely ignored. Under the AI Act, the risk assessment of an AI system does not take into account the system's effects on diversity of opinion, whether and how it verifies the truthfulness of the information it uses, or whether it weighs the opinions it finds against other opinions. The state media authorities therefore rightly note that, at least in the area of media law, the AI Act raises new questions rather than providing answers.

"Strengthen diversity, regulate responsibility, maintain trust"

The paper is divided into three key points ("Strengthen diversity, regulate responsibility, maintain trust"). When AI is used to generate or disseminate audiovisual or textual content, providers should work towards counteracting the dangers of a narrowing of diversity, and should make transparent how the broadest possible, diverse range of perspectives can be guaranteed instead of a narrow viewpoint. Even when using AI systems, providers remain liable for the content they create and remain bound by their journalistic duty of care.

In order to maintain trust in the media and to treat the development and use of AI as an opportunity, it seems sensible to apply the media law principles of transparency and non-discrimination to the use of AI. Discrimination within the meaning of the Interstate Media Treaty would then occur, for example, if an audiovisual or textual offering favored material with a certain political orientation without objective justification. According to the state media authorities, self-imposed labeling of AI-generated content in accordance with defined guidelines would also be appropriate. In addition, transparency and disclosure obligations regarding, among other things, the origin of the data and the training methods of the relevant AI systems should be included in the catalog of diversity-protecting information in the Interstate Media Treaty. This is likely to confront media providers with the challenge of retroactively checking existing databases and already trained AI systems for compliance with diversity standards, which in turn presupposes that the providers of AI systems grant media providers access to the content of their databases in line with the requirements of the AI Act.

Outlook

The state media authorities will play a key role in the regulation of and engagement with AI systems in the future. Accordingly, they also see it as their responsibility to put their demands and proposed solutions into practice. The published paper can be read as the foundation of a new strategy by the state media authorities for regulating and handling AI systems in order to protect diversity of opinion in the media.

This article was written with the kind assistance of Sten Rohmann.

Authors

Dr. Christian Schepers

Associate
