
29.08.2023

Large Language Models in Press and Publishing - Legal Challenges of AI

Since late 2022, artificial-intelligence-based Large Language Models (LLMs) like ChatGPT have been omnipresent. Some see them as a breakthrough innovation that opens up entirely new opportunities. Others warn of significant risks and unpredictable effects on the world as we know it.

The chatbot ChatGPT has already amounted to a small revolution, at least for language-based LLMs, and its effects are already being felt by publishers, authors and editors. Language-based LLMs are bringing considerable changes to their day-to-day work, and with them many new legal questions that need to be answered.

For authors and editors, for example, it is particularly relevant whether they are allowed to use LLMs in their daily work at all. The rapid development of programs such as ChatGPT creates a need for contractual rules that make clear in what way, and with which tools, editorial content may be created.

This raises a number of questions: May LLMs help with research? May data and research results be structured or summarized with the help of LLMs? Or may LLMs even create editorial content themselves? These and other questions must be clarified contractually in order to create legal certainty.

With regard to competition law, the question may also arise whether publishers must include a transparency notice stating that content was created by means of AI, and if so, for which AI-generated editorial content.

From a copyright perspective, it is of course relevant for publishers as well as for authors and editors whether AI-created editorial content is protectable. According to a preliminary assessment, the answer is no: only humans, i.e. natural persons, can be creators of a work within the meaning of German copyright law. The situation is likely to be different, however, for editorial content in which AI is used only as a support tool. As long as an LLM merely carries out research or a preliminary analysis of data, for example, authors and editors can still be the creators of the editorial content they write.

When using LLMs in connection with confidential research results or other personal data, data protection requirements must be observed, ideally secured by a compliance system. Liability issues in the event of legal violations must also be taken into account, both vis-à-vis third parties and, where applicable, vis-à-vis the provider of an LLM itself.

For many publishers, authors and editors, it is also of interest how they can protect their own editorial content that is publicly available online from being used by an LLM. Copyright law provides for a so-called text and data mining reservation. This reservation of rights prohibits the automated analysis of digital works, and many publishers already use it to protect their content. For the reservation to be legally valid, it must be declared on the website in machine-readable form. However, it is almost impossible to verify whether the reservation is actually observed.
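As an illustration only: one widespread technical approach to a machine-readable declaration is to instruct known AI crawlers via the site's robots.txt file. This is a sketch of common practice, not a statement that any particular form satisfies the legal requirements of the reservation:

```
# robots.txt at the website root
# Asks OpenAI's GPTBot crawler not to access any page of the site
User-agent: GPTBot
Disallow: /
```

A further machine-readable option under discussion is the W3C TDM Reservation Protocol, for example a `<meta name="tdm-reservation" content="1">` tag in a page's head. Whether a given form of declaration is legally sufficient should be assessed in the individual case.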

To conclude: while many industries can still wait and see how AI develops further, publishers, authors and editors should already inform themselves and obtain legal advice in order to respond to the changes that LLMs bring. Only in this way can they ensure that the interplay between creativity and technology takes place within the framework that creators and exploiters desire.

Authors

Dr. Anna Kellner

Associate
