In today's AI Flash, we discuss questions of knowledge attribution and the resulting legal responsibility when AI tools are used in customer contact, with a particular focus on the liability of platform operators. The main topics are the standard for attributing knowledge, the requirements for the suitability of the AI tool, and the legal responsibility arising from the different forms in which AI tools are used.
The basic problem is well illustrated by a recent decision of the Munich Regional Court I:
On 27 January 2025, the 33rd Civil Chamber of the Munich Regional Court I issued a preliminary injunction against TikTok (docket no. 33 O 28/25). The court prohibited the publication or public accessibility of a so-called “fake account”. What seems unremarkable at first glance reveals, on closer inspection, interesting legal issues surrounding the use of AI chatbots and AI tools and the attribution of their “knowledge”, gained in customer contact, to the platform operator.
In this case, the applicant discovered a fake account created by an unknown person on the platform. The fake account had a confusingly similar name and used the profile picture of the original account. Some videos from the original account were also used. Via private messages, the operators of the fake account indiscriminately and fraudulently tried to persuade users to carry out cryptocurrency transactions.
After an attempt to communicate directly with the operators of the fake account, the applicant used the respondent's reporting form. After the form had been filled out and submitted, the applicant received the automatically generated message “Reviewing your report - We will review the report and take appropriate action if there is a violation of our Community Guidelines”. Some time later, the applicant received a message stating that TikTok did not believe there had been an infringement. A second report via the reporting form ended in the same way. Following TikTok's refusal to delete the fake account, the applicant filed the application for a preliminary injunction against TikTok.
In the legal dispute, TikTok argued that it uses an AI tool to communicate with users and to handle reports of violations of the community guidelines by users of the platform. TikTok therefore claimed that it had no knowledge of the existence of the fake account that could justify a claim for injunctive relief pursuant to Section 1004 (1) of the German Civil Code. The main reason given was that the applicant allegedly had not sufficiently substantiated a breach of the platform's guidelines in the reporting form. TikTok argued that the requirements of the notice-and-takedown mechanism of the Digital Services Act were therefore not met.
The Munich Regional Court I ruled in favour of the applicant and issued the requested preliminary injunction. It held that the applicant was entitled to injunctive relief under Sections 1004, 823 (1) of the German Civil Code in conjunction with the general right of personality and Sections 22, 23 of the Art Copyright Act as well as Section 19a of the Copyright Act. In particular, according to the Regional Court, TikTok could not rely on an alleged lack of knowledge of the infringements. Instead, the court held TikTok legally responsible for the negative outcome of its examination of the report, which, according to its follow-up communication with the applicant, it had actually carried out. The Regional Court considered TikTok's further arguments to be contradictory and therefore irrelevant.
Applied standards of knowledge attribution in the context of notice and takedown
As a so-called “indirect disturber” (a legal term regularly used in German case law in this context), the platform operator is only responsible for the infringement, and may thus be held liable for the asserted injunctive relief, once it has gained knowledge of the facts giving rise to the claim. The right to injunctive relief itself exists independently of the platform operator's knowledge, for example under name rights according to Section 12 of the German Civil Code, under Section 1004 (1) sentence 2 applied analogously in conjunction with Section 823 (1) of the German Civil Code and the general right of personality under Article 2 (1) in conjunction with Article 1 (1) of the Basic Law, under Section 22 of the Art Copyright Act, or under Article 6 (1) of Regulation (EU) 2022/2065. Nevertheless, the platform operator generally only becomes the responsible party, and thus the correct opponent of the claim, once it has gained knowledge. Depending on the facts of the case, injunctive relief can also result from competition law (Section 8 (1) of the Act against Unfair Competition) or from the infringement of property rights such as trademarks, patents, copyrights, etc.
The reason for this is that the platform operator cannot be expected to be informed about every piece of content, given the almost infinite amount of data uploaded and shared by users on such platforms. In order to establish the platform operator's liability for removal and injunctive relief, the person concerned must therefore provide sufficiently substantiated and specific information.
An attribution of the “knowledge” of AI tools to the detriment of the platform operator deploying them is probably best based on the legal standard set out in Section 166 (1) of the German Civil Code. This standard is also applied when assessing knowledge, or the need to know, of the conduct of third parties. This is particularly appropriate because the distribution of risk when using AI tools in customer contact is comparable to the use of employees or third parties bound by instructions. According to the legal concept of Section 166 (1) of the German Civil Code, those who use third parties in legal transactions can be held liable for their conduct. By using employees, the principal can extend his economic possibilities in market transactions. In return, however, he must also bear the risk of errors by, or knowledge of, those employees. He must therefore organise his business internally in such a way that knowledge and instructions circulate sufficiently. Otherwise, the principal could avoid legal responsibility by using straw men. It is only consistent to apply the same standards where AI tools are used instead of employees.
A Canadian tribunal, the British Columbia Civil Resolution Tribunal, took a similar view in Moffatt v. Air Canada (2024 BCCRT 149). In this case, a customer brought a claim against the airline Air Canada because the AI chatbot on the airline's website had given him incorrect information about discounted bereavement fares and he had spent almost CAD 800 in vain as a result. Air Canada tried to defend itself by arguing that the AI chatbot was a separate legal entity and therefore responsible for its own behaviour. This argument did not convince the tribunal. In its view, it made no difference whether the information came from a static website or from an AI chatbot integrated into the website. In either case, Air Canada as the operator was responsible for the content of its website.
Requirements for the suitability of an AI tool
But what are the requirements for the suitability of the AI tool? As is so often the case in law: it depends. The suitability of an AI tool must be measured in each individual case according to the area of application and the distribution of tasks.
This question was expressly left open in the above-mentioned ruling of the Munich Regional Court I, as the platform operator's messages following the notification of the possible infringement referred to an alleged “completed review” which had not revealed any infringement. Therefore, in the court's view, the platform could no longer “hide” behind a claimed lack of substantiation of the complaint.
However, the court decision shows that the AI tools used are in any case suitable for accepting complaints and similar messages if they offer the user real added value and do not merely return prefabricated text templates without closer examination. In concrete terms, this means that hallucinations of the AI system which lead to incorrect decisions because of an inadequate data basis must be ruled out. The system must be programmed and trained in such a way that, in case of doubt, it requests further data on the infringement instead of rejecting the user's enquiry without a more in-depth check. The fewer incorrect and/or hallucinated decisions the AI system makes, the more suitable it is for this purpose.
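For illustration only, the following minimal Python sketch shows the kind of triage logic this points towards; all names and thresholds are hypothetical assumptions and are not taken from the decision or from any real platform system. The point is that a report is either escalated or answered with a request for further substantiation, but never dismissed with a bare template and no closer examination.

# Illustrative sketch only: hypothetical names and thresholds, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class Report:
    reported_account: str
    description: str                                 # the reporter's account of the infringement
    evidence_urls: list[str] = field(default_factory=list)

def substantiation_score(report: Report) -> float:
    """Placeholder for the AI tool's assessment of how substantiated the report is."""
    score = 0.0
    if len(report.description) >= 100:               # assumed proxy for a concrete description
        score += 0.5
    if report.evidence_urls:
        score += 0.5
    return score

def handle_report(report: Report) -> str:
    score = substantiation_score(report)
    if score >= 0.8:
        # Sufficiently substantiated: escalate for removal / human review.
        return "escalate_to_review"
    # In case of doubt, ask for further data on the infringement instead of
    # rejecting the enquiry without a more in-depth check.
    return "request_further_substantiation"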
The requirement that the user must be able to substantiate a complaint is also reflected in a decision of the Higher Regional Court of Frankfurt am Main (docket no. 16 U 195/22). In this judgment, the court ruled that the platform operator can only be held liable if the complaints of the person concerned, regardless of whether they are actually true or false, are formulated in such concrete terms that an infringement can readily be assessed on the basis of those allegations. This also makes sense in light of the notice-and-takedown system.
Summary
The question of knowledge attribution to the detriment of the user of AI tools is ultimately “old wine in new bottles”. The established legal principles on liability for the use of employees can also be applied to the use of AI tools as “virtual employees”, taking into account the special features of AI. The user bears the risk of deploying AI systems, not least because the decision-making process is, for technical reasons, not transparent. In short: operators are liable for their AI. Whether the operator can then take recourse against the provider of the AI tool depends above all on whether the AI tool is designed to be functionally reliable, sufficiently well programmed and sufficiently error-resistant, and on which liability provisions have been agreed in the contract with the provider of the AI tool.