
21.08.2023

Artificial intelligence and liability: Who must be held accountable when AI causes damage?

Wherever you look at the moment, you are confronted with artificial intelligence, or "AI", almost daily. Since ChatGPT at the latest, it has become clear that AI will sooner or later make a major contribution to society. AI is already being used in many sectors of business because it can support people in their everyday lives and work and optimise processes. But what happens if, in an exceptional case, the AI makes a wrong decision and causes damage as a result? Is the AI itself "to blame" in such cases, or must its errors be attributed to a human being, who in turn is held liable?

The following examples illustrate this: Imagine an autonomous vehicle driving on the road. Suddenly the traffic lights turn red and a pedestrian runs across the road. The AI in the vehicle erroneously decides to continue driving. What happens next is clear: there is an accident and the pedestrian is seriously injured. Who can be held liable if an accident is caused by an autonomous vehicle? Must the manufacturer of the vehicle be held liable, or rather the driver, or perhaps the owner?

Another example: A company uses an AI programme that screens applications and makes a preselection. The AI incorrectly decides to shortlist only male applicants and no female applicants, or only applicants without a migration background. The same question arises: if certain applicants are rejected because of this decision, should the company using the AI be held liable, or rather the manufacturer of the AI software?

A final example: What happens if an AI-supported diagnosis in the healthcare sector is faulty and this leads to incorrect treatment by medical staff? Against whom can the patient assert liability claims: the doctor, the hospital or the AI manufacturer?

These examples are only a few of many that can occur in practice. One thing is certain: if a liability case arises, it can become very complex. So what should manufacturers and users of AI systems look out for in order to avoid liability claims? The following article is intended to provide an insight into the liability issues in connection with AI.

Liability of manufacturers

So how can an injured party take action against the AI manufacturer? This requires a look at the possible legal bases. In particular, the following options are available to injured parties:

  • Contractual liability
  • Producer liability
  • Product liability

In detail:

1. Contractual liability

Imagine that an AI manufacturer offers AI software to a hospital and contractually assures the hospital that the AI will be able to make a diagnosis within 12 hours. However, the AI only delivers a result after the 12 hours have elapsed. In this case, the hospital can take action against the AI manufacturer under the contract, since this feature was contractually assured.

Practical advice:

Companies should pay close attention to the agreed scope of services during contract negotiations. As a general rule, the fewer services AI manufacturers promise, the lower their liability risk. For this reason, careful thought should be given in advance to the specific obligations to be included in the contract.

2. Producer liability

In addition, injured parties can claim against manufacturers on the basis of producer liability. Producer liability comes into consideration if, for example, life, body, health or property is harmed after a product has been placed on the market. The central prerequisite for such a claim is the breach of a duty to ensure safety. The reason is obvious: whoever creates a source of danger or places such a product on the market is obliged to take appropriate measures to ensure the safety of others. In the example above, the AI manufacturer is therefore obliged to ensure that no dangers emanate from the autonomous vehicles it has developed. How demanding this duty to ensure safety actually is depends on the individual case: for an autonomous vehicle, the requirements are naturally very high, whereas the situation is different for an AI used in the maintenance industry. In this context, breaches of the duty to ensure safety are recognised in particular in the form of:

  • Design defects
  • Manufacturing defects
  • Organisational errors
  • Instruction errors
  • Product monitoring errors

Practical advice:

To avoid breaching a duty of care and thereby triggering liability claims, manufacturers should draw up a carefully planned quality management system and implement it in their business.

3. Product liability

In addition, injured parties can bring claims against manufacturers under product liability law. According to section 1 para 1 cl. 1 ProdHaftG, such liability comes into consideration if a defective product kills a person, injures a person's body or health, or damages an item of property. The special feature compared to producer liability is that the manufacturer is liable regardless of fault if it places a defective product on the market.

Liability of users

Users of AI can also be held liable by injured parties. Liability claims can arise in particular from:

  • Contractual liability
  • Strict liability
  • Professional liability
  • Liability for discrimination
  • Liability under data protection law

Some of these bases of liability in detail:

1. Strict liability

Injured parties may be able to take action against the user on the basis of strict liability. Strict liability means that whoever creates a source of danger must, in principle, be liable for the resulting damage regardless of fault. Take again the example of the autonomous vehicle mentioned above: if an accident occurs due to a faulty decision of the AI, the injured party can in principle take action against the owner of the vehicle under section 7 para 1 StVG.

2. Professional liability

In addition, claims by injured parties against professionals must also be considered. If a patient is advised incorrectly by medical staff due to faulty AI software and a treatment error occurs as a result, the injured patient may be able to take action against the doctor under professional liability rules. For example, claims for damages can be asserted against the doctor under section 280 para 1 BGB in conjunction with section 630a BGB. In addition, claims may arise from the regulations specific to the profession concerned. In the case of medical malpractice, for example, claims under the principles of German medical liability law (Arzthaftungsrecht) may be considered.

3. Liability for discrimination

Furthermore, in cases of discrimination, injured parties can assert claims for injunctive relief and damages, for example under section 21 AGG. If it turns out that a company uses an AI system to evaluate applicants for vacancies and this AI leads to discrimination, rejected applicants can assert claims under the General Act on Equal Treatment (AGG).

4. Liability under data protection law

If data protection violations occur as a result of the use of AI, data controllers may also face heavy fines, in particular under the General Data Protection Regulation (GDPR).

Outlook for the European Agenda

The above explanations show that there are already quite extensive liability rules at the national level. At the same time, the AI Act is being discussed at the European level; it is intended to create a uniform legal framework for AI in the EU in order to ensure the responsible use of AI (see also our SKW Schwarz landing page: Artificial Intelligence Act). Furthermore, the creation of a new EU directive on AI liability is also being discussed (see also our SKW Schwarz landing page: AI Liability Directive).

Summary

The many advantages of AI are one of the main reasons why many companies use it in their business or even produce AI products themselves. Either way, there is no getting around the topic of AI. If you are considering integrating AI into your company, are already using it, or develop and produce AI yourself, we are at your side with our professional expertise and will be happy to advise you on minimising liability risks in connection with AI. Please feel free to contact us.

Authors

Fabian Bauer, Counsel

Marius Drabiniok, Associate

Dr. Oliver Hornung, Partner

Marwah Kamal, Associate