Liability issues in the use of AI based decision-making systems

Newsletter

Published 01 April 2020

The use of automated decision-making systems based on artificial intelligence (“AI”) has revolutionized many businesses. AI now appears in everything from self-driving cars to the development of new drugs and vaccines. In some cases, AI systems act only as decision support, for example in the diagnosis of diseases or in the recruitment of new staff. The introduction of AI in ordinary businesses raises new questions about accountability when errors or deviations in an automated decision made by an AI system lead to losses. This newsletter addresses the different stakeholders that may be subject to accountability and how liability may be apportioned between them. The newsletter does not aim to be exhaustive but should rather serve as an eye-opener for the liability issues that arise with the use of AI systems.

1. The Stakeholders

Who, then, are the various stakeholders who may be held accountable? We have chosen to divide them into three main categories: first, the developers of the AI systems and the underlying algorithms (the “Developer”); second, the deployers or purchasers of the systems (the “Deployer”), who will normally also be responsible for the machine learning process; and last, the users or individual end users (the “Users”). This is of course a simplified division of accountable stakeholders. In some cases, a Developer may have a greater responsibility, for example if the development of an AI system takes place within a business that provides ready-made systems. Say that a medical device company provides a ready-to-use cancer diagnostic tool; in such a case the Developer and the Deployer would be the same person and would therefore carry a higher degree of accountability. In other contexts, a Deployer may be the same person as a User of the AI system, for example a business that has ordered a tailor-made AI system for its own use. An additional and quite different situation arises when the AI tool continues to learn on its own in connection with its use, for example a self-driving car that learns and remembers road sections and traffic situations at different times of day and adjusts to this new data by choosing other roads or speeds. We also note that there are third-party providers of datasets for machine learning who will have an important role in the chain of accountable parties, and, to make it even more complex, there is open data on the internet available for the AI system’s own machine learning. In these contexts, the matter of accountability becomes even more complex and difficult to assess.

In this newsletter, we will therefore consider these three main categories of stakeholders and how liability may come to be distributed between them.

2. The Developers

What role does a Developer play from an accountability perspective? A Developer may, for instance, be held accountable for ensuring that an algorithm works properly in accordance with a functionality specification provided by a Deployer, but also for the safe processing of datasets provided in connection with the machine learning process. When the Developer acts as an independent party in relation to a Deployer who has ordered an AI system, most of the liability issues will be regulated in a development agreement between the parties. The development agreement will then be linked to an assurance by the Developer to provide a system that functions in accordance with the Deployer’s functionality requirements, normally specified in the agreement. To the extent that the system meets the agreed specifications on delivery, liability for errors should be limited in the way it is normally regulated in the development agreement. In such a scenario, the liability of a Developer does not deviate much from the liability of a developer of a standard, non-AI based data system. With an AI algorithm, however, it will not be easy to ensure full compliance with a functional specification at delivery or in connection with an acceptance test. Acceptance tests will normally be based on a limited amount of data, and problems and errors can occur long after the AI system has been deployed in operational use. Returning to the situation where machine learning takes place after acceptance, with the Deployer feeding the system with sets of data, or where it continues in connection with the use of the system, errors may be detected only when it is too late and damage has already occurred. In such cases, it may be questioned whether the AI system actually met the functional specification at the acceptance test or on delivery. This places new demands on contract drafters, who must take into account that an AI system changes over time. How far should the responsibility of the Developer extend? To the extent that the agreement can capture errors resulting from changes that occur after the system has been commissioned, and to the extent that the reason for an error is not gross negligence or intent on the part of the Developer, the limitations of liability in a development agreement should remain in place.

There is also a discussion in the legal debate on a broader responsibility for Developers, especially with regard to the functions of the algorithms: a responsibility for ensuring that algorithms, on ethical and moral grounds, do not contribute to harming society. In this respect, it is important that an algorithm does not result in decisions that may be perceived as discriminatory or unfair to certain social groups. Other issues under discussion include the developers’ responsibility for environmental aspects, where, for example, an AI based system for managing electricity networks can contribute to increased energy consumption when cheap electricity is available, as well as the developers’ involvement in the development of lethal autonomous weapon systems. It remains to be seen how liability can be claimed in such situations and by whom.

3. The Deployers

In order for an AI system to function properly, the system must receive sufficient information to deliver correct decisions. It should be noted here that, according to IBM, about 20 million images of different dogs are required for an AI system to properly distinguish a dog from other four-legged creatures. With this in mind, it is not difficult to understand that a great responsibility rests on the stakeholder responsible for the machine learning. In the previous example of a cancer diagnostic system, it is easy to see that 20 million images may not be enough to ensure that an AI system can identify the difference between a benign and a malignant tumor.

A Deployer who deploys AI services or brings AI products to the market may be accountable for using the AI system to provide legitimate and lawful services. A Deployer who feeds the AI system with data sets in the machine learning process may be held accountable for the correctness of the data fed into the AI system.

An example could be a car manufacturer that has AI systems installed in its vehicles for Advanced Driver Assistance System (ADAS) and Autonomous Drive (AD) applications. If, in the machine learning process, the car manufacturer refers to the wrong traffic rules for the use of the vehicle in a particular country, or does not input sufficient data for the system to distinguish between different traffic signs, there is good reason to believe that the car manufacturer will be liable for damage caused by the vehicle not behaving in the expected, lawful manner.

4. The Users

Users should have an obligation to use AI in a manner that does not harm others. With the previous example of the cancer diagnostic tool, a doctor who uses the AI system to support a decision on the treatment of a patient must not rely solely on the recommendation that the AI system delivers. The doctor must act according to best practice and is therefore responsible for deciding on suitable treatment. The same applies to self-driving cars: a driver must always be prepared to intervene when dangerous situations arise.

5. Legal development

In the legislative process, many different projects are under way to find appropriate solutions to different liability issues. There are already areas where regulations are in place, for example in road traffic, where the harmonization of traffic signs is in force. Other areas of legislation in some countries, e.g. Germany and the UK, concern the responsibility for self-driving cars. There are also areas where traditional legislation already applies, such as product safety and product liability.

Major work is under way both in the EU and in the US which, similarly to this newsletter, seeks to identify the problem areas and whether it is possible, primarily, for the stakeholders concerned to self-regulate their activities. A number of policy documents have been produced which address, among other things, liability issues, but also other policy issues that should serve as guidance to the stakeholders pending final regulation.