Recommendations on A.I. liability

The forthcoming AI Act will regulate high-risk AI. But ICCL fears its scope is too narrow to protect our rights.

1 February 2022: ICCL's Dr Kris Shrishak has answered the European Commission's request for views on how liability rules should be adapted for AI, in a letter sent on 7 January 2022.

As he notes, ICCL is not convinced that product liability is the correct framework to address the hazards of AI. That notwithstanding, his submission presents several observations about the liability rules being proposed.

See ICCL's related work on artificial intelligence.


7 January 2022 

Submission to the consultation on adapting the liability rules to the digital age and artificial intelligence

Dear Colleagues,

  1. Irish Council for Civil Liberties (ICCL) is Ireland’s oldest independent human rights monitoring organisation. We welcome the opportunity to contribute to the consultation on improving the Product Liability Directive (PLD)[1] for the digital and circular economy and to address liability rules for artificial intelligence (AI) systems. 

  2. We are not convinced that product liability is the correct framework to address the hazards of AI. The fundamental rights at risk, cited in Section 3.5 of the Commission’s explanatory memorandum of the AI Act,[2] go beyond consumer protection. We reserve our view on that question, however, for a separate discussion. That notwithstanding, we present the following observations. 

  3. Product liability for AI should ensure that fundamental rights of natural persons are protected and, when natural persons are harmed, they can access compensation as easily as possible. In the case of AI systems and other complex technologies, the burden of proof should not be on the injured person. 

  4. We have four specific recommendations: 

    1. The definition of 'product' in Article 2 of PLD should be updated to include software.
    2. In addition, Article 6 (1) (c) should be struck out to reflect that software can be updated, which means that the time when software is first put into circulation is irrelevant.
    3. ‘Development risk defence’ in PLD should be removed for all AI systems, to make it easy for the injured person to receive compensation.
    4. The burden of proof should be on the defendant when harm is caused by AI systems. 

    Below, we elaborate on each recommendation in turn.

Update the definition of product in the Product Liability Directive

  1. The definition of a product should be updated. Article 2 of PLD defines a product as follows: 

    For the purpose of this Directive 'product' means all movables, with the exception of primary agricultural products and game, even though incorporated into another movable or into an immovable. 'Primary agricultural products' means the products of the soil, of stock-farming and of fisheries, excluding products which have undergone initial processing. 'Product' includes electricity.

    This should be updated to clearly state that software is a product.

Remove the reference to the time when software is first put into circulation

  1. Article 6 (1) of PLD is problematic. It states: 

    A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including:
    (a) the presentation of the product;
    (b) the use to which it could reasonably be expected that the product would be put;
    (c) the time when the product was put into circulation. 

  2. This should not apply to software. "The time when the product was put into circulation" is irrelevant for software, because it can be regularly updated after that time by the producer/provider.[3] For this reason, Directive 2019/770 (Supply of Digital Content and Digital Services) included software updates as a requirement for conformity.[4] This change to Article 6 (1) is particularly relevant for AI systems, because not only the software code but also the AI model can be updated after the product has been placed on the market. 

  3. We are concerned that even this updated definition may not adequately capture what a defect could mean in the context of AI systems. For example, the following questions trouble us. What happens if the AI system performs in a manner not foreseen by the provider? And what happens if the harm is caused by the self-learning and autonomous behaviour of an AI system? These AI systems may function well when placed on the market but may later malfunction while in use.

Reduce obstacles to compensation

  1. ‘Development risk defence’ should be removed for all AI systems so that producers remain strictly liable for damage. 

  2. Under Article 7(e) of PLD, ‘development risk defence’ exempts producers from liability if they can prove "that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered." 

  3. This exemption is inappropriate for AI systems for the following reasons: 

    a) Producers should be held liable if they do not update software when new information about a defect becomes known, in scientific literature or elsewhere. Whether the scientific and technical knowledge was available when the software was initially placed on the market should not be considered.

    b) PLD assumes “material objects placed on the market by a one-time action of the producer, after which the producer does not maintain control over the product.”[5] As we note above, and as Directive 2019/770 recognises, this is not applicable to software that can be updated.

    c) ‘Development risk defence’ creates a perverse incentive for producers to deploy products without adequate safety research, or to delay safety-related research and/or the publication of research results until the product is placed on the market.

    The problem of perverse incentives is critical for AI systems. Since the data and computing infrastructure required to research the safety of AI systems are concentrated in industry,[6] it is essential that industry be correctly incentivised.

Reverse the burden of proof

  1. The burden of proof should be on the defendant when harm is caused by AI systems. 

  2. For natural persons harmed by AI systems, gathering evidence is disproportionately difficult. There is little transparency about how AI systems operate, or how and whether they are used by companies and public bodies. Indeed, it can be difficult for a person to know whether a private company or a public authority is using AI systems. For example, without making an access request under Regulation 2016/679 (GDPR), one may not know whether their claim for a welfare benefit was evaluated and unfairly dismissed by an AI system. One must first know an AI system is in use in order to make a claim, yet even lawmakers have struggled to find out what algorithms and AI systems are being used by public services.[7] How do we expect the rest of the public to find out?[8] 

  3. The information asymmetry between the producers of AI systems and the public is significant. AI systems, and the different techniques and trade-offs between them, are unlikely to be understood by the public. Even if they were, updates and self-learning may make it impossible for people to remain informed. It is therefore the producer, and not the injured party, who has the means to gather evidence. 

  4. My colleagues at ICCL and I are eager to support you in your deliberations on these matters, and are at your disposal.

On behalf of ICCL,


Dr Kris Shrishak
Technology Fellow, ICCL

Download letter (PDF)

Notes

[1] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products

[2] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021

[3] We use the terms provider and producer interchangeably. We use the term 'provider' as defined in Article 3 (2) of the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021

[4] Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services, Article 8 (2): "The trader shall ensure that the consumer is informed of and supplied with updates, including security updates, that are necessary to keep the digital content or digital service in conformity…"

[5] Expert Group on Liability and New Technologies – New Technologies Formation. Liability for artificial intelligence and other emerging technologies. 2019. p. 28. URL: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=63199

[6] Murgia, M. AI academics under pressure to do commercial research. Financial Times. 13 March 2019. URL: https://www.ft.com/content/94e86cd0-44b6-11e9-a965-23d669740bfb

[7] Feathers, Todd. “Why It’s so Hard to Regulate Algorithms.” The Markup. 4 January 2022. URL: https://themarkup.org/news/2022/01/04/why-its-so-hard-to-regulate-algorithms

[8] Article 60 of the AI Act provides for a publicly accessible database. The Commission's draft allows for only limited transparency. It can be improved by obliging providers to submit information on all the users who use the AI system, so that the public can access this information.