The time has finally come: after several radical revisions, the European Parliament adopted the AI Regulation (the ‘AI Act’) on 13 March 2024.(1) This groundbreaking Regulation introduces an elaborate regulatory framework for AI systems, aimed primarily at providers but also at users. Because AI is increasingly integrated into software, practically every organization will qualify as a user of an AI system under the AI Act. In this article, we discuss the key points to address in contracts for the procurement of (high-risk) AI systems.
Obligations under the AI Act
The AI Act follows a risk-based approach, in which the obligations of providers and users (referred to as ‘deployers’ in the Act) depend on the risk posed by an AI system. The bulk of the AI Act’s obligations applies only to providers of ‘high-risk’ AI systems, such as tools used for biometric identification, for the allocation of benefits, or for the selection of CVs in application procedures.(2) Limited-risk AI systems (such as chatbots), generative AI (such as ChatGPT), and generators of content such as images and videos (such as Midjourney and DALL-E) are subject to limited obligations, with a focus on transparency. The obligations applying to these systems have not yet been fully defined and will be elaborated over time, for example in codes of practice still to be published. Because of their broad deployability, General Purpose AI (‘GPAI’) models – including the large language model GPT-4 – are subject to a specific regime covering, among other things, technical documentation, transparency about the training of the models, and a copyright compliance policy, while models posing ‘systemic risks’ are subject to additional obligations.
Contractual safeguards
When AI systems are procured, the ability of the buyer or user to fulfil its obligations under the AI Act must be safeguarded. As observed above, the obligations apply mainly to providers of high-risk AI systems, but users too have a limited set of obligations, which are in practice often passed on to them. Users are obliged, for example, to store log files, to notify the operator of the AI system of any incidents, to ensure that input data are relevant, to implement human oversight, and to comply with certain reporting and registration duties.(3) In addition, a buyer will want to agree with the provider that the provider meets its obligations under the AI Act.(4) When high-risk AI is procured, the purchase agreement must contain at least the following obligations of the provider:
- Implementing a risk management system that estimates and evaluates the risks the AI system may pose to health, safety or fundamental rights, and taking risk management measures on that basis;
- Ensuring appropriate data governance for the data sets that are used to train and test the AI system;
- Making technical documentation and user manuals available;
- Logging (or facilitating the logging of) the use of the system and any problems occurring in this process;
- Ensuring that the operation of the AI system is sufficiently transparent to enable users to interpret the system’s output and use it appropriately;
- Making it possible for natural persons to oversee the functioning of the AI system (‘human oversight’);
- Guaranteeing that the AI system achieves an appropriate level of accuracy, robustness, and cybersecurity;
- Performing preventive and corrective maintenance on the AI system, including where the AI system generates undesirable or harmful output;
- Providing a valid EU Declaration of Conformity.
For high-risk AI systems these obligations follow from the AI Act, and we expect them to find their way into agreements, in greater or lesser detail. When other AI systems are procured, the obligations from the AI Act may also serve as inspiration for contractual arrangements, since such arrangements are evidently sensible and desirable for users. However, time will tell to what extent providers of limited-risk or minimal-risk systems will be willing to assume such extensive obligations. Standard practices for the contracting of AI systems will have to develop in the coming period.
Apart from the AI Act, there are other points of attention that must be addressed in agreements for the procurement of AI, such as:
- Further processing of the data entered by the user, the prompts used, or the generated output;
- Liability arrangements for (undesirable) output of the AI system;
- Arrangements concerning the training data used for the AI model, including an appropriate allocation of the intellectual property rights involved and indemnities against third-party claims for infringement of intellectual property rights;
- Audit rights to verify the other party's compliance with the obligations under the AI Act and/or the agreement;
- Etc.
In any case, it is important that the contractual arrangements match the specific situation, having regard to at least the characteristics of the AI system, the relationship between the parties, any identified risks of the AI system, and the obligations to which the parties are subject under the AI Act.
Next steps
Although most obligations under the AI Act will only enter into effect in about two years, it is wise to identify now what impact the AI Act will have on your organization and which contractual safeguards will be necessary. You can take the following concrete steps today:
- Identify which AI systems are in use or will be put into use, and for which purpose.
- Classify the identified AI systems and assess which obligations from the AI Act your organization is subject to for these systems.
- Incorporate appropriate contractual arrangements based on the obligations in place into existing and new contracts, and implement the obligations in your organization.
- Update and repeat this process periodically; AI systems and the associated regulations (as well as their interpretation) are subject to constant change, so keeping abreast of the latest developments is essential.
____________________________________________________________________________________________________
Footnotes
(1) The adopted text is available here: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html.
(2) Annex III to the AI Act sets out a list of high-risk AI systems.
(3) On top of this, additional obligations apply in some cases. For example, a Fundamental Rights Impact Assessment (“FRIA”) will have to be performed in specific cases (see Article 27 AI Act). In addition, a user who modifies an AI system or its intended purpose may qualify as a provider under the AI Act and become subject to the obligations associated with the provider role (see Article 25 AI Act).
(4) The requirements for high-risk AI systems are set out in Chapter III of the AI Act; the specific obligations of providers are set out in Section 3 of that Chapter.