Introduction
It cannot have escaped anybody’s notice that ChatGPT, the tool that generates output in response to user prompts, has quickly gone viral thanks to its user-friendliness and (at least at first sight) good output. ChatGPT is certainly not the only current successful application of artificial intelligence (“AI”); we see AI everywhere: self-driving cars, suggestions in social media feeds, chess-playing computers, and CV selection tools. Businesses will have to embrace AI technology in order to remain competitive. The ever-expanding scope of AI also raises questions about its desirability: how can AI be used in a way that is both ethical and transparent? From a more legal perspective: how are fundamental rights such as the right to privacy guaranteed, how can the use of AI be supervised, and what about liability? There are reasons enough for the (further) regulation of AI. Where does that regulation stand now, and what does it mean for businesses that use AI applications?
The European Union and AI
Back in February 2020, the European Commission presented its European approach to artificial intelligence, with the purpose of encouraging the development and use of AI in ways that are safe and fair. The first proposal for the AI Regulation was published in April 2021. Such an ‘AI Act’ was the first of its kind, and the result of this ambitious undertaking was met with much praise. However, there were also critical comments: the far-reaching assessment and documentation requirements were said to burden companies with excessive compliance costs, and although the definition of ‘AI’ was very broad, ‘general purpose AI’ (such as the large language models underlying ChatGPT) was excluded from the scope of the AI Regulation. Following much debate on these topics over the past two years, the amended proposal for the AI Regulation was published on 11 May 2023. The framework of the AI Regulation is now clear. The Regulation is expected to be finalised in 2024, after which it will enter into effect step by step between 2024 and 2026, and businesses will have to comply with it.
Implications of the AI Regulation
Although the entry into effect of the AI Regulation will still take a while, preparations for its implementation cannot be postponed. Mapping the use of AI within an organisation, implementing risk and quality management, and drawing up the necessary documentation will be a time-consuming process. All this adds up to a compliance effort comparable in scope to the implementation of the General Data Protection Regulation (GDPR).
In this process, the obligations to which an organisation is subject will depend on the risk posed by the AI system and on the organisation’s role. The AI Regulation takes a risk-based approach: it distinguishes between prohibited AI systems and AI systems with high risk, certain risks, and low risk. Depending on the classification, obligations apply that are intended to promote the safe development and use of AI systems, such as drawing up technical documentation and meeting quality requirements for training data sets. In terms of obligations, the focus is on high-risk systems. A distinction is also made between providers, distributors and users of AI systems. It is mostly providers of AI who will be confronted with many new obligations, but far-reaching obligations will also be introduced for businesses that use AI – as a majority will. This means that exploiting the (seemingly) endless opportunities of AI is not without obligation. If a business fails to comply with its obligations, the responsible supervisory authority may impose sanctions, with fines that may run up to 30 million euros or six percent of global turnover; this is significantly higher than under the GDPR, for example.
Users may also be held liable for damage caused by the use of AI. In this context, the AI Liability Directive, which is currently being prepared, and the Product Liability Directive, for which amendments were recently proposed, are relevant. In brief, these directives contain rules intended to protect consumers against AI and to make it easier for consumers to hold a user or provider of AI liable.
Given the risks and the fact that the coming regulations contain obligations whose implementation will take much time, it is advisable to start this process early and to seek advice on it.
Current regulations and risks of AI
Although the AI Regulation has yet to enter into effect, this does not mean that the use of AI is currently unregulated. On the contrary, this use is governed by several existing laws and regulations: human rights conventions, the GDPR, and laws on intellectual property and the use of data.
From a human rights and privacy perspective, the general principles at the heart of these rules are fairness, accountability and transparency. In other words: the use of AI must be fair and non-discriminatory, the use and functioning of the system must be capable of being explained and accounted for, and the system must be transparent. If a company is already using or offering AI, it is advisable to observe these principles now. Compliance with these general principles will also facilitate the transition to the AI Regulation, since they are also at the heart of the obligations contained in the Regulation.
A practical example of how the implementation of AI may conflict with these general principles is a (former) recruitment tool of Amazon. On the basis of historical data, this tool had taught itself that male candidates were preferable, and it automatically rejected more women. With this bias, the tool would infringe a range of existing rights and obligations, such as the ban on discrimination, the right to equal treatment, and the ban on automated individual decision-making. Under the AI Regulation, such selection tools qualify as high-risk AI systems, which means that their providers will have to meet a large set of obligations, including risk management, data governance and provision of information, and that their users will also have obligations, such as monitoring the functioning of the AI system and keeping logs.
Conclusion
The use of AI is booming, and regulation is following close on its heels. Companies should proactively prepare for the implementation of the AI Regulation, taking into account its impact on their AI applications. While the use of AI in business operations opens up enormous opportunities, it is essential to strike a balance between innovation and responsible use.