The new EU law on AI
- August 6, 2024
- Posted by: Mutke Müller
- Categories: IT and Data Protection Law, Labour Law
The new EU regulation on artificial intelligence has been passed and will now gradually come into force. It is the first law in the world to regulate the use of AI systems. The aim of the regulation is to establish clear rules for the use of AI systems in order to ensure that they are used in a way that respects the fundamental rights of citizens in the EU. At the same time, it is intended to promote competition within the EU and to strengthen Europe’s position in the global AI race.
Due to the comprehensive provisions of the AI Act, it is already worthwhile for employers to familiarize themselves with the catalog of obligations.
The AI Act provides for the following regulations:
- Scope of application
The AI Act defines an AI system as a machine-based system that is designed to operate with varying degrees of autonomy and that, for explicit or implicit objectives, can generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments.
The addressees of the AI Act are, first of all, providers of an AI system, i.e. natural or legal persons who develop an AI system, or have one developed, in order to place it on the market or put it into service under their own name or brand. However, the Act also imposes obligations on operators of an AI system. Operators are defined as natural or legal persons who use an AI system under their own responsibility, unless the system is used solely for personal activities. Employers will generally be operators within the meaning of the regulation if they use AI systems in the personnel area. Something different may apply, however, if employers change the intended purpose of an AI system or make significant changes to it: in that case, they may also qualify as providers within the meaning of the AI Act and become subject to the corresponding provider obligations.

In geographical terms, the Act applies to all companies based in the EU. Beyond that, it applies regardless of the operator’s location if the output produced by the AI system is used in the EU.
- Catalog of obligations
The AI Act follows a risk-based approach that classifies AI systems into three risk categories. The AI Act’s list of obligations for providers and operators depends on the risk level of the AI system. The regulation essentially distinguishes between systems with an unacceptable risk, systems with a high risk and systems with a low or minimal risk.
a) AI systems with an unacceptable risk
The Act prohibits the placing on the market, putting into service and use of AI systems that are considered incompatible with the fundamental rights of the European Union. These include, for example, AI systems that aim to manipulate behavior and exploit vulnerabilities. AI systems that evaluate people on the basis of personal characteristics such as race, gender, religion or political beliefs are also covered by this ban. In the workplace, this could extend to systems designed to push employees to work harder, for instance through “reward programs”. The ban could likewise apply where employers want to use AI systems to evaluate employees, for example in order to decide whether to dismiss or transfer them; such a practice could violate the prohibition of so-called “social scoring”. AI systems that infer the emotions of a natural person in the workplace or in educational settings are also considered an unacceptable risk and are prohibited.
In the future, employers will have to check whether the AI systems they use violate the prohibitions described above. This is particularly important because a breach can result in severe fines: the AI Act provides for fines of up to EUR 35 million or 7% of a company’s total global annual turnover in the previous financial year, whichever is higher.
b) AI systems with a high risk
For high-risk AI systems, the AI Act sets out strict requirements and obligations for providers, operators, distributors and importers as well as other stakeholders along the AI value chain. These systems are likely to be particularly relevant in the context of employment law.
The category of high-risk systems first covers AI systems that are themselves products, or safety components of products, falling under the EU harmonization legislation listed in Annex I of the AI Act and that, as such, must undergo a third-party conformity assessment with regard to health and safety before being placed on the market or put into service. This includes, for example, cars, airplanes, elevators and toys. In addition, generative AI models such as virtual assistants and personalized recommendation systems may also fall under this category.

Furthermore, Annex III of the AI Act lists AI systems that likewise qualify as high-risk systems. These include systems used for remote biometric identification, systems intended to be used for biometric categorization according to sensitive or protected attributes or characteristics (based on inferences about those attributes or characteristics) and systems intended to be used for emotion recognition. In addition, and much more relevant in the employment law context, the Annex also covers AI systems intended to be used
- for the recruitment or selection of natural persons, in particular for placing targeted job advertisements, analyzing and filtering job applications and assessing job applicants;
- to make decisions affecting the terms and conditions of employment or the promotion or termination of employment relationships;
- to assign tasks based on individual behavior or personal characteristics or traits; or
- to monitor and evaluate the performance and conduct of individuals in such relationships.
The obligations that companies have in relation to high-risk systems are determined by the role that the company plays in the use of high-risk AI systems.
Providers of high-risk systems must, among other things:
- prepare detailed technical documentation before introducing a high-risk AI system and keep it up to date. The documentation should demonstrate that the AI system meets the requirements of the law,
- establish and maintain a risk management system,
- ensure that it is possible for the system to be monitored by a natural person during use.
Operators of high-risk systems must:
- take appropriate technical and organizational measures to ensure that the AI systems are used in accordance with the attached instructions for use;
- entrust a person trained for this purpose with the human supervision of the system and provide them with the necessary support. Due to the wording of the provision, this probably does not have to be an employee; in principle, the task could also be assigned to an external third party;
- before putting a high-risk AI system into service or using it in the workplace, inform the affected employees and, where applicable, employee representatives (which is likely to include the works council, for example) that an AI system will be used;
- inform affected employees that they are subject to a high-risk AI system that makes decisions concerning them or provides support for such decisions.
In addition, data subjects have a right to information about the role of the system in the decision-making process and the key elements of the decision. This right exists where the employer, as operator, has made a decision based on the output of a high-risk AI system, the decision produces legal effects for the data subject or similarly significantly affects them, and the data subject states that, in their opinion, the decision adversely affects their health, safety or fundamental rights. Violations of operator obligations can result in fines of up to EUR 15 million or 3% of total global annual turnover.
c) AI systems with a low or minimal risk
This category includes AI systems such as chatbots. AI systems with a low risk are subject to less stringent requirements. However, they must be designed in such a way that users are aware that they are interacting with an AI system.
The AI Act entered into force on August 1, 2024, twenty days after its publication in the Official Journal of the European Union, and will, with a few exceptions, be fully applicable 24 months after its entry into force. The exceptions are staggered as follows:
- the ban on AI systems that pose unacceptable risks applies six months after entry into force;
- codes of conduct apply nine months after entry into force;
- the rules for general-purpose AI apply twelve months after entry into force;
- the obligations for high-risk systems apply 36 months after entry into force.