Employment Law Newsletter, 07.01.2025

Application of the AI Regulation as of 2 February 2025

The European Regulation 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence ("AI Regulation") came into force last year and applies in stages. Chapters I and II of the AI Regulation will already apply as of 2 February 2025, and with them in particular the prohibition of certain practices in the field of artificial intelligence and the obligation of providers and deployers of AI systems to ensure a sufficient level of so-called "AI literacy" among the persons using those systems on their behalf. Kathrin Vossen, Dr. Axel Grätz and Annabelle Marceau explain what this means and what measures need to be taken by 2 February 2025.

1. Background 

The AI Regulation entered into force on 1 August 2024 and provides for the staggered application of its provisions (Article 113 AI Regulation). The first stage will be reached on 2 February 2025. As of this date, Chapters I and II of the Regulation will be binding:

(a) Chapter I defines the purpose and scope of the AI Regulation. The AI system, as the central subject of the regulation, is defined in Article 3 No. 1 of the AI Regulation. According to the basic concept of the AI Regulation, every AI system is to be assigned to a risk class. The risk class determines the content and extent of the specific obligations of providers and deployers of AI systems. Article 4 of the AI Regulation obliges providers and deployers of AI systems of all risk classes to ensure AI literacy.

(b) Chapter II contains, in its sole Article 5, the prohibition of certain AI practices that pose a clear threat, e.g. because they deceive individuals or deliberately exploit certain characteristics of individuals for manipulative purposes.

2. Obligation to ensure a sufficient level of AI literacy, Article 4 AI Regulation

2.1 Concept of AI literacy

According to Article 4 of the AI Regulation, providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among persons who are involved in the operation and use of AI systems on their behalf, for example as part of their employment relationship. The Regulation clarifies that "AI literacy" means skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (see Article 3 No. 56 of the AI Regulation).

2.2 Content and scope of AI literacy

The aim of AI literacy is to equip the persons concerned with the skills for dealing with AI, knowledge of the technology and an understanding of the respective use case. Given the complexity of AI, there is a particular need to clarify how far-reaching the technical knowledge of staff needs to be.

The purpose of the Regulation is to ensure an appropriate level of protection of the health and safety of those affected by the use of an AI system. Article 4 of the AI Regulation primarily has a risk-minimising function. Accordingly, the content of the knowledge transfer needs to be adapted to (i) the addressees, (ii) their role in the use of an AI system and (iii) the specific use case of the AI system (see recital 20 sentence 2 of the AI Regulation). For this reason, the developers of a high-risk AI system programmed specifically for their own company must, for example, be given more in-depth and comprehensive training than the users of a purchased, standardised low-risk AI system.

At the same time, the circumstances of the individual company must also be taken into account: a start-up will not be able to invest the same resources in its measures to ensure AI literacy as a global corporation.

2.3 Measures to ensure AI literacy

Providers and deployers are now faced with the challenge of taking suitable measures to ensure AI literacy. Here, they can fall back on familiar compliance tools:

(a) Internal guidelines and standards: Internal company guidelines on the use of AI systems, which are mandatory for all employees, support the creation and maintenance of a sufficient level of AI literacy. A corporate culture for dealing with AI is ultimately developed through uniform standards that are continuously monitored and consolidated.

(b) Further education and training: AI technology and its further development are dynamic, and the acquisition of AI literacy is therefore an equally dynamic process. Building on basic training, regular follow-up training on AI systems will ensure staff’s sustained AI literacy.

(c) Certification: Service providers are increasingly offering AI workshops with certification. In future, certificates from renowned institutes can serve as proof of employees' AI literacy.

(d) Internal contact points: Vertical knowledge transfer can be supported by central contact points within the company, e.g. by appointing internal AI officers.

(e) Voluntary codes of conduct: In future, voluntary codes of conduct are also to be published in order to promote the AI literacy of persons involved in the development, operation and use of AI (see recital 20 sentence 5 of the AI Regulation).

3. Prohibition of certain practices in the field of AI, Article 5 of the AI Regulation

In addition, certain practices in the field of AI will be prohibited from 2 February 2025 onwards. These include, for example, AI systems that use manipulative or deceptive techniques, social scoring and systems for recognising emotions in the workplace.

Even if most of the responsible parties are likely to deny the use of such practices in their companies, a critical look should be taken at the AI systems already in use. For example, it is already possible to ask AI applications for information on the mood of individual participants in an online meeting in order to determine their excitement, anger, amusement or sadness. Only where these AI systems are used for medical reasons, for example for therapeutic purposes, or for safety reasons, for example to recognise tiredness when driving a vehicle, can this practice be permissible.

4. Concrete need for action

As 2 February 2025 is drawing near, there is a concrete need for action. We recommend taking the following measures immediately: 

  • Examination of already used or planned AI systems with a view to prohibited practices under Article 5 of the AI Regulation;
  • Identification of employees and other third parties who come into contact with AI systems in the company and their division into groups according to their level of knowledge in order to identify specific training needs;
  • Drafting of a training plan (including necessary follow-up training);
  • Implementation of a code of conduct for dealing with AI;
  • Appointment of AI officers for the coordinated management of all necessary AI measures; 
  • Consideration of any co-determination rights of works councils.

In particular, the measures required to ensure a sufficient level of AI literacy are likely to depend heavily on the specific circumstances of the organisation in question.

We would be pleased to support you in implementing the requirements of the AI Regulation in your company! 


Kathrin Vossen

Partner, Rechtsanwältin, Specialized Attorney for Employment Law

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 352
M +49 173 3103 154


Annabelle Marceau

Junior Partner, Rechtsanwältin, Specialized Attorney for Employment Law

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 347
M +49 172 4610 760
