AI Act adopted: a look at the EU Parliament's revised version

The EU Parliament adopted the AI Act on Wednesday, 13 March 2024. It now only needs to be confirmed by the Council, so the AI Act could come into force as early as June 2024. Many provisions have been moved around in the EU Parliament's revised version, which makes it all the more worthwhile to take a closer look at the updated text of the regulation.

The AI Act provides for a staggered entry into force of its provisions (see Art. 113 AI Act; a short scheduling sketch follows the list below):

  • The general provisions and regulations on prohibited AI (Chapters I and II of the AI Act) are effective six months after the AI Act enters into force, Art. 113 (a) AI Act.
  • The regulations on high-risk AI are generally effective 24 months after the AI Act enters into force (Art. 113 AI Act). The regulations contained in Chapter III Section 4 and Chapter VII already apply twelve months after the AI Act enters into force, Art. 113 (b) AI Act. High-risk AI systems that are subject to regulation within the meaning of Annex I of the AI Act only become subject to the relevant rules after 36 months (Art. 113 (c) AI Act).
  • The special regulations on GPAI models (Chapter V) take effect twelve months after the AI Act enters into force, Art. 113 (b) AI Act.
  • The remaining provisions are effective 24 months after the AI Act enters into force, Art. 113 AI Act.
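
For illustration, the staggered timeline can be tallied with a short script. This is only a sketch: the entry-into-force date of 1 June 2024 is an assumption based on the outlook above, not a fixed date, and the milestone labels merely paraphrase Art. 113 AI Act.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date by whole months (day clamped to 28 for simplicity)."""
        month_index = d.month - 1 + months
        return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

    # Hypothetical entry into force; June 2024 was only anticipated at the time of writing.
    entry_into_force = date(2024, 6, 1)

    milestones = {
        "Chapters I and II (general provisions, prohibited AI), Art. 113 (a)": 6,
        "Chapter III Section 4, Chapter V (GPAI models), Chapter VII, Art. 113 (b)": 12,
        "Remaining provisions, incl. most high-risk rules, Art. 113": 24,
        "High-risk AI systems within the meaning of Annex I, Art. 113 (c)": 36,
    }

    for provision, months in milestones.items():
        print(f"{add_months(entry_into_force, months).isoformat()}: {provision}")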

The AI Act differentiates between

  • prohibited systems with an "unacceptable risk" (see Section 1),
  • so-called high-risk systems, which make up the central regulatory object of the Act (see Section 2),
  • AI systems with a low risk (see Section 3) and
  • general purpose AI models (GPAI models), which can be used in a variety of ways and therefore cannot be easily assigned to a risk class (see Section 4).

1. Risk class "unacceptable risk" 

AI systems with an unacceptable risk may not be placed on the market, put into operation or used. According to Art. 5 AI Act, this includes, for example, systems that use manipulative techniques or exploit personal weaknesses with the aim of persuading people to engage in harmful behaviour. Such harm can be of a physical, psychological or even financial nature (Recital 29 AI Act). This prohibition thus addresses, for example, AI systems that specifically target older people and exploit their financial insecurity in order to sell riskier financial products.

Art. 5 AI Act does not apply to AI systems that are placed on the market or operated exclusively for military, defence or national security purposes or that are developed and put into operation for the sole purpose of scientific research and development, Art. 2 (3) and (6) AI Act.

2. Risk class "high risk" 

High-risk AI systems within the meaning of Art. 6 AI Act pose a high risk to the health and safety or fundamental rights of natural persons and are listed exhaustively in Annexes I and III to the AI Act. This initially includes systems that fall under the EU product safety regulations listed in Annex I to the AI Act and may only be placed on the market or put into operation under the conditions described there, e.g. AI systems that are incorporated into toys, medical devices or protective equipment. In addition, Annex III to the AI Act lists various areas in which the use of the systems results in categorisation as high-risk AI.

The reference in Art. 6 (1) (b) AI Act to Annex II instead of Annex III, which in the previous version still listed the areas of application of high-risk AI systems, is apparently a drafting error.

The most important example is AI systems in the area of HR, which are used by employers, for example, during the application process to recruit or select their employees. Pursuant to Section 87 (1) No. 6 of the German Works Constitution Act [Betriebsverfassungsgesetz, BetrVG], the introduction and use of such AI systems is subject to co-determination. In this context, the Labour Court [Arbeitsgericht, ArbG] of Hamburg (decision of 16 January 2024 - 24 BVGa 1/24) recently ruled that the use of ChatGPT via employees' private accounts is not subject to co-determination. However, the co-determination obligation does indeed exist if the employer provides such accounts.

According to Art. 7 of the AI Act, the Commission may add further high-risk systems to Annex III.

2.1 Requirements for the design and development of high-risk AI systems

Art. 8 to Art. 15 AI Act set requirements for the design and development of high-risk AI systems:

  1. According to Art. 9 AI Act, the provider must identify and assess potential risks of the system and limit them through risk management measures (risk management system) in order to reduce the residual risks to an acceptable level. Applied to AI systems for employee management, this means in particular that company guidelines regulating the authorisations for and use of the systems can reduce the risks of incorrect use (cf. Art. 9 (5) (c) AI Act).
  2. High-risk AI systems may only be trained with data that fulfils certain quality requirements, Art. 10 AI Act. Specifically, this data should be relevant, sufficiently representative and as error-free and complete as possible with regard to its purpose. With these generic requirements, the legislator wants to prevent so-called bias, for example self-driving cars deciding against certain groups of people in collision conflicts or certain applicants being automatically excluded in the application process because they were not represented in the data set (see the illustrative sketch after this list). This also requires the processing of sensitive data such as health data, which is generally prohibited under Art. 9 (1) GDPR. Recital 70 AI Act now clarifies that the processing of this data to prevent bias in AI constitutes an important public interest within the meaning of the authorisation criterion of Art. 9 (2) (g) GDPR.
  3. Art. 11 to 13 of the AI Act contain requirements for technical documentation and record-keeping obligations, which serve, among other things, to prove the system's compliance with the requirements of the AI Act. In addition, high-risk AI systems must be designed and developed in such a way that they can be effectively monitored by natural persons, Art. 14 AI Act. In view of the so-called black box of AI systems, this "effective monitoring" cannot mean that providers must be able to understand and consequently correct all processing operations of the system. Accordingly, the supervisory measures are also linked to an AI system's degree of autonomy, see Art. 14 (3) AI Act. In the case of fully autonomous systems, supervision will therefore be limited to monitoring and checking the output and, if necessary, shutting the system down (Art. 14 (4) AI Act). It is also conceivable to install operating restrictions that the system cannot override independently (see Recital 73 AI Act).
  4. Finally, the systems must be developed in such a way that they have an appropriate degree of robustness against environmental influences, such as new usage contexts or cyberattacks, and even against errors within the algorithms used, Art. 15 AI Act. This is particularly aimed at AI systems based on artificial neural networks, as they can theoretically continue to "learn" and change their topology, making them susceptible to the injection of faulty data sets. For example, attacks on an AI system through repeated confrontation with unexpected situations are conceivable, as a result of which the AI treats these unusual situations as the norm, for example in the case of repeated coordinated share sales. The influence of such anomalies on the pattern recognition of a system needs to be limited (see Recital 75 AI Act).
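
The data quality requirement in item 2 can be pictured with a minimal representativeness check. The record format, attribute name and threshold below are invented for illustration; Art. 10 AI Act does not prescribe any concrete metric.

    from collections import Counter

    def representation_report(records, attribute, min_share=0.05):
        """Share of each group for a given attribute in a training set,
        flagging groups that fall below a minimum share."""
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {
            group: {"share": n / total, "underrepresented": n / total < min_share}
            for group, n in counts.items()
        }

    # Hypothetical recruiting data set with an age-band attribute
    applicants = [
        {"age_band": "18-34"}, {"age_band": "18-34"},
        {"age_band": "35-54"}, {"age_band": "55+"},
    ]
    print(representation_report(applicants, "age_band", min_share=0.3))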

2.2 Programme of obligations for providers, importers, distributors and operators of high-risk AI systems

Art. 16 to Art. 27 AI Act impose obligations on providers, importers, distributors and operators of high-risk AI systems:

  1. Providers and operators of AI systems are subject to the most extensive obligations. Providers develop an AI system or GPAI model and place it on the market or put it into operation under their own name or brand, Art. 3 No. 3 AI Act. Operators use AI systems on their own responsibility as part of a professional activity, Art. 3 No. 4 AI Act. Providers are therefore generally developers of AI systems, while operators use third-party systems commercially. Operators may be subject to provider obligations if they subsequently affix their name or trademark to a high-risk AI system or significantly modify the system, Art. 25 AI Act.
  2. Providers must also take immediate corrective action if a high-risk AI system does not comply with the AI Act. In this case, providers may be obliged to recall and deactivate their system, Art. 20 (1) AI Act.
  3. Similar to Art. 27 GDPR, providers of high-risk AI systems established outside the EU must appoint an authorised representative established in the Union, Art. 22 AI Act.
  4. Art. 23 AI Act obliges importers of high-risk AI systems, in line with the dual control principle, to verify the fulfilment of various obligations, such as the documentation obligation under Art. 11 AI Act, before placing the system on the market. This check is supplemented by a further check by distributors, Art. 24 AI Act.
  5. Operators must continuously monitor and check the high-risk AI systems, see Art. 26 AI Act. On the one hand, they must take appropriate technical measures to be able to operate the system safely and, on the other hand, provide the human resources needed to enable the responsible persons to supervise it (Art. 26 (1) AI Act). This obliges operators to train their staff in the technology and handling of the system. According to Art. 26 (4) AI Act, operators are also obliged to check the quality of the input data: they must ensure that it is relevant and sufficiently representative with regard to the area of application of the system (see the sketch after this list). The legislator is once again evidently addressing systems that are based on artificial neural networks and are therefore susceptible to overfitting or underfitting, i.e. they adapt too closely or too loosely to certain data sets and can therefore deliver incorrect results under certain circumstances.
  6. Operators that use AI systems to assess the creditworthiness of individuals or intend to influence the pricing of their life and health insurance policies must carry out a fundamental rights impact assessment in accordance with Art. 27 of the AI Act. This instrument is intended to identify and assess the particular risks and effects of the system, especially on the fundamental rights of potentially affected persons. The systems will especially have to be examined for possible violations of the principle of equality under Article 3 of the German Basic Law [Grundgesetz, GG].
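
To make item 5 concrete, the following sketch shows the kind of input check an operator might run before feeding data into a system. The field names and value ranges are hypothetical; Art. 26 (4) AI Act does not mandate any specific mechanism.

    def validate_input(record, schema):
        """Check one input record against a schema of expected fields
        and value ranges before passing it to an AI system."""
        problems = []
        for field, (low, high) in schema.items():
            value = record.get(field)
            if value is None:
                problems.append(f"missing field: {field}")
            elif not low <= value <= high:
                problems.append(f"{field}={value} outside expected range [{low}, {high}]")
        return problems

    schema = {"age": (18, 100), "income": (0, 1_000_000)}
    print(validate_input({"age": 17, "income": 42_000}, schema))
    # -> ['age=17 outside expected range [18, 100]']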

2.3 Special rules for notifying authorities and certificates

Art. 28 to Art. 39 of the AI Act are aimed directly at notifying authorities and the notified bodies involved in the conformity assessment of high-risk AI systems. Furthermore, Art. 40 to Art. 49 of the AI Act contain regulations on conformity assessments, certifications and registrations for these high-risk AI systems.

3. Risk class "low risk"

AI systems that are neither categorised as high-risk nor prohibited are AI systems with a low or minimal risk. According to Art. 95 of the AI Act, the AI Office and the member states are to promote and facilitate the establishment of codes of conduct on voluntary compliance with certain provisions of the AI Act. In addition, transparency obligations apply to AI systems that interact with humans, Art. 50 AI Act: the user must be informed that they are interacting with AI, as the sketch below illustrates.
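
A minimal sketch of this transparency obligation: a chat front end that discloses the AI nature of the counterpart on the first turn. The function names are hypothetical, and generate_answer() stands in for any chatbot backend.

    AI_DISCLOSURE = "Note: you are interacting with an AI system, not a human."

    def generate_answer(user_message: str) -> str:
        # Placeholder backend; a real system would call a model here.
        return f"Echo: {user_message}"

    def reply(user_message: str, first_turn: bool) -> str:
        # Prepend the disclosure to the first answer of a session.
        answer = generate_answer(user_message)
        return f"{AI_DISCLOSURE}\n{answer}" if first_turn else answer

    print(reply("Hello!", first_turn=True))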

4. GPAI models

The new version of the AI Act contains the category of so-called General Purpose AI models ("GPAI models"), see Chapter V AI Act. These GPAI models are characterised by their general purpose, cf. Recital 97 AI Act. This also includes, for example, the GPT models on which ChatGPT is based. The AI Act treats GPAI models as the basis from which usable AI systems are developed: a GPAI model only becomes an AI system when it is combined with components such as a user interface or implemented in another AI system, cf. Recital 97 AI Act. Due to these further development possibilities and their wide range of applications, the use of these models can lead to various risks.
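
The model/system distinction can be pictured in a few lines of code: the bare model only becomes a usable system once a user-facing layer is wrapped around it. The class names below are purely illustrative and not taken from the Act.

    class GPAIModel:
        """Stands in for a general-purpose model, e.g. a text generator."""
        def generate(self, prompt: str) -> str:
            return f"[model output for: {prompt}]"

    class ChatSystem:
        """A GPAI model combined with a user interface component,
        i.e. an AI system in the sense of Recital 97 AI Act."""
        def __init__(self, model: GPAIModel):
            self.model = model

        def ask(self, user_input: str) -> str:
            # The interface layer around the bare model
            return self.model.generate(user_input.strip())

    print(ChatSystem(GPAIModel()).ask("What does the AI Act regulate?"))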

  1. The transparency obligations under Art. 50 of the AI Act and the special obligations for their providers under Art. 53 of the AI Act apply in principle to GPAI models. When using this AI, natural persons must in particular be able to recognise that they are interacting with AI.
  2. GPAI models may also be categorised as high-risk AI systems as a result of being implemented in another AI system, see Recital 85 AI Act. The corresponding extended programme of obligations therefore applies to them.
  3. In addition, Art. 51 of the AI Act addresses "GPAI models with a systemic risk", the putting into service of which triggers additional obligations to minimise the risk in accordance with Art. 55 of the AI Act. The categorisation is mainly based on whether these models have high-impact capabilities, but can also follow from a decision by the Commission if it considers a model to be equivalent thereto (see the sketch after this list). The most advanced GPAI models currently available reach the level of impact required for systemic-risk GPAI models. The obligations of GPAI model providers under Art. 53 of the AI Act include, for example, the creation and updating of technical documentation for the model, a summary of the data used to train the model and the introduction of a copyright compliance policy.
  4. Providers of GPAI models with a systemic risk must additionally carry out model evaluations, assess and mitigate the systemic risk and ensure an appropriate level of cybersecurity for the model and its physical infrastructure in accordance with Art. 55 of the AI Act.
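
The compute presumption behind the systemic-risk category can be sketched as follows: under Art. 51 (2) AI Act, a GPAI model is presumed to have high-impact capabilities when its cumulative training compute exceeds 10^25 floating point operations. The function below is a simplification; in practice, Commission designation decisions and future threshold adjustments also matter.

    # Presumption in Art. 51 (2) AI Act: cumulative training compute above
    # 10**25 floating point operations (FLOPs) indicates high-impact capabilities.
    SYSTEMIC_RISK_FLOP_THRESHOLD = 10 ** 25

    def presumed_systemic_risk(training_flops: float, designated_by_commission: bool = False) -> bool:
        """Simplified classification; the Commission can also designate a
        model as having systemic risk by decision (Art. 51 AI Act)."""
        return designated_by_commission or training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    print(presumed_systemic_risk(3e25))  # True: above the threshold
    print(presumed_systemic_risk(1e24))  # False: below, absent a Commission decision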

 

Are you interested in other topics relating to AI systems in your company? Oppenhoff's interdisciplinary AI Taskforce ensures from the outset that you comply with the requirements of the EU AI Act and the many other legal requirements for AI, and that AI systems can be used in a legally compliant manner.

 

 
