EU AI Act: what is it exactly?

At the end of 2022, the development of artificial intelligence (AI) attracted great public attention with the release of ChatGPT (Chat Generative Pre-trained Transformer). GPT-4 has been available since March 2023, while the first GPT model was released in June 2018. Back in March of that year, the Commission of the European Union (EU) had set up an expert group to develop, among other things, a proposal for ethics guidelines for AI.

This work has since developed into a proposal for a European law on artificial intelligence, the EU AI Act. In June 2023, the European Parliament adopted its negotiating position on the law; talks on its final form are now ongoing with the member states in the Council of the EU.

This initiative - the first of its kind in the world - has two main objectives: firstly, to promote the development of AI and strengthen the EU's global competitiveness in this area; secondly, to ensure that the technology is both human-centred and trustworthy. To achieve these objectives, artificial intelligence developed and used in the EU must comply with the rights and values of the European Union. The EU AI Act is intended to create the legal framework for this.

Note, however, that the proposed law is a regulation: unlike a directive, it is directly applicable in the EU member states and does not have to be transposed into national law. Once it enters into force, it is binding and enforceable in each member state without further national legislation.

In brief: the main points of the law

The essence of the law is the classification of AI systems according to their risks, especially those relating to health, safety and the fundamental rights of people. Accordingly, the EU AI Act stipulates four levels of risk:

  • Unacceptable
  • High
  • Limited
  • Minimal

AI systems with unacceptable risk are prohibited because they pose a threat to people, for example by manipulating their behaviour: voice-activated toys that can encourage dangerous behaviour in children are one such case. “Social scoring” - creating risk profiles of individuals based on surveillance - is also classed as unacceptable, as are real-time remote biometric identification systems, such as facial recognition. In principle, this classification covers all applications that violate the rights to dignity, non-discrimination, equality and justice.

High-risk AI systems are those that could negatively affect safety or fundamental rights. This category covers software for aviation, cars, medical devices and lifts, insofar as it falls under EU product safety legislation, but also systems from eight specific areas, which must be registered in an EU database. These are as follows:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

All high-risk AI systems are to be assessed both before market launch and throughout their complete life cycle.

AI systems with limited risk must be designed transparently, so that the people who encounter them can recognise that they are interacting with an AI. For example, systems such as ChatGPT should disclose that their content is AI-generated and at the same time include safeguards to prevent the generation of illegal content.

Minimal or low-risk AI systems must comply with existing legislation. Examples include spam filters and video games.
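
To summarise the four-tier logic in one place, here is a purely illustrative Python sketch. The tier names and obligation summaries are paraphrased from the description above, not statutory language, and classifying a real system is of course a legal question, not a lookup:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the draft EU AI Act, as described above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrased obligations per tier (illustrative, not statutory language).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright.",
    RiskTier.HIGH: ("Assessment before market launch and throughout the "
                    "life cycle; registration in an EU database."),
    RiskTier.LIMITED: "Transparency: disclose that content is AI-generated.",
    RiskTier.MINIMAL: "Comply with existing legislation.",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the summarised obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
```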

Involvement in prohibited AI practices can result in a fine of up to 40 million euros or up to seven percent of a company's global annual turnover, whichever is higher.

This goes much further than Europe's most important data protection law to date, the General Data Protection Regulation (GDPR), which allows for fines of up to 20 million euros or up to four percent of a company's global annual turnover. In the case of multiple violations, however, the fines can add up, as Facebook's parent company Meta has experienced repeatedly: by May 2023, the company had incurred a total of 2.5 billion euros in fines for various violations of GDPR rules.
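
To make the two caps comparable, here is a minimal arithmetic sketch. The percentage thresholds and fixed amounts are the ones cited above; the turnover figure is invented purely for illustration:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum possible fine: the higher of a fixed amount
    and a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Hypothetical company with 10 billion euros in global annual turnover.
turnover = 10_000_000_000

# Proposed EU AI Act cap for prohibited practices: 40 million euros or 7%.
ai_act_cap = max_fine(turnover, 40_000_000, 0.07)  # 700 million euros

# GDPR cap: 20 million euros or 4%.
gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # 400 million euros

print(f"AI Act cap: {ai_act_cap:,.0f} EUR / GDPR cap: {gdpr_cap:,.0f} EUR")
```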

Who does the EU AI Act affect?

The law applies primarily to providers and operators of AI systems. In addition, the EU Parliament's current draft provides for importers and distributors of AI systems to be included, as well as EU-based representatives of AI system providers.

Providers are parties who develop AI systems in order to place them on the market or operate them in the EU (e.g. OpenAI). This applies regardless of whether they are based within the EU. In addition, the bill aims to cover providers who operate AI systems outside the EU if the developer or distributor of the AI system is based in the EU.

Operators of AI systems, on the other hand, are natural or legal persons who use AI as part of their professional activities. They may use APIs (application programming interfaces) to embed AI products into their own products, or simply use AI systems as internal tools. Providers and operators of AI systems based outside the EU may also be covered if the results generated are to be used in the EU.
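
To make the operator role concrete: a company that embeds a third-party AI model into an internal tool via an API would typically act as an operator. The following Python sketch assumes a hypothetical provider endpoint; the URL, API key variable, request parameters and response shape are invented for illustration and do not correspond to any real provider's API:

```python
import os

import requests

# Hypothetical third-party AI provider endpoint; the URL and payload
# shape are invented for illustration, not a real API.
API_URL = "https://api.example-ai-provider.com/v1/generate"
API_KEY = os.environ["EXAMPLE_AI_API_KEY"]

def summarise_ticket(ticket_text: str) -> str:
    """Embed an external AI model into an internal support tool:
    send the ticket text to the provider and return the summary."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"task": "summarise", "input": ticket_text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]

# Using the AI system as part of professional activities (e.g. in a
# customer support workflow) is what makes the company an "operator".
print(summarise_ticket("Customer reports login failures since Monday."))
```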

People who use AI systems in the course of private, non-professional activities are not covered by the law.

The same exemption applies to certain categories of AI systems, such as those used for research, testing and development purposes, and software developed or used exclusively for military purposes.

What does the law mean for companies?

Anyone who develops, deploys or professionally uses AI in the EU, or for an EU clientele, will be affected by the law to some extent. However, given how the law defines high-risk AI systems, most commercial applications are unlikely to fall into that category: AI systems developed for music editing or video games, for example, are to be distinguished from systems intended to influence people via social media or to influence voters in political campaigns.

However, the increasing popularity of generative AI systems means that more developers may fall within the scope of the law. If the current amendments are passed, these developers (but not operators) will probably have to comply with certain requirements even if their systems do not fall into the high-risk category.