What does the EU AI Act cover?

The Act itself defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The Act outlines four levels of AI risk:

  • Unacceptable risk: this applies to AI systems and practices deemed incompatible with EU values and fundamental human rights. This includes the use of deliberately manipulative or deceptive techniques; systems designed to exploit vulnerable groups of people; the use of “social scoring”; and the creation or expansion of facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
  • High risk: these are the most heavily regulated systems that are still permitted. Examples include biometrics, the use of AI in critical infrastructure, law enforcement, and the administration of justice. The Act requires these systems to meet stringent requirements before they can be placed on the market.
  • Limited risk: this includes AI systems where there is a risk of manipulation or deceit, for example “deepfakes” or chatbots. These systems must be transparent, and users must be informed that they are interacting with an AI system.
  • Minimal risk: this category is unregulated and covers all systems which do not fall into the above categories. However, the Act recommends following general principles of fairness and non-discrimination.

Who will be affected?

The Act applies to all providers of AI systems, meaning organizations which develop AI systems and place them on the market, or provide such systems under their own name or trademark, whether commercially or free of charge. It also covers importers and distributors of AI systems in the EU, as well as “deployers”, a term encompassing any natural or legal person using an AI system in a professional capacity.

Certain types of AI systems, whether based inside or outside the EU, are excluded from the EU AI Act:

  • Systems developed exclusively for military purposes
  • Systems used by public authorities or international organizations in third countries for law enforcement or judicial cooperation
  • Systems developed and used solely for scientific research and development
  • Use of AI by individuals in the course of purely personal, non-professional activities

The UK’s AI Regulation White Paper: a different approach

In contrast to the EU AI Act’s risk-based framework, the UK government has taken what it describes as a “proportionate, context-based” approach to AI regulation. The UK’s white paper consultation response, published in February 2024, is based on five cross-sectoral principles which regulators are advised to apply within their own sector-specific domains:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

This approach is intended to be both pro-innovation and pro-safety, allowing advances in cutting-edge applications such as generative AI and large language models (LLMs) whilst ensuring safeguards to mitigate harm. Unlike the EU, however, the UK government has made clear that it does not intend to introduce legislation at this stage, taking the view that the existing regulatory ecosystem provides sufficient checks and balances.

AI regulation: the future for the UK

According to a 2023 YouGov survey, 50% of business decision-makers felt that the future regulation of AI was a key risk or limitation of the technology. In addition, 46% expressed awareness of the consequences of using invalid or biased data in their AI systems, while 43% were concerned about data and cybersecurity risks. There is clearly a need for guidance on both the possibilities and the risks of AI systems.

As the first legislation of its kind, the EU AI Act may turn out to have a ripple effect on UK firms, even if they only operate in the domestic market. It’s expected to establish a global benchmark in AI legislation, much as the General Data Protection Regulation (GDPR) has done for data protection. As the EU AI Act gradually comes into force, we may well see UK legislation aligning with it for the sake of consistency. So, even though the UK is no longer in the EU, it makes sense for UK companies to consider their compliance with the requirements of the Act.