The EU AI Act and What It Means for You

With 72% of companies already using AI in some form (McKinsey), European regulators have adopted a legal framework establishing common rules for AI systems, aiming to protect EU citizens from the risks associated with this technology.

This means that every company that is based or operating in the EU, or that provides or uses AI to process the data of EU citizens, will have to review what the Act means for it and make the necessary changes.

Fines for non-compliance range from €7.5 million (or 1% of global annual turnover) up to €35 million (or 7% of global annual turnover, whichever is higher) for the most serious violations, so it is imperative for companies to review their practices.

Exceptions

Before we dive deeper, note that the Act does not apply to:

  • Individuals using AI as part of personal, non-professional activity
  • AI systems released under free and open-source licenses (excluding prohibited AI, high-risk AI, and GPAI models under certain conditions)
  • AI used for military, defence, or national security purposes
  • AI used by public authorities outside the EU and international organizations (where international agreements are in place)
  • AI systems and models developed and put into service solely for scientific research and development
  • Research, testing, and development activities on AI systems or models before they are placed on the market or put into service

What could this mean for you?

To put it simply, the Act recognises four types of AI systems based on their level of risk, and four roles a company can fall under based on its relationship to each AI system (provider, deployer, importer/distributor, or product manufacturer).

Different rules apply depending on the system's risk category and your role in relation to it.

Classification of AI systems

1. Prohibited AI Systems

AI applications that manipulate behaviour, exploit vulnerabilities, or use biometric identification in a harmful manner. Examples include social scoring systems and certain biometric categorization tools. These systems must be phased out within 6 months of the Act's entry into force (by 2 February 2025).

2. High-Risk AI Systems

Systems that pose significant risks to health, safety, or fundamental rights. They include AI used in critical infrastructure, law enforcement, and employment-related applications. High-risk systems must undergo rigorous conformity assessments and adhere to strict compliance requirements.

For companies, high-risk systems will require the most attention and action in terms of AI governance under the AI Act.

3. Limited Risk AI Systems

These systems pose limited risk and are subject to specific transparency requirements. Examples include chatbots and emotion recognition systems, where users must be informed that they are interacting with AI.

4. Minimal Risk AI Systems

These pose little to no risk and face minimal regulatory requirements. Examples include AI-powered games and spam filters.
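To make this classification concrete, below is a minimal sketch in Python of how a company might tag the systems in an internal AI inventory with the Act's four risk tiers. The tier names come from the Act itself; the `AISystem` fields and the triage rules are simplified, illustrative assumptions, not a legal test.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()   # e.g. social scoring, exploitative manipulation
    HIGH = auto()         # e.g. critical infrastructure, employment, law enforcement
    LIMITED = auto()      # transparency duties, e.g. chatbots, emotion recognition
    MINIMAL = auto()      # e.g. spam filters, AI-powered games

@dataclass
class AISystem:
    # Illustrative flags only; the Act's actual criteria are far more detailed.
    name: str
    does_social_scoring: bool = False
    used_in_employment: bool = False
    is_chatbot: bool = False

def triage(system: AISystem) -> RiskTier:
    """Rough, non-authoritative first-pass triage of an AI system."""
    if system.does_social_scoring:
        return RiskTier.PROHIBITED
    if system.used_in_employment:
        return RiskTier.HIGH
    if system.is_chatbot:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage(AISystem("CV screening tool", used_in_employment=True)))  # RiskTier.HIGH
```

A real assessment would of course work through the Act's actual criteria (and Annex III use cases) rather than a handful of flags, but an inventory structured like this makes it clear which systems need closer legal review.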

Role classification

Providers:

  • Develop an AI system or a general-purpose AI (GPAI) model, or have one developed.
  • Place it on the EU market or put it into service under their own name or trademark.

Deployers:

  • Have their place of establishment or are located in the EU.
  • Use an AI system under their authority, except where the system is used in the course of a personal, non-professional activity.

Importers/Distributors:

  • Make an AI system available on the EU market (importers bring in systems from providers established outside the EU; distributors supply them further down the chain).

Product Manufacturers:

  • Place an AI system on the EU market or put it into service together with their product, under their own name or trademark.
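Continuing the sketch above, the headline duties can be thought of as a lookup keyed on role. The role names follow the Act; the duty summaries below are condensed illustrations of well-known obligations (conformity assessment for providers, human oversight for deployers, and so on), not an exhaustive legal list.

```python
# Illustrative, condensed duty summaries per role (not an exhaustive legal list).
OBLIGATIONS = {
    "provider": [
        "risk management and technical documentation",
        "conformity assessment for high-risk systems",
        "registration of high-risk systems in the EU database",
    ],
    "deployer": [
        "use the system according to the provider's instructions",
        "ensure human oversight and monitor operation",
        "inform affected persons where required",
    ],
    "importer/distributor": [
        "verify the provider's conformity marking and documentation",
        "withhold non-conforming systems from the market",
    ],
    "product manufacturer": [
        "take on provider duties when the AI system ships under their own name or trademark",
    ],
}

for duty in OBLIGATIONS["deployer"]:
    print(duty)
```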

What can you do now?

If you are unsure what to do under the EU AI Act, or you suspect some of your systems may be high-risk, we will be happy to help you through our consulting & legal practice focused specifically on the EU AI Act and AI Governance. Just get in touch with us at contact@blindspot.ai.

The Act applies to all systems using AI, whether your company developed the system itself, bought it from a provider, or pays for a licence or subscription. Every single system should be assessed and handled in line with the Act.


Key Timelines for Implementation

  • 2 February 2025 (6 months after entry into force): The ban on prohibited AI practices applies.
  • 2 August 2025 (12 months): Governance obligations for general-purpose AI (GPAI) become applicable, and penalties start applying.
  • 2 August 2026 (24 months): The bulk of the Act's rules, including obligations for most high-risk systems, become enforceable.
  • 2 August 2027 (36 months): Obligations for high-risk AI systems regulated under existing EU harmonization legislation take effect.
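As a quick self-check, these milestones can be encoded as dates and compared against today. The dates follow from the Act's entry into force on 1 August 2024; the helper function itself is just an illustrative convenience.

```python
from datetime import date

# Milestones counted from the Act's entry into force on 1 August 2024.
MILESTONES = {
    date(2025, 2, 2): "ban on prohibited AI practices",
    date(2025, 8, 2): "GPAI governance obligations and penalties",
    date(2026, 8, 2): "bulk of the Act, including high-risk obligations",
    date(2027, 8, 2): "high-risk obligations under EU harmonization legislation",
}

def in_force(as_of: date | None = None) -> list[str]:
    """Return the milestones already applicable on the given date."""
    as_of = as_of or date.today()
    return [desc for deadline, desc in sorted(MILESTONES.items()) if deadline <= as_of]

print(in_force(date(2025, 9, 1)))
# ['ban on prohibited AI practices', 'GPAI governance obligations and penalties']
```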

 

That’s all for today’s issue. Should you have more questions regarding the EU AI Act, please feel free to let us know at contact@blindspot.ai.

Let’s stay in touch

Get in touch with us