Since August 1, 2024, the EU AI Act (Regulation (EU) 2024/1689) has been in force as the binding legal framework for the use of artificial intelligence (AI) in the European Union. The regulation aims to promote safe, transparent and human-centered AI systems – particularly in sensitive areas such as justice, healthcare and human resources management.


Who is affected?

The regulations apply to all companies that develop, distribute or use AI systems in the EU – regardless of whether they are based in the EU or not.

What are the key requirements of the EU AI Act?

  • Prohibited AI applications: Systems for manipulative or exploitative influencing of behavior, social scoring, and real-time remote biometric identification in publicly accessible spaces (with narrow exceptions) are prohibited.
  • High-risk AI: Applications that affect fundamental rights or public safety are subject to strict testing, documentation and transparency obligations.
  • Generative AI: Content created by generative AI must be clearly labeled as AI-generated.
  • AI literacy obligation: Companies must ensure that all employees who work with AI systems have a sufficient level of AI knowledge and skills.
  • Market surveillance & sanctions: Compliance is monitored by official controls. Violations can result in high fines.

Implementation in practice

The first implementation phase has been running since February 2, 2025. Companies are obliged to adapt their internal processes, systems and training to the new requirements. Especially in high-risk areas, an in-depth, application-related understanding of how AI works, its risks and its limits is required.

Opportunities for companies

The EU AI Act takes a balanced approach: it is intended to enable innovation while protecting consumer rights and ethical standards. Companies that focus on transparency, compliance and qualification at an early stage secure competitive advantages and strengthen trust in their AI solutions.

FAQ: Frequently asked questions about the EU AI Act

What is the EU AI Act?

It is an EU law that defines rules and obligations for AI systems, e.g. with regard to security, transparency, risk, responsibility and user rights.

Who does the EU AI Act apply to?

Almost everyone who develops, uses, imports or distributes AI systems in the EU – in both the private and the public sector. The EU AI Act applies not only to large corporations but also to smaller companies if they use AI that falls under its requirements. It also covers AI systems developed outside the EU if they are placed on the EU market or their output affects people in the EU.

What does the AI literacy obligation mean?

Providers and operators must ensure that people who work with AI systems have sufficient knowledge and skills. Depending on the context, technical complexity and area of application, training and further education may be required.

What are general-purpose AI models?

These are AI models that were not developed for one specific application – for example, large language or image models that can be used in many different contexts. Special transparency and documentation obligations apply to them.

How does the EU AI Act classify risk?

The Act distinguishes between risk classes: prohibited practices (unacceptable risk), high risk, limited risk and minimal risk. Systems that pose particular risks – for example in health, justice or critical infrastructure – must meet stricter requirements, e.g. in terms of transparency, monitoring and security.

Which AI applications are prohibited?

Certain applications are prohibited by law, e.g. AI that manipulates or exploits humans, or remote biometric identification under certain conditions.

When does the EU AI Act apply?

The EU AI Act entered into force on August 1, 2024. Many provisions take effect in stages: certain prohibitions and obligations apply from February 2, 2025, obligations for general-purpose AI from August 2, 2025, and the majority of high-risk obligations from August 2, 2026.

Are there exemptions or transition periods?

Yes, e.g. for existing systems that were already in use before certain dates, or for systems that pose only a low risk. Research, development and testing prior to market launch are also exempt under certain circumstances.

What penalties are there for violations?

Non-compliance with the EU AI Act can result in high fines – for the most serious infringements up to EUR 35 million or 7% of annual global turnover, whichever is higher. The exact amount depends on the severity of the infringement.

Who monitors compliance?

Oversight is shared between national authorities and the EU. Each member state must designate its own supervisory authorities and mechanisms; at EU level, the European AI Office is responsible in particular for supervising general-purpose AI models.

How does the AI Act relate to other laws, e.g. the GDPR?

There are overlaps, but also clear demarcations. The AI Act sets out additional obligations, particularly in areas such as transparency, non-discrimination and risk management, while some requirements – such as data protection – continue to be governed by other laws like the GDPR. Anyone using AI therefore often has to comply with several sets of rules at once.
