Since August 1, 2024, the EU AI Act (Regulation (EU) 2024/1689) has been in force as the binding legal framework for artificial intelligence (AI) in the European Union. The regulation aims to promote safe, transparent and human-centered AI systems – particularly in sensitive areas such as justice, healthcare and human resources management.

Who is affected?
The rules apply to all companies that develop, distribute or use AI systems – regardless of whether they are based in the EU – as long as their systems are placed on the EU market or their output is used in the EU.
What are the key requirements of the EU AI Act?
- Prohibited AI applications: Systems for manipulative influence, social scoring or real-time remote biometric identification in publicly accessible spaces are prohibited (the latter with narrow law-enforcement exceptions).
- High-risk AI: Applications that affect fundamental rights or public safety are subject to strict testing, documentation and transparency obligations.
- Generative AI: AI-generated or AI-manipulated content (such as deepfakes) must be clearly labeled as such.
- AI literacy obligation: Companies must ensure that all employees who work with AI systems have a sufficient level of AI literacy.
- Market surveillance & sanctions: Compliance is monitored by national market surveillance authorities. Violations can result in substantial fines – for prohibited practices, up to EUR 35 million or 7% of global annual turnover.
Implementation in practice
The first obligations have applied since February 2, 2025, covering prohibited practices and AI literacy; further requirements follow in stages through 2026 and 2027. Companies are obliged to adapt their internal processes, systems and training to the new requirements. Especially in high-risk areas, an in-depth, application-oriented understanding of how AI works, its risks and its limits is required.
Opportunities for companies
The EU AI Act takes a balanced approach: it is intended to enable innovation while protecting consumer rights and ethical standards. Companies that focus on transparency, compliance and qualification at an early stage secure competitive advantages and strengthen trust in their AI solutions.