Artificial Intelligence: Between Innovation and Regulatory Framework
Companies involved in generative artificial intelligence now have a reference document to help them prepare for the requirements of the AI Act. Originally expected in May, the code of practice for so-called "general-purpose" models was finally published by the European Commission on Thursday, July 10. This slight delay in no way detracts from its importance: the document, drafted by experts, clarifies, point by point, the new rules that companies will have to comply with starting August 2.
This applies to all organizations developing or deploying AI models, including major players in the sector such as OpenAI (ChatGPT), Anthropic (Claude), Google DeepMind (Gemini), and Microsoft (Copilot). For these organizations, compliance with the AI Act is becoming an essential part of their deployment strategy in Europe.
Adopted in 2024 and phased in from 2025 onward, the AI Act is the world's first comprehensive legislative framework regulating the use of artificial intelligence. Spearheaded by the European Commission, this landmark legislation aims to foster trust in AI by protecting fundamental rights while promoting innovation.
In June 2025, the European Union published a guide for companies [1]. Its objective: to help them understand their obligations, achieve compliance, and deploy responsible AI systems that comply with European law.
A risk-based approach: the cornerstone of the European system
The AI Act is based on a classification of AI systems according to their level of risk:
- Unacceptable risk: prohibited uses (e.g., social scoring, behavioral manipulation, real-time facial recognition without a legal basis).
- High risk: AI in sensitive sectors (health, education, employment, justice, law enforcement), subject to strict requirements (auditing, documentation, human oversight).
- Limited risk: conversational systems, chatbots, or deepfakes simply need to indicate that they are artificial.
- Minimal risk: applications such as video games or product recommendation engines, which remain subject only to general rules such as the GDPR.
This classification allows for a phased approach, proportionate to the potential impact of AI systems [2].
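The four-tier classification above maps each risk level to a distinct set of obligations. As a purely illustrative sketch (the tier names and obligation lists below paraphrase this article; the AI Act itself defines them in legal text, not code), the mapping could be modeled like this:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described above (illustrative names)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical obligation summaries, paraphrasing the list above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["auditing", "technical documentation", "human oversight"],
    RiskTier.LIMITED: ["disclose artificial nature to users"],
    RiskTier.MINIMAL: ["general law only (e.g. GDPR)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is the proportionality the article describes: the higher the tier, the heavier the obligation list, with the top tier simply prohibited.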
Operational guidelines for companies
The application guide published by the European Commission provides practical tools to help companies comply with the AI Act:
- Risk-based compliance checklists,
- Technical and impact documentation templates,
- Sector-specific recommendations (healthcare, HR, marketing, manufacturing),
- Examples of best practices and anonymized use cases.
Particular attention is paid to risk assessment, the principle of transparency, data traceability, and human involvement in the decision-making process.
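A risk-based compliance checklist of the kind the guide provides can be thought of as a set of yes/no questions tracked to completion. The sketch below is hypothetical: the item names and questions are invented for illustration and are not taken from the Commission's templates.

```python
# Hypothetical checklist items echoing the themes above: risk assessment,
# transparency, data traceability, and human involvement.
CHECKLIST = {
    "risk_assessment_documented": "Has a risk assessment been carried out and recorded?",
    "transparency_notice": "Are users informed they are interacting with an AI system?",
    "data_traceability": "Can training and input data sources be traced?",
    "human_oversight": "Can a human intervene in the decision-making process?",
}

def outstanding_items(answers: dict[str, bool]) -> list[str]:
    """List checklist questions that are unanswered or answered 'no'."""
    return [q for key, q in CHECKLIST.items() if not answers.get(key, False)]

# Example: two items completed so far, two still outstanding.
status = {"risk_assessment_documented": True, "transparency_notice": True}
remaining = outstanding_items(status)
```

Keeping the checklist as data rather than scattered conditions makes it easy to extend with the sector-specific items (healthcare, HR, marketing, manufacturing) the guide recommends.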
Which companies are affected?
All companies that design, deploy, or integrate AI systems as part of their operations within the European Union are affected, including:
- AI system providers,
- professional users,
- technology integrators,
- non-European companies targeting the EU market.
Special arrangements have been put in place for SMEs and startups to ensure that innovation is not hindered: support from innovation hubs, simplified guidelines, and technical and legal assistance.
Which skills will be needed?
The entry into force of the AI Act transforms the governance of AI in business:
- Lawyers and Data Protection Officers (DPOs) will be required to ensure compliance with the law.
- Data scientists and developers will need to develop systems that are well-documented, auditable, and traceable.
- Risk and compliance managers will oversee the impact assessment.
New professions are emerging: algorithm auditor, AI impact assessor, and advisor on the ethical alignment of AI systems.
The ethical and strategic challenges of the European framework
The AI Act is more than just a legal framework: it embodies a political and ethical vision of artificial intelligence, one in which AI is:
- explainable,
- non-discriminatory,
- controllable by humans,
- respectful of fundamental rights.
This system of trust can become a competitive advantage for European companies: those that adapt quickly will be seen as ethically credible in a global market seeking benchmarks.
Is the AI Act the beginning of an era of active regulation of artificial intelligence?
With the AI Act, Europe becomes the first region to establish a clear and binding framework for the use of artificial intelligence. This regulatory initiative is accompanied by a commitment to engage in dialogue with companies, researchers, and civil society.
Other countries (Canada, Brazil, Japan) are following suit. The AI Act could thus become a global standard. But its success will depend on how it is implemented in practice: it is up to each organization to adapt it to its own context, so that AI becomes a tool for trust and progress, rather than a source of imbalance [3].
References
1. European Commission. (2025). Practical guidance for compliance with the AI Act.
https://digital-strategy.ec.europa.eu/
2. AI Watch. (2024). Understanding the EU’s risk-based approach to AI regulation.
https://ai-watch.ec.europa.eu/
3. Future of Life Institute. (2025). How the AI Act is shaping international governance.
https://futureoflife.org/