The European Regulation on Artificial Intelligence: A New Direction for the EU
On 21 May 2024, the European Union (EU) reached a new milestone by formally adopting the Artificial Intelligence Regulation (AI Act), after years of intense negotiations. This regulation aims to establish clear rules for the use of artificial intelligence (AI) in the EU, guaranteeing the safety, transparency and accountability of AI systems.
AI Act – A difficult negotiation process
The negotiation process met with considerable opposition, particularly from lobbyists and AI companies who feared that the regulation would hinder the development of the French Tech ecosystem. The European Parliament nevertheless approved the regulation by a large majority, with 523 votes in favour out of 618 cast.
AI Act – Stakeholders and supervisory authorities
The regulation essentially concerns five main categories of actor: providers, integrators, importers, distributors and organisations using AI. Each EU Member State is responsible for applying and enforcing the regulation within its borders, and must designate a national supervisory authority.
AI Act – A new hierarchy of risks
The regulation defines an artificial intelligence system (AIS) as an automated system designed to operate with varying levels of autonomy and that infers, from the input it receives, outputs such as recommendations or decisions that can influence physical or virtual environments. AIS are classified into four levels according to the risk they pose: unacceptable risk, high risk, limited risk and minimal risk.
AI Act – Cybersecurity requirements
The regulation sets out a number of cybersecurity requirements for high-risk AIS, including:
- Risk management
- Security by design
- Technical documentation
- Data governance
- Record-keeping (logging)
- Resilience
- Human oversight
AI Act – General-purpose AI models
The regulation introduces a new term: general-purpose AI models (GPAI). These are defined as AI models that display significant generality and are capable of competently performing a wide range of distinct tasks. GPAI models must meet specific requirements, particularly in terms of transparency and respect for copyright.
AI Act – Measures to be implemented
If an AIS falls into the high-risk category, it must comply with numerous requirements, particularly in terms of cybersecurity. The measures to be implemented include adversarial testing, protecting the model's physical infrastructure and reporting serious incidents.
AI Act – Compliance with the AI Act
To prepare for compliance with the AI Act, follow the risk-based approach required by the legislation: first, take an inventory of your AI use cases; then classify each AIS by risk level; finally, identify the measures applicable to each AIS according to its risk level.
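The inventory-then-classify workflow above can be sketched as a small script. Everything here is illustrative: the risk tiers come from the AI Act, but the example systems and the grouping logic are hypothetical assumptions for the sketch, not legal guidance.

```python
# Hypothetical sketch of the risk-based inventory step: list AI use
# cases, tag each with an AI Act risk tier, then group them so the
# applicable measures can be identified per tier. The example systems
# and their classifications are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystem:
    name: str
    use_case: str
    risk: RiskLevel


# Step 1: inventory of use cases (hypothetical examples)
inventory = [
    AISystem("cv-screener", "recruitment", RiskLevel.HIGH),
    AISystem("support-chatbot", "customer service", RiskLevel.LIMITED),
    AISystem("spam-filter", "email filtering", RiskLevel.MINIMAL),
]

# Steps 2-3: group systems by risk tier; each tier maps to a
# different set of measures (high-risk gets the full requirement set)
by_risk: dict[RiskLevel, list[str]] = {}
for system in inventory:
    by_risk.setdefault(system.risk, []).append(system.name)

for tier, names in by_risk.items():
    print(f"{tier.value}: {names}")
```

In practice the classification itself is the hard part and needs legal review; the value of a structure like this is simply keeping the inventory and its risk tiers in one auditable place.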
AI Act – Next steps
The AI Act's obligations will apply in stages over the coming years. Companies and organisations should prepare for compliance now by putting in place the measures needed to ensure the security and transparency of their AI systems.
In conclusion, the European regulation on artificial intelligence is an important step towards creating a regulatory framework for the use of AI in the EU, and early preparation will make compliance far easier once its obligations begin to apply.