Compliance with the AI Act: a challenge for businesses

Artificial intelligence (AI) has become an indispensable tool for many businesses, but it also raises ethical and legal challenges. ChatGPT, a generative AI capable of producing text and images at users' request, is a perfect illustration: the same technology can lend itself to abusive or misleading uses, such as manipulating the public with fake news, deepfakes or covert nudging.

In response to these challenges, the European Union has introduced the AI Act, a regulation designed to provide a framework for the use of AI in businesses. But how do you comply with this regulation? What are the obligations of the companies concerned? And what are the risks of non-compliance?


Compliance obligations under the AI Act

The obligations of the companies concerned depend on the level of risk associated with their AI system. The AI Act distinguishes four risk levels: unacceptable, high, limited and minimal (a schematic mapping of tiers to obligations follows the list below).

  • AI with unacceptable risk: AI systems and models presenting an unacceptable risk are quite simply banned and cannot be placed on the market within the European Union. Moreover, if the company is based in the EU, these products are also banned for export.
  • High-risk AI: high-risk AI systems may be placed on the market, provided they are CE marked, have a declaration of conformity and are registered in the EU database.
  • Limited-risk AI: if you wish to market a limited-risk artificial intelligence system, you simply have an obligation to inform users that the content has been generated by AI.
  • Minimal-risk AI: minimal-risk AI systems and models have no specific obligations to comply with. It is simply recommended to adopt, and adhere to, a code of conduct applicable to all AI systems with similar purposes.
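
To make the tier-to-obligation mapping concrete, here is a minimal Python sketch. The RiskTier names and the one-line obligation summaries are our own shorthand paraphrasing the list above, not terms defined by the regulation.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers distinguished by the AI Act."""
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Headline obligation per tier (shorthand summaries, not legal text).
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "CE marking, declaration of conformity, EU database registration.",
        RiskTier.LIMITED: "Transparency: inform users that content is AI-generated.",
        RiskTier.MINIMAL: "No specific obligation; a voluntary code of conduct is recommended.",
    }

    def obligations_for(tier: RiskTier) -> str:
        """Look up the headline obligation for a given risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))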


Examples

  • Credit scoring: a bank wants to build a credit-scoring AI to assess the creditworthiness of loan applicants. To do this, it collects sensitive data about its customers (credit history, income, employment, even health data), and the AI system determines the conditions for obtaining a loan. Such a system can lead to discrimination and bias, which is precisely why credit scoring is treated as high-risk (a simple bias check is sketched after these examples).
  • Artistic deepfakes: a video game company uses deepfake techniques to generate images and voices. Information explaining that the content has been generated by an AI must be provided, a typical limited-risk transparency obligation.
  • Anti-spam filter: an anti-spam system is an algorithm that classifies incoming e-mail into the inbox or the junk folder. In itself, this AI presents no specific danger to individuals and is minimal-risk, but adhering to a code of conduct remains advisable.
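
For the credit-scoring example, one common way to probe decisions for bias is the disparate impact ratio, sometimes called the 80% rule. The sketch below is purely illustrative: the toy data, group labels and the 0.8 threshold are assumptions of this sketch, and the AI Act does not prescribe any particular fairness metric.

    def disparate_impact_ratio(outcomes, groups, protected, reference):
        """Ratio of approval rates: protected group vs. reference group.

        outcomes: list of 1 (approved) / 0 (denied) decisions
        groups:   group label for each decision, aligned with outcomes
        """
        def approval_rate(label):
            decisions = [o for o, g in zip(outcomes, groups) if g == label]
            return sum(decisions) / len(decisions) if decisions else 0.0

        ref_rate = approval_rate(reference)
        return approval_rate(protected) / ref_rate if ref_rate else 0.0

    # Toy data: group "B" is approved far less often than group "A".
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
    groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
    ratio = disparate_impact_ratio(outcomes, groups, protected="B", reference="A")
    print(f"Disparate impact ratio: {ratio:.2f}")  # values below ~0.8 suggest possible bias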


Steps to compliance

Here are the steps you need to take to comply with the AI Act:

  1. Map AI systems: the compliance obligations set out in the AI Act apply to each AI system, not to the company as a whole. The first step is therefore to map all existing AI systems.
  2. Assess the risk levels of existing and planned AI systems and models: each risk level carries its own compliance requirements, so assess each system against criteria such as the sector of activity, the use cases, the power of the AI model and the types of data used.
  3. Classify AI systems by risk level: once assessed, classify each AI system as minimal, limited, high or unacceptable risk (steps 1 to 3 are illustrated by the inventory sketch after this list).
  4. Bring a high-risk system into compliance: if the system is high-risk, it must be brought into compliance through the following steps:
     • setting up a risk management system;
     • validating the quality and non-discrimination of the data sets feeding the system;
     • assessing accuracy and robustness, and guaranteeing an adequate level of cybersecurity;
     • guaranteeing human oversight;
     • complying with the information and transparency obligations;
     • keeping a register of the system's activities;
     • drawing up the technical documentation;
     • issuing the declaration of conformity;
     • affixing the CE marking;
     • registering the system in the EU database.
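
Steps 1 to 3 amount to building an inventory and grouping it by assessed risk tier. Here is a minimal sketch of what such an inventory could look like; the fields and the three example systems (borrowed from the examples above) are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        """One entry in the company-wide AI inventory (step 1)."""
        name: str
        use_case: str
        risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal" (step 2)

    inventory = [
        AISystem("credit-scoring", "assess loan applicants' creditworthiness", "high"),
        AISystem("game-deepfake", "generate in-game faces and voices", "limited"),
        AISystem("spam-filter", "route e-mail to inbox or junk folder", "minimal"),
    ]

    def classify(systems: list[AISystem]) -> dict[str, list[str]]:
        """Group system names by their assessed risk tier (step 3)."""
        by_tier: dict[str, list[str]] = {}
        for s in systems:
            by_tier.setdefault(s.risk_tier, []).append(s.name)
        return by_tier

    print(classify(inventory))
    # {'high': ['credit-scoring'], 'limited': ['game-deepfake'], 'minimal': ['spam-filter']}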


Penalties for non-compliance

Penalties depend on the risk level; in each case the fine is the greater of a fixed amount and a percentage of worldwide annual turnover (see the sketch after this list):

  • AI with unacceptable risk: placing on the market or putting into service AI with unacceptable risk in the EU can be punished by a fine of up to €35 million or 7% of worldwide annual turnover, whichever is greater.
  • High-risk AI: failure to comply with the declaration of conformity, CE marking and registration requirements is punishable by a fine of up to €15 million or 3% of worldwide annual turnover, whichever is greater.
  • Limited-risk AI: failure to comply with the information and transparency obligations likewise carries a fine of up to €15 million or 3% of worldwide annual turnover, whichever is greater; the lower tier of €7.5 million or 1% is reserved for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities.
  • Minimal-risk AI: the AI Act makes no provision for sanctions, since the adoption of a code of conduct is voluntary.
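
The "whichever is greater" rule simply means taking the maximum of the fixed amount and the turnover-based amount. A small sketch using the figures quoted above (the tier keys are our own labels):

    # Fine per tier: (fixed amount in EUR, share of worldwide annual turnover).
    PENALTIES = {
        "unacceptable": (35_000_000, 0.07),
        "high": (15_000_000, 0.03),
        "limited": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }

    def max_fine(tier: str, annual_turnover: float) -> float:
        """'Whichever is greater': max of the fixed amount and the turnover share."""
        fixed, share = PENALTIES[tier]
        return max(fixed, share * annual_turnover)

    # A company with EUR 2 billion turnover facing a prohibited-practice fine:
    print(f"EUR {max_fine('unacceptable', 2_000_000_000):,.0f}")  # EUR 140,000,000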

In conclusion, compliance with the AI Act is a challenge for businesses, but it can be achieved by following the steps set out above. Bear in mind that the penalties for non-compliance can be severe, so it is crucial to take these steps without delay.