
European Artificial Intelligence Act Comes into Force
- Friday, 2 August 2024, 6:19 AM GMT
Brussels: Europe and the Arabs
According to a statement issued by the European Commission headquarters in Brussels, the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation of artificial intelligence, has come into force. The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. The Act aims to create a harmonised internal market for AI in the EU, encourage the adoption of the technology and foster a supportive environment for innovation and investment.
The AI Act provides a forward-looking definition of AI, based on the EU’s product-safety and risk-based approach, and distinguishes four levels of risk:
Minimal risk: Most AI systems, such as AI-powered recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act because of their minimal risk to citizens’ rights and safety. Companies can voluntarily adopt additional codes of conduct.
Specific transparency risks: AI systems such as chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deepfakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and can be detected as artificially generated or manipulated.
High risk: AI systems identified as high risk will have to comply with strict requirements, including risk mitigation systems, high quality datasets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. High risk AI systems include, for example, AI systems used in recruitment, to assess whether someone is eligible for a loan, or to operate autonomous robots.
Unacceptable risk: AI systems that pose a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow “social scoring” by governments or companies, and certain applications of predictive policing. In addition, certain uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, certain systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
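The four tiers above can be summarised as a simple lookup from risk level to example obligations. The sketch below is purely illustrative: the tier names and obligation summaries are paraphrased from this article, and the data structure itself is a hypothetical convenience, not an official taxonomy from the Act.

```python
# Illustrative sketch only: maps the article's four risk tiers to
# paraphrased example obligations. Not an official AI Act taxonomy.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    TRANSPARENCY = "specific transparency"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


OBLIGATIONS = {
    RiskTier.MINIMAL: [
        "no mandatory obligations; voluntary codes of conduct",
    ],
    RiskTier.TRANSPARENCY: [
        "disclose that the user is interacting with a machine",
        "label AI-generated content such as deepfakes",
        "mark synthetic media in a machine-readable format",
    ],
    RiskTier.HIGH: [
        "risk-mitigation system", "high-quality datasets",
        "activity logging", "detailed documentation",
        "human oversight", "robustness, accuracy and cybersecurity",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations associated with a risk tier."""
    return OBLIGATIONS[tier]
```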
To complement this regime, the AI Act also introduces rules for so-called general-purpose AI models, which are highly capable AI models designed to perform a wide range of tasks such as generating human-like text. General-purpose AI models are increasingly being used as components of AI applications. The AI Act will ensure transparency along the value chain and address potential systemic risks of the most capable models.
Implementation and enforcement of AI rules
Member states have until 2 August 2025 to designate competent national authorities, which will oversee the application of the AI rules and carry out market surveillance activities. The Commission’s AI Office will be the lead body for implementation of the AI Act at EU level, as well as the enforcer of the rules on general-purpose AI models.
Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure uniform application of the AI Act across EU Member States and will act as the main body for cooperation between the Commission and Member States. A scientific panel of independent experts will provide technical advice and input on implementation. In particular, this panel can issue alerts to the AI Office on risks associated with general-purpose AI models. The AI Office can also receive guidance from an advisory forum composed of a variety of stakeholders.
Companies that do not comply with the rules will be fined. Fines can be up to 7% of annual global turnover for violations of the prohibited AI practices, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
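The percentage caps above amount to simple arithmetic on a company's turnover. The sketch below is illustrative only: the function name and category labels are my own, it uses only the percentages cited in this article, and the Act also sets fixed euro-amount ceilings that are not covered here.

```python
# Illustrative only: upper-bound fine as a percentage of annual global
# turnover, using only the caps cited in the article. (The Act also
# defines fixed-amount ceilings not modelled here.)
FINE_RATES = {
    "prohibited_ai_practice": 0.07,   # up to 7% of turnover
    "other_obligation": 0.03,         # up to 3%
    "incorrect_information": 0.015,   # up to 1.5%
}


def max_fine(violation: str, annual_global_turnover_eur: float) -> float:
    """Upper bound of the fine for a violation category, in euros."""
    return FINE_RATES[violation] * annual_global_turnover_eur


# Example: a company with EUR 2 billion turnover violating a prohibition
# faces a fine of up to EUR 140 million.
print(max_fine("prohibited_ai_practice", 2_000_000_000))  # 140000000.0
```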
Most of the AI Act’s rules will apply from 2 August 2026. However, the ban on AI systems deemed to pose an unacceptable risk will already apply six months after the Act’s entry into force, while the rules on so-called general-purpose AI models will apply after 12 months.
To bridge the gap between the transition period and full implementation, the Commission has launched the AI Pact. This initiative calls on AI developers to voluntarily adopt the key commitments of the AI Act ahead of the legal deadlines.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented and to facilitate common regulatory instruments.
On 9 December 2023, the Commission welcomed the political agreement on the AI Act. On 24 January 2024, the Commission launched a package of measures to support European startups and SMEs in developing trustworthy AI. On 29 May 2024, the Commission unveiled the AI Office. On 9 July 2024, the amended EuroHPC JU regulation entered into force, allowing the creation of AI factories. This enables the use of supercomputers dedicated to AI to train general-purpose AI (GPAI) models.