The European Union's AI Act has finally crossed the finish line
The European Council has approved the AI Act, which will establish comprehensive, risk-based rules governing the development and use of artificial intelligence systems across the EU, aiming to ensure AI is safe and trustworthy while protecting fundamental rights and promoting innovation.
In a groundbreaking move, the Council of the European Union gave its final vote of approval to the AI Act on Tuesday. The AI Act is a first-of-its-kind piece of legislation poised to set a global standard with its risk-based approach to AI regulation. Its primary purpose is to ensure that public and private actors develop and adopt AI safely and in a way that respects the rights of citizens across the European Union. The risk-based approach assigns stricter compliance rules to AI applications that pose a higher potential risk to society. The AI Act applies only within areas covered by EU law, and it provides exemptions for systems used exclusively for military, defense, and research purposes.
Broadly, the regulation recognizes three categories of risk. Systems posing limited risk will be subject only to light requirements, mostly concerning transparency. High-risk systems, on the other hand, will be authorized only if they satisfy a stricter set of requirements. Finally, the AI Act bans certain AI applications outright, including those used for cognitive behavioral manipulation, social scoring, and predictive policing based on profiling, as well as systems that use biometric data to categorize people by race, religion, or sexual orientation. General-purpose AI (GPAI) models are addressed separately, although they too are evaluated according to the level of risk they pose.
Enforcement of the AI Act will rest with a new governance architecture composed of several bodies: an AI Office within the Commission; a scientific panel of independent experts; an AI Board, with representatives of the member states, which will assist and advise the Commission and the member states on the effective application of the legislation; and an advisory forum of stakeholders providing technical expertise to the other governing bodies. The AI Act also details a system of fines for non-compliance. In most cases, the penalty is either a set percentage of the company's annual turnover in the previous financial year or a predetermined amount, whichever is higher. Small and medium-sized enterprises and startups will be subject to proportional administrative fines.
On the protection of citizens' rights, the AI Act requires entities providing public services to carry out a fundamental rights impact assessment before deploying a high-risk AI system. Such systems, along with the public service providers operating them, must be registered in an EU-wide database of high-risk systems. Additionally, entities using emotion recognition systems must inform individuals whenever they are exposed to such a system. To protect innovation under the new regulations, the AI Act provides for AI regulatory sandboxes that enable the testing, development, and validation of new AI systems in a controlled environment, as well as testing under real-world conditions.
Once signed by the presidents of the European Parliament and the Council, the legislation will be published in the EU's Official Journal and will enter into force twenty days after publication. The AI Act will then apply two years after its entry into force, subject to some exceptions for specific provisions.