The EU's AI Act officially came into force on August 1

The EU's AI Act, which came into force on August 1, 2024, introduces a risk-based classification system for AI applications. In preparation for the full enforcement of the AI Act, the European Commission has launched a consultation for the first General-Purpose AI Code of Practice.

by Ellie Ramirez-Camara
Photo by Christian Lue / Unsplash

The European Union's AI Act has been in force since Thursday, August 1, 2024, although most of its rules will not apply until August 2, 2026. Two salient exceptions are the ban on AI applications posing an unacceptable risk, which takes effect in six months, and the rules for general-purpose AI models, which will begin being enforced in twelve months. To bridge the period between the AI Act's entry into force and its full implementation, the European Commission has launched the AI Pact to encourage early compliance with the Act's requirements.

The AI Pact's first call for interest was released in November 2023 and received replies from hundreds of organizations. The Pact rests on two pillars. The first is a network of interested organizations that collaborate and share knowledge and experience: within this framework, the Commission's AI Office organizes workshops to improve organizations' understanding of the AI Act and of their specific responsibilities, and in turn receives key feedback on the best practices organizations develop and the challenges they face as they work towards compliance. The second pillar consists of supporting organizations willing to make pledges, or early commitments to actions that lead to meeting the AI Act's requirements, and having them report on those pledges regularly.

Broadly, the AI Act classifies AI systems based on the level of risk each poses. Most non-generative AI applications, such as recommender systems and spam filters, fall into the minimal-risk category and are not subject to additional obligations. Generative AI applications are not deemed high-risk per se, but many, including chatbots and media generators, belong to a "specific transparency risk" category: most obligations at this level involve letting users know that they are interacting with a machine, or that they are being subjected to emotion recognition or biometric categorization systems. This level also requires that deepfakes and other AI-generated synthetic media be labeled in a machine-readable and easily detectable way.

High-risk applications include systems for recruitment, financial decision-making, and autonomous robots; these will be subject to strict requirements outlined in the Act. Finally, unacceptable-risk applications, which are banned under the AI Act with some narrow exceptions, include systems that manipulate human behavior, systems that enable social scoring by governments or other organizations, predictive policing systems, and real-time remote biometric identification for law enforcement purposes.

A set of rules for general-purpose AI models will complement the risk-based classification system outlined in the AI Act. General-purpose AI models include the proprietary and open foundation models capable of generating text, visual media, and audio that power many popular generative AI applications, including chatbots and online media generators. A few days before the AI Act's entry into force, the European Commission published a consultation on the topics to be covered by the first General-Purpose AI Code of Practice, which will be drafted between September 2024 and April 2025 and will spell out the rules binding general-purpose AI model providers.

The consultation consists of a questionnaire, available in English, that is open for submissions until September 10, 2024. The AI Office plans to draw on the responses to the consultation's targeted questions to prepare a draft of the Code of Practice and to publish a summary of the results.
