European Parliament and Council representatives recently concluded roughly 40 hours of final negotiations on the Artificial Intelligence Act. The bill is an effort to ensure that AI applications in Europe (and potentially the rest of the world) are safe, environmentally sustainable, and respectful of democracy and fundamental rights, while still fostering innovation and business development. In particular, the AI Act is intended to mitigate the risks inherent to the most powerful AI systems in especially vulnerable areas such as healthcare, education, insurance, banking, surveillance, and public safety. The bill also bans applications that pose an unacceptable risk and sets monetary fines as penalties for non-compliance.
Banned applications in the European Union include biometric categorization systems that use sensitive characteristics (e.g., race, sexual orientation, or religious or political affiliation), untargeted scraping of facial images to create facial recognition databases, predictive policing, emotion recognition in workplaces and educational institutions, social scoring, and any AI system aimed at manipulating people's behavior or exploiting their vulnerabilities due to age, disability, or socioeconomic background. However, the AI Act will allow narrow, targeted use of biometric recognition systems in emergencies, such as the targeted search for victims, the prevention of terrorist threats, and the targeted localization of suspects of specific crimes listed in the regulation. Law enforcement entities can also deploy high-risk systems not yet subjected to conformity tests in an emergency. The strict ban on remote biometric identification (RBI) systems comes after a fierce debate on regulating RBI in public spaces, driven especially by concerns that RBI could lead to mass surveillance.
General-purpose AI (GPAI) systems and the foundation models they are built on are now subject to legally binding transparency, safety, and sustainability requirements. Among other obligations, the companies behind these models must publish technical documentation, comply with EU copyright law, and release detailed summaries of their training data. Transparency rules also require disclosing when people are interacting with a chatbot, a biometric categorization system, or an emotion recognition system. Tech companies will additionally be responsible for labeling deepfakes and AI-generated content, and must design AI systems so that AI-generated media can be recognized as such.
Moreover, high-impact systems and models will be subject to even stricter rules, including model evaluations and adversarial testing, in addition to reporting on their energy efficiency. However, the MIT Technology Review reports that EU officials would not confirm whether models such as GPT-4 or Gemini fall into the high-impact category, since it is currently up to the companies themselves to determine whether they must adhere to the stricter set of regulations. Organizations that provide services in sensitive areas such as insurance and banking will also see their responsibilities grow: they will be required to assess the impact of their use of AI on people's fundamental rights.
Finally, the AI Act includes measures to foster innovation and protect small and medium-sized enterprises (SMEs). These include promoting regulatory sandboxes and testing under conditions that simulate real-world use, so that the safety of AI systems can be validated before they are placed on the market. Another measure introduces limited, clearly specified derogations meant to support SMEs and ease the administrative burden that puts them at a disadvantage relative to their larger competitors.
A new European AI Office, advised by a scientific panel of independent experts, will oversee compliance with the AI Act. The panel will contribute new evaluation methodologies, advise on the classification of high-impact models, and monitor possible material risks associated with foundation models. The AI Board, composed of representatives of the Member States, will continue to function as a coordination platform and advisory body, while an advisory forum of stakeholders will provide technical advice to the Board.
Non-compliance with the regulations leads to fines calculated as either a fixed amount or a percentage of the company's global annual turnover from the previous financial year, whichever is higher. The current penalties are EUR 35 million or 7% of turnover for banned AI applications, EUR 15 million or 3% for violations of the Act's obligations, and EUR 7.5 million or 1.5% for supplying incorrect information. The bill also provides for proportionate caps for SMEs and startups. The provisional agreement states that the AI Act should apply two years after it enters into force. For the agreement to become regulation, the Parliament and the Council must endorse the full text: the Parliament's Internal Market and Civil Liberties committees will soon vote on the agreement, as will the Committee of Permanent Representatives.
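As a purely illustrative sketch (not legal guidance), the "whichever is higher" fine structure can be expressed as taking the maximum of the fixed amount and the turnover-based amount for each tier; the tier names below are made up for readability:

```python
# Illustrative sketch of the AI Act's fine tiers (not legal guidance).
# The penalty is the HIGHER of a fixed amount and a share of the company's
# global annual turnover from the previous financial year.

FINE_TIERS = {
    # hypothetical tier names: (fixed amount in EUR, share of turnover)
    "banned_application": (35_000_000, 0.07),
    "obligation_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the applicable cap: the higher of the two amounts."""
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * annual_turnover_eur)

# A company with EUR 1 billion in turnover facing a banned-application
# penalty: 7% of turnover (EUR 70M) exceeds the EUR 35M fixed amount.
print(max_fine("banned_application", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates; note the Act also foresees proportionate caps for SMEs and startups, which this sketch does not model.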