On Friday, the European Union reached a landmark agreement on comprehensive rules for artificial intelligence (AI), making it the first Western power to establish a major regulatory framework for the technology. Throughout the week, key EU institutions negotiated over proposals before reaching a consensus. Central points of contention included how to regulate the generative AI models that underpin tools like ChatGPT, and whether to restrict biometric identification tools such as facial recognition and fingerprint scanning.
Germany, France, and Italy opposed direct regulation of generative AI models, known as "foundation models," favoring instead self-regulation by the companies that develop them, guided by government-introduced codes of conduct. Their chief concern is that overly strict rules could hamper Europe's ability to compete with tech giants in China and the United States. Notably, Germany and France are home to some of Europe's most promising AI startups, including DeepL and Mistral AI.
The EU AI Act is a groundbreaking piece of legislation, the first specifically targeting AI, and it is the culmination of years of European efforts to regulate the technology. Its origins date back to 2021, when the European Commission first proposed a common regulatory and legal framework for AI. The law categorizes AI by risk level, from "unacceptable," meaning technologies that must be banned outright, down to high-, medium-, and low-risk classifications. Generative AI became a prominent topic late last year following the public release of OpenAI's ChatGPT. Because that release came after the initial 2021 proposals, lawmakers were prompted to rethink and adjust their approach.
AI experts and regulators were caught off guard by the capabilities of ChatGPT and other generative AI tools such as Stable Diffusion, Google's Bard, and Anthropic's Claude, which can produce sophisticated, remarkably humanlike output from simple prompts by drawing on vast training datasets. These tools have drawn criticism, however, for their potential to disrupt employment, generate discriminatory language, and raise privacy concerns.