The European Union’s parliament made a historic move on Wednesday by approving the world’s first comprehensive regulatory framework for artificial intelligence (AI), a domain at the forefront of technological investment.
The EU had brokered a provisional political consensus in early December, which was subsequently endorsed during the Parliament’s session on Wednesday. The measure passed with 523 votes in favor, 46 against, and 49 abstentions.
Thierry Breton, the European Commissioner for the internal market, hailed the development as groundbreaking, stating that “Europe is NOW a global standard-setter in AI.”
Roberta Metsola, President of the European Parliament, echoed Breton’s sentiments, describing the act as trailblazing. She emphasized that it would foster innovation while safeguarding fundamental rights, noting that AI is already deeply embedded in daily life and would now be subject to legislative oversight.
Dragos Tudorache, a lawmaker who played a crucial role in negotiating the agreement within the EU, welcomed the accord but highlighted that the significant challenge lies in its implementation.
First proposed in 2021, the EU AI Act classifies AI technologies into different risk categories, ranging from “unacceptable,” which warrants a ban, to varying levels of high, medium, and low risk. The regulation is anticipated to come into effect at the end of the legislative term in May, following final checks and endorsement from the European Council. Implementation will then be phased in from 2025 onwards.
Previously, some EU member states advocated for self-regulation over government-led restrictions, fearing that stringent regulations might impede Europe’s ability to compete with Chinese and American tech firms. Notable detractors included Germany and France, which host promising AI startups.
The EU has been striving to keep pace with the societal impact of technological advancements and the dominance of key players in the market. Just last week, the Union implemented landmark competition legislation aimed at curbing the power of major U.S. tech companies.
Under the Digital Markets Act, the EU has the authority to address anti-competitive practices and compel tech giants to open up their services to foster competition and user choice.
Concerns have mounted regarding the potential misuse of AI, particularly in the context of deepfakes, which can generate deceptive content such as photos and videos. Governments are particularly wary of these technologies being exploited during crucial global elections.
In response, some AI companies have taken voluntary measures to combat disinformation. For instance, Google announced restrictions on election-related queries for its Gemini chatbot, a move aimed at curbing the spread of misinformation.
Dragos Tudorache emphasized the significance of the AI Act in placing humans in control of technology, enabling economic growth, societal progress, and the realization of human potential. He underscored that the journey does not end with the enactment of the AI Act but marks the beginning of a new governance model centered around technology.
Legal experts lauded the EU’s initiative as a major milestone in international AI regulation, suggesting that it could serve as a model for other countries. Steven Farmer, an AI specialist at Pillsbury, noted the EU’s track record of leading regulatory efforts, citing the General Data Protection Regulation (GDPR) as a precedent.
Mark Ferguson, a public policy expert at Pinsent Masons, emphasized that businesses must collaborate with lawmakers to navigate the evolving regulatory landscape as technology continues to advance.