The European Parliament has approved the AI Act, a groundbreaking regulation on Artificial Intelligence. Learn about the key points, timeline, and global implications of this significant legislative initiative.
On 14 June 2023, the European Parliament gave its approval to the proposed AI Act, marking a significant milestone in the regulation of Artificial Intelligence. This article provides an overview of the latest developments, including the timeline for implementation and the global impact of this groundbreaking legislation.
Key Points of the AI Act
- Prohibition of AI systems that pose an unacceptable risk to people’s safety.
- Expansion of the high-risk classification to cover harm to health, safety, fundamental rights, and the environment, as well as AI systems used to influence voters in elections and recommender systems of large social media platforms.
- Additional transparency requirements for generative foundation models, such as the one underlying ChatGPT, including the obligation to disclose that content was generated by AI.
- Promotion of innovation through research exemptions and the establishment of “regulatory sandboxes” for AI testing.
- Strengthening of citizens’ rights to lodge complaints about AI systems and receive explanations for decisions made by high-risk AI.
Timeline for Implementation
According to the current draft, which comprises 85 articles, the AI Act will become fully applicable 24 months after its entry into force, expected in 2024. Given the rapid evolution of the AI sector, however, the legislator may choose to fast-track parts of the process.
European Union at the Forefront of AI Legislation
The approval of the AI Act places the European Union at the forefront of AI legislation globally. Policymakers’ interest in AI has been increasing, as demonstrated by the significant growth in the number of bills related to AI passed into law worldwide. The legislative records of 127 countries reveal that the number of such bills increased from 1 in 2016 to 37 in 2022. Additionally, parliamentary records from 81 countries show a nearly 6.5-fold increase in mentions of AI in global legislative proceedings since 2016.
Rise in AI Misuse and Security Concerns
The Stanford University AI Index Report 2023 highlights a rapid increase in incidents related to the ethical misuse of AI. The AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) database shows a 26-fold rise in AI incidents and controversies since 2012. Notable incidents in 2022 included deepfake videos and call-monitoring technology misuse. The growing number of incidents underscores the need for comprehensive regulations to ensure security, privacy, transparency, and non-discrimination in AI usage.
Global Implications and Technological Supremacy
The AI Act has the potential to become a benchmark not only within the EU but also globally, much as the General Data Protection Regulation (GDPR) did for data privacy. As major powers race for technological supremacy, regulatory standards will likely become part of that competition. The Stanford University data further underscores policymakers’ growing interest in AI, signaling the importance of setting robust regulations at this pivotal moment.
The approval of the AI Act by the European Parliament marks a significant step towards comprehensive regulation of Artificial Intelligence. With key points addressing high-risk AI, transparency, citizen rights, and innovation, the Act positions the European Union as a leader in AI legislation. As the world enters a new era of technological competition, the AI Act’s implementation could have far-reaching global consequences.