EU Greenlights World’s First Major AI Law
The European Union first submitted its proposal for a regulatory framework for artificial intelligence in 2021. Since then, AI’s popularity has grown tremendously, especially after the introduction of generative AI technologies like ChatGPT. The EU’s determination to regulate AI has finally paid off after member states gave their final agreement to what has been termed the world’s first major law regulating AI. According to statements from the EU Council, the AI Act has now been approved and features a comprehensive set of rules governing AI. Here’s an overview of the AI Act, including what it means for developers and consumers, but first, a look at the technology’s rapid rise:
The Fast Adoption of AI Technology
The new AI Act was designed to create an environment where artificial intelligence technology can thrive without posing a risk to users. AI has seen widespread adoption over the past couple of years and has impacted various sectors. For instance, AI algorithms analyze security data on gaming websites to curb digital threats and breaches so players can enjoy online casino blackjack, roulette, poker, and other real money games. AI also recognizes player patterns and transaction data to detect suspicious activity. The technology allows casinos to personalize games, bonuses, and the overall experience while protecting player accounts from unauthorized access.
Artificial intelligence has many other applications, including in online shopping and advertising, search engine summaries and web searches, digital personal assistants, translation, smart homes, autonomous cars, and cybersecurity. The introduction of generative AI led many businesses to use the technology to produce web content and many students to use it for research and coursework. As the technology evolves, its applications continue to expand from simple tasks to more complex generative work. AI technologies like machine learning, LLMs, and natural language processing also help with task automation, analytics, diagnosis, and troubleshooting.
EU Artificial Intelligence Act Overview
European Union member states approved the AI Act, making it the world’s first major law regulating AI. The EU Council broke the news, confirming the groundbreaking act. The approval was well received across member states, with the Belgian secretary of state for digitization calling the adoption of the act a “significant milestone for the EU.” Like most tech legislation, the AI Act follows a risk-based approach, classifying AI applications according to the risk they pose to society. The AI Act emphasizes the importance of transparency, trust, and accountability while providing frameworks that allow the technology to flourish and bolster innovation in the EU.
According to the newly approved act, the EU prohibits AI systems deemed to pose an unacceptable level of risk. Unacceptable-risk AI tools threaten people’s rights and safety; they include systems that enable cognitive behavioral manipulation of people and vulnerable groups, as well as social scoring programs that rank individuals based on aggregated data analysis. Predictive policing and emotion recognition in schools and workplaces are also prohibited. The EU also defines a high-risk category covering autonomous vehicles, medical devices, financial services, education, and other areas where bias embedded in AI algorithms poses a potential risk.
What the AI Act Means for Tech Firms
After the EU Commission confirmed approval of the new AI Act, tech firms were keen to identify how the rules would impact their business. The act has major implications for big tech companies and any entity that develops, creates, uses, or resells AI in the European Union. Most US tech firms followed the developing law closely, as it is designed to address the advanced capabilities of generative AI. With the act now approved, tech firms must comply with all the rules or risk being fined up to 35 million euros (about $38 million), or up to 7% of their annual global revenue, whichever is higher.
Abiding by the new rules involves respecting EU copyright law, including transparency disclosures about how general-purpose AI models are trained. AI developers must also conduct regular testing and provide cybersecurity protections. The AI Act’s restrictions on general-purpose AI won’t be enforced for about 12 months, giving tech firms and other entities that long to bring their systems into compliance. Commercially available AI systems like ChatGPT, Gemini, and Copilot will have 36 months to ensure their technology complies with the legislation. According to the EU Commission, the act is now a reality, and what remains is to implement and enforce it effectively.
What’s Next for AI Regulation
The EU AI Act is only the first of its kind as the world hurries to draw up new regulations for the fast-growing technology. Most regions, including the US, Canada, and China, have yet to create comprehensive national laws targeting artificial intelligence. However, existing regulations often contain general provisions that overlap with the AI sector. For instance, consumer protection laws require businesses to apply fair and transparent practices when handling customer data, while traffic safety regulations extend to AI-powered technology and autonomous cars. The new EU AI Act provides a model that other jurisdictions can use to centralize their AI regulations, and most experts predict more regulation worldwide.