The European Commission has taken a significant step towards regulating artificial intelligence (AI) by initiating the development of the first-ever General-Purpose AI Code of Practice. The Code is closely tied to the recently adopted EU AI Act and aims to establish clear guidelines for providers of general-purpose AI models, such as those underlying ChatGPT and Google Gemini, particularly in areas like transparency, copyright, and risk management.
Nearly 1,000 experts from academia, industry, and civil society recently convened online to contribute to shaping this Code. Leading the process are 13 international experts, including renowned AI researcher Yoshua Bengio, who is spearheading the group focusing on technical risks. Bengio, a recipient of the prestigious Turing Award, is known for his cautious stance on the potential catastrophic risks posed by powerful AI systems.
The working groups responsible for drafting the Code will convene regularly with the goal of finalizing the document by April 2025. Once completed, the Code will have a significant impact on companies seeking to deploy AI products in the EU. While the EU AI Act provides a regulatory framework for AI providers, the Code of Practice will serve as a practical guide for compliance.
Key issues addressed in the Code will include enhancing transparency in AI systems, ensuring compliance with copyright laws, and implementing measures to mitigate risks associated with AI technologies. The drafting teams face the challenge of striking a balance between promoting responsible, safe AI development and fostering innovation, a delicate task that has already drawn criticism of the EU's regulatory approach.
The implications of the Code’s implementation are far-reaching. If executed effectively, it could establish global standards for AI safety and ethics, positioning the EU as a leader in AI regulation. However, overly restrictive or ambiguous guidelines could hinder AI development in Europe, potentially driving innovators to seek opportunities elsewhere.
While the EU may aspire to see its Code adopted globally, countries like China and the US are widely perceived to prioritize AI development over risk mitigation. The recent veto of California's SB 1047 AI safety bill exemplifies the divergent approaches to AI regulation across regions. Although the EU may not be the birthplace of Artificial General Intelligence (AGI), it is also less likely to be the epicenter of any potential AI-related catastrophe.
In conclusion, the EU’s initiative to establish an AI Code of Practice reflects a proactive approach towards balancing innovation and safety in the rapidly evolving AI landscape. The outcome of this endeavor will not only shape the future of AI regulation in Europe but also influence global standards for AI ethics and safety.
