According to a Reuters report published on December 1 and informed by six anonymous insiders, the European Union's regulatory circles are in significant upheaval over the safety and regulation of generative AI.
Regulatory conundrum
EU officials are reportedly struggling to reach a consensus on effective regulatory measures for generative AI. This emerging technology, as highlighted by the sources, poses a significant challenge to the EU’s AI Safety Act.
First proposed in 2021, the AI Safety Act is a pioneering, risk-based regulatory framework for AI. It is intended to regulate AI models individually, according to their specific risk levels.
Current regulatory stance
Presently, the European Parliament has outlined three primary conditions for generative AI to be considered compliant:
- Transparent disclosure of the training data utilized in developing the AI model.
- Documentation summarizing all copyrighted materials used in training.
- A design that actively prevents the creation of offensive or unethical content.
Upcoming discussions and ongoing deliberations
EU regulators are set to convene on December 6 to deliberate further on generative AI regulations. Although the AI Safety Act received the European Parliament's endorsement in June, the specifics of its provisions are still being negotiated between the European Parliament and the European Commission.
Academic insight
A Stanford University study evaluated mainstream generative AI tools, including ChatGPT and Meta’s Llama, for compliance with the EU’s generative AI regulations. The study found that none of the assessed generative AI tools fully adhered to these regulations, with the disclosure of training data, particularly regarding copyrighted material, being a significant shortfall.
Future projections
GlobalData, in its 2023 thematic intelligence report on tech regulation, forecasts that the EU's AI Safety Act may not be enforceable until late 2023 or 2024, with a 24-month grace period likely to follow.
The report also stresses the urgency of regulating AI to mitigate risks such as misinformation, algorithmic bias, and data privacy and cybersecurity issues. It observes that few regulatory bodies have taken decisive steps in this direction, leaving the technology largely self-regulated by major tech companies.
With European Parliament elections looming in June 2024, the anonymous sources told Reuters they are concerned that if the regulatory discussions on generative AI do not progress, the entire AI Safety Act could be sidelined for lack of time before the elections.