Singapore has unveiled a preliminary governance model for generative artificial intelligence (GenAI), citing its necessity to tackle new challenges such as incident reporting and tracking the origins of content.
This proposed structure expands upon Singapore's current AI governance model, initially introduced in 2019 and last revised in 2020.
According to a joint statement by the AI Verify Foundation and Infocomm Media Development Authority (IMDA), GenAI holds the promise of exceeding the capabilities of conventional artificial intelligence, yet it also presents inherent risks.
Singaporean government agencies acknowledge a growing international agreement on the need for uniform principles to ensure the safe and confident application of GenAI. They stated, "AI's use and influence transcend national boundaries. Our proposed framework is designed to spark global discussions among policymakers, the industry, and researchers, aiming for worldwide trust in AI development."
Singapore EDB will showcase its 2024 strategy at AIAI San Jose, and Generative AI Summit New York.
Local? Join them there.
Key aspects of the proposed framework
The draft framework incorporates suggestions from an IMDA discussion paper released last June. This paper identified six GenAI-associated risks, including illusions, copyright issues, and inherent biases, and outlined approaches to address them. The draft also benefits from previous initiatives, like a catalog for assessing GenAI model safety and results from tests in an evaluation sandbox.
It addresses nine essential areas identified by Singapore as crucial for fostering a trusted AI ecosystem. These areas center on principles ensuring AI-driven decisions are understandable, transparent, and equitable. The framework also provides actionable advice for AI model developers and policymakers, as noted by IMDA and AI Verify.
Content source and security considerations
One key aspect concerns content source, emphasizing the need for clarity on the origins and creation methods of online content. This is crucial due to the ease of generating AI-created content, like deepfakes, which could amplify misinformation, as pointed out by the Singaporean agencies.
The framework proposes collaboration with publishers, including social media and media outlets, to embed and display digital watermarks and other origin details securely and effectively, minimizing risks of bypass.
Another central aspect is security, as GenAI introduces new threats, such as prompt attacks via the model architecture. These can allow malicious parties to steal sensitive data or model weights.
The framework calls for new security measures, including tools for moderating inputs to detect unsafe prompts and digital forensics tools for investigating and reconstructing cybersecurity incidents involving GenAI.
The agencies emphasize the importance of balancing user protection with innovation. They note ongoing international discussions on related issues like accountability, copyright, and misinformation.
These complex, interconnected issues require a practical, comprehensive approach, as no single solution will suffice. As AI governance is still an evolving field, international consensus-building is vital. Singapore is actively working with other governments, like the U.S., to harmonize their AI governance frameworks.
Learn more about the challenges of regulating generative AI: