Verifying AI: will Singapore’s experiment with AI governance set the benchmark?


Abstract

The rise of generative AI programmes like ChatGPT, Gemini, and Midjourney has generated both fascination and apprehension in society. While the possibilities of generative AI seem boundless, concerns about ethical violations, disinformation, and job displacement have stoked anxieties. To address these issues, Singapore's Infocomm Media Development Authority (IMDA) established the AI Verify Foundation in June 2023 in collaboration with major technology companies including Aicadium, Google, IBM, Microsoft, Red Hat, and Salesforce, alongside numerous general members. This public-private partnership aims to promote the development and adoption of an open-source AI testing tool and to foster responsible AI usage by engaging the global community. The foundation also seeks to promote AI testing through education and outreach, and to serve as a neutral platform for collaboration. This initiative reflects a potential governance model for AI that balances public interests with commercial agendas. This article analyses the foundation's efforts, together with the AI Verify testing framework and toolkit, to identify their strengths, potential, and limitations, and to distil key takeaways for establishing practicable solutions for AI governance.

Citation (APA)

Lim, S. S., & Chng, G. (2024). Verifying AI: Will Singapore's experiment with AI governance set the benchmark? Communication Research and Practice. https://doi.org/10.1080/22041451.2024.2346416
