AI Governance & Regulation · 2023-06-07 · 06:54
Singapore launches AI Verify open-source testing framework
In Brief
Singapore launches the AI Verify Foundation at the Asia Tech x Singapore summit, establishing the world's first open-source community for AI governance testing.
Key Takeaways
- Singapore's AI Verify Foundation pulls IBM, Microsoft, Google and cross-sector firms into open-source AI testing.
- The toolkit goes open source, covering transparency, security and accountability, and letting companies self-test models.
- Salesforce's Kathy Baxter says the biggest hurdles are change management and the shortage of AI ethicists.
Summary
At Asia Tech x Singapore, the country launched the AI Verify Foundation to bring IBM, Microsoft, Google and a range of cross-sector firms into an open-source community building AI testing tools and shaping international standards. The toolkit is now open source so companies can self-test models and feed results back to IMDA.
Salesforce principal architect of ethical AI practice Kathy Baxter argues ethics has to be embedded at data collection — asking whether the system should exist at all, and whether the data represents everyone the AI will affect. She says transparency is in companies' own interest because it earns trust, regulation or not.
Baxter names two main blockers: change management, since success has to be measured by impact, not revenue; and talent, since most AI developers don't have a responsible-AI background and AI ethicists remain scarce.
Full transcript
Caption language: en · Fetched: 2026-05-02
Singapore is harnessing the collective power of tech giants to help make artificial intelligence safer for companies and regulators. It has launched a network that taps the likes of IBM, Microsoft and Google to develop AI testing tools for responsible use and to help shape international AI standards. Launched at the Asia Tech x Singapore Summit, the AI Verify Foundation includes Big Tech firms and companies across different sectors. They will help foster an open-source community on AI testing and contribute to the made-in-Singapore AI testing toolkit known as AI Verify. The move comes as concerns grow over the technology's increasing popularity and its rapid development.
"We believe in using AI in a responsible way and deploying it for good, but we will also strive to shield society from the most serious AI risks. The private sector and the research ecosystem have rich expertise; they can and must be encouraged to participate meaningfully to advance AI for the public good."
Mrs Teo adds that making the AI Verify testing toolkit open source will enable system developers, solution providers and the research community to use and contribute to it in areas like transparency, security and accountability. The move is seen as a good way to help companies understand the AI that they use, adjust it to better suit their own needs, and co-develop new and better testing tools.
Earlier, CNA's Heidi Ang spoke to Kathy Baxter, principal architect of ethical AI practice at Salesforce, who emphasized the importance of having ethics at the foundation of building AI.
CNA: You talked about data collection. To really integrate these ethical practices into AI processes, you need this kind of data. So what are some of the metrics for ethical data collection, and how can organizations ensure ethical practices when collecting such data?
Baxter: Absolutely. If you don't have ethics in mind, if ethics isn't at the foundation of what you are doing and how you're building your AI, it's very difficult to tack it on at the end and actually have an impact. So first and foremost, ask not just "can we do this?" but "should we do this?" Is this a problem that AI should solve? Then you think about the data sets you're using to fuel the model. Are they representative of everyone the AI is going to impact? What are the potential toxicities and biases existing in those data sets? You have to measure those, look for them, and then take an effort to mitigate them. Then, as you are building the models, you need to get feedback from those who are going to be impacted, and you have to take special effort to reach out to underrepresented communities and make sure they are involved as well, so that harms you might not predict, they are able to identify.
CNA: And where are we right now on that front?
Baxter: A lot of amazing work has been done in the last 10-plus years, particularly by women and women of color. It's really an amazing space to see, so there's a lot of good work that has already happened, and there are a number of resources and standards: the NIST AI Risk Management Framework and, of course, Singapore's AI Verify toolkit, and they announced the AI Verify Foundation today. So there's really some amazing work going into making accessible, responsible AI available.
CNA: On the topic of responsible AI, is the onus on organizations that create these AI systems to make sure they're transparent? What government regulations actually need to be in place to ensure compliance?
Baxter: There's survey after survey showing that people don't have a lot of trust in AI, particularly the more powerful AI. So it's in organizations' best interests to ensure that they are transparent, because transparency leads to trust. Even if regulations don't require you to do it, it really is in an organization's best interest to be transparent in how it is creating and using AI.
CNA: And how would a company being transparent about its AI systems look, for a consumer who wants to use these tools?
Baxter: That is a fantastic question, because there is transparency in terms of the purchaser at a company of the AI they may be using, or an auditor, and then there's the consumer. If you are interviewing for a job in New York, for example, you have to identify that you're using AI in your hiring process. So how do you communicate to a consumer how that's used? You communicate in plain language what the model is doing for consumers, but then you give more technical detail for auditors or purchasing decision-makers.
CNA: So companies are really encouraged to be transparent about what they're doing with their AI systems, and at the same time to make them ethical as well. What are the greater challenges for organizations looking to integrate these ethical practices into their processes?
Baxter: First and foremost, for many companies, unfortunately, it's a matter of change management. It's a matter of changing incentive structures: not just measuring the success of a product or a feature by how much money it makes, but instead evaluating the impact it has and ensuring it brings more good than harm. Secondly, it's finding people with the right skill sets. There are many AI developers, but not everyone has the background to know how to build AI responsibly. As for finding AI ethicists, right now there aren't many. I expect we're going to see many people graduating from programs in the coming years, because this younger generation really has a great deal of passion and energy for creating technology that works for society.
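One of Baxter's concrete metrics, whether a data set represents everyone the AI will impact, can be sketched as a simple representation audit. This is an illustrative sketch only: the function `representation_gaps`, the field names, and the tolerance threshold are all hypothetical, and it is not part of the AI Verify toolkit or any Salesforce tooling.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Compare a data set's demographic mix against reference population
    shares, flagging groups whose observed share deviates beyond tolerance.

    records: list of dicts, one per example, e.g. [{"gender": "f"}, ...]
    reference_shares: expected share per group, e.g. {"f": 0.5, "m": 0.5}
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Toy data set that underrepresents one group.
data = [{"gender": "f"}] * 20 + [{"gender": "m"}] * 80
print(representation_gaps(data, "gender", {"f": 0.5, "m": 0.5}))
# flags both groups (observed 0.2 and 0.8 vs an expected 0.5 each)
```

A check like this only covers the "is it representative" question; measuring toxicity or bias in the content itself, as Baxter also urges, needs separate tooling.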
Related Videos
Over 250 AI experts gather in Singapore to set global testing standards
2026-04-20 · CNA · 03:46
Singapore-proposed AI safety testing standards take centre stage at an ISO international meeting, drawing over 250 experts from the US, China, Japan, South Korea and beyond — the working group's first session in ASEAN. Nearly 100 AI standards are now published or in development, triple the count of a year ago.
Josephine Teo on Singapore's AI priorities and online safety safeguards
2026-03-31 · Josephine Teo · 45:00
At a Lorong AI media Q&A, Josephine Teo details bilingual AI talent, agentic AI governance, the National AI Impact Plan, and AI's impact on the workplace.
Josephine Teo on enterprise AI adoption and online safety regulation
2026-03-31 · Josephine Teo · 02:30
Josephine Teo says the government is prepared to intervene if enterprise AI adoption falls short of expected outcomes, while releasing IMDA's second Online Safety Assessment Report.
Singapore's National AI Impact Plan: supporting 10,000 firms and 100,000 workers
2026-03-02 · Josephine Teo · 02:45
Minister Josephine Teo announces the National AI Impact Plan, targeting training of 100,000 AI professionals and support for 10,000 firms by 2029.
Josephine Teo urges nations to proactively address agentic AI governance risks
2026-02-20 · Josephine Teo · 05:12
At the World Economic Forum, Josephine Teo unveils the world's first agentic AI governance framework and calls on nations to proactively shape AI governance rules.
Singapore's view of AI has shifted: Josephine Teo interview
2026-02-11 · Josephine Teo · 22:15
Josephine Teo details Singapore's AI strategic shift — from cautious observation to full embrace — and how the government systematically drives AI adoption.