AI Strategy & Vision · 2026-03-31 · 45:00
Josephine Teo on Singapore's AI priorities and online safety safeguards
In Brief
At a Lorong AI media Q&A, Josephine Teo details bilingual AI talent, agentic AI governance, the National AI Impact Plan, and AI's impact on the workplace.
Key Takeaways
- Bilingual AI is defined per profession — the minimum AI vocabulary an accountant needs differs from a lawyer's.
- Small businesses need IKEA-style AI tools — pre-tested, packaged for self-assembly.
- A National AI Council chaired by the PM, with six ministers, frames regulation as 'speed with safety'.
Summary
At a Lorong AI media Q&A, Josephine Teo unpacks the bilingual AI talent strategy. She uses minimum vocabulary in language learning as the analogy — about 4,000 to 5,000 English words, 2,000 to 2,500 Chinese characters — and argues the minimum AI vocabulary differs by profession. Singapore is working with ISCA, the Law Society of Singapore and other bodies to define those vocabularies and aims to train 100,000 bilingual AI users.
For small and medium enterprises she wants AI tools to feel like IKEA furniture — stress-tested, certified, with instructions, so a small business owner can confidently assemble them. The Hawkers Go Digital experience showed the real need was not many apps but two things — mobile payments and online ordering. She also confirms a National AI Council chaired by the Prime Minister with six ministers, with over 60 AI centres of excellence already established.
On governance she repeats the 'speed with safety' line: start with existing law and risk frameworks, plug gaps where they exist — this year's agentic AI framework is one such extension. On junior roles, she counters the fear that they're disposable: AI speeds up prototyping but bottlenecks at integration, and AI-native juniors may end up the change agents. Citing lessons from the EU AI Act, she warns against rushing to regulate and stacking rules to undo earlier rules.
Full transcript
Caption language: en · Fetched: 2026-05-02
So, I remember the first time I talked about AI bilinguals, the response was a very good one. People understood what it meant. It means that you still need to be very good at your first language, but you need to acquire another language. So people understand that you need both, and it's actually when you are able to put the two together that you are more effective. The real challenge, I think, is that in learning a new language there is the concept of a minimum vocabulary. If you're learning the English language, you will encounter people, maybe even members of your family, especially the older generation, who only know a smattering. They know a few words.
And you know that that minimum vocabulary is not good enough for them to be able to read The Straits Times, to write an email, or to converse with someone else. So fluency requires a certain minimum vocabulary. I believe the experts say that for the English language it is something like 4,000 to 5,000 words, and for the Chinese language something like 2,000 to 2,500 characters, to reach a certain level of fluency. We're not talking about just being able to go to the market and buy something. So, depending on the language we're talking about, the minimum vocabulary is different. It's the same if you think about what a professional accountant will need to know by way of AI: what is the AI minimum vocabulary for them to be functional and effective in the AI age?
And likewise, if you are a lawyer, a legal practitioner, you will need a facility with AI, but the minimum AI vocabulary that you need may not be identical to what the accountant needs. Which is why our approach is not to try and go it alone. Our approach is to work with the professional bodies. With the accountants, fortunately, there is ISCA. I can't remember what it stands for. ISCA stands for Institute of something. [laughter] But ISCA is very much on board. They know that the accounting profession will potentially be disrupted by AI. The question is, can it be disrupted in a good way? It doesn't have to be disrupted in a bad way, primarily. So we are working with them, trying to uncover the minimum AI vocabulary requirements that an accountant will need.
Likewise, with the Singapore Law Society, with support from colleagues in MinLaw, we also want to uncover the minimum vocabulary a lawyer needs to master so that they can be fluent enough in AI for it to help them in their work as legal practitioners. So that's the concept. And I think if we work together with these professional bodies, we have a better chance of identifying the correct way. So it's not just everybody learning AI willy-nilly. Basic-level AI is not harmful to anyone. Just like when you go to the market, being able to speak a little bit of Malay is actually quite helpful. But that sort of vocabulary will be very confined to, okay, terima kasih, and is that all?
That's obviously not the level of vocabulary that will make a professional effective in this age. Now, when it comes to the enterprises, to some extent the concept that works better is the IKEA type of concept. It is very difficult if you want to use digital tools, AI tools, and you insist that it be done in a bespoke manner, meaning uniquely created for your business, because that almost certainly will be more costly. And you need, in a way, to help the smaller enterprises truly acquire the ability to build, because the democratizing aspects of AI actually do allow some ability to build.
So, just as you and I over time acquired some confidence in building a table or a bookshelf that we buy from IKEA: we buy the components, and it comes with instructions. But it hasn't reached that point yet. For many of the enterprises, it's still too difficult to use. I know that my staff are using it, but what does it mean for my business? How can I use it in a cleverer way? Part of the reason there was a whole craze about OpenClaw was precisely because it turned out to be potentially one of those IKEA products: can I use AI agents to automate some of the work that I now need humans to do? The problem is that it's not quite IKEA.
You are probably too young to remember this, but the first time I visited an IKEA store, I remember one particular display that was quite intriguing. They showed this big sofa, and they had a contraption that was banging away at the sofa. The idea was that IKEA was trying to tell me that it's safe to use, because this sofa is not going to collapse on you. It looks flimsy, but it's not going to collapse on you, and neither will a bookshelf or a table, because it has gone through multiple rounds of testing. So if you're a young parent, don't be scared. It looks very inexpensive, so you worry about its quality, but I can tell you that it has gone through these multiple testing sessions, so you can bring it home and it will be okay.
We will need to get to a point where certain tools have safety built into them. We will need to get to a point where there is assurance that tools that can be used by enterprises carry a certain label saying they have been tested and are not going to end up causing you to suffer harms of this nature or that nature. So for enterprises, I think we are also at this phase of discovery. We need to know, sector by sector, which furniture will be more relevant. If you sell stuff and have multiple customers, maybe a chatbot will be very useful for dealing with customer queries, because you can't be answering customer queries all the time. An AI chatbot can help you do that.
Maybe it's very good because it can tell your customer: come at this time, because this is when we are open; don't come at this time, because it will be very busy. Some of this information is already readily available; it's just that customers may not go and check it. They call the stallholder by habit. So this could be one. It is, in a way, how we discovered through Hawkers Go Digital that when you talk about hawkers going digital, it's not a whole range of sophisticated applications. They need, for example, to be able to accept mobile payments, and they may also need to be able to take orders online. These are basically the two things that they will need.
So we will need to find, sector by sector, the killer apps, so to speak, the tools that they will find most useful, and then also ensure that the tools have been tested for safety. Then I think you have a better chance of having the benefits of AI experienced by more people. Otherwise it's still just theoretical. So I would say that both for the AI bilinguals, the 100,000 that we want, and for the enterprise part of it, the 10,000 that we hope to get going, the idea is really to get to the point where the minimum vocabulary is well understood, and where enterprises know the key range of furniture, with safety features, that they should be thinking about using. Put it that way.
The broad concept we have is that speed comes with safety. So speed with safety is an important concept. As we talked about earlier, the digital domain is dynamic, expanding, already vast. We should not be surprised by every new thing that comes about; that is the nature of the domain. And the approach we should take is that for almost all of them, there is scope for proper use, and there is scope and potential for misuse. You need to be able to understand both. Then the question is: when it is misused, what is the severity of the impact, and what is the scope? So you need to invest time, build up the capabilities to understand, and then ask what the correct mitigating measures are to put in place.
The first port of call when thinking about mitigating measures is to look at existing legislation, existing regulations, existing risk management frameworks. If the answer is not very satisfactory, then clearly we need to organize ourselves. Part of the reason we introduced a governance framework for AI agents earlier this year is to respond to this need. That is to say, we don't think it is enough to rely on what we have said about AI governance for generative AI, or about the correct approach when dealing with classical AI. You should never be satisfied; you need to look at how the safeguards can be made clearer. And you don't stop there.
Going back to the AI example: it's one thing to have a framework, a set of principles that ought to be observed. It's another thing to be able to test for it. That's what we've been working hard to do: what is the next thing you can do for the industry? Can you find some way to test for it? When you are able to test for it, there is greater assurance. So the idea is that safety is actually integral to speed. Without safety, you don't dare to move all that fast. Speed with safety is a concept that we want to push when it comes to AI adoption and also AI safety.
I would say that if a trade association or professional body is very forward-leaning, that encourages us to prioritize that sector, simply because the government on its own won't be able to develop all the right answers and uncover what the minimum vocabulary really is. You need people in the professions to come forward and say, here's what we need. And I think it is also important for us not to do it piecemeal and one-off, because AI isn't static. It is moving. Today you may think the minimum vocabulary is one, two, three, what you need; in another month it will be something else. So being able to establish a relationship with a professional body is, I think, really important, because they must see this as their own interest to advance.
And we also want them to build up the capacity for continuous improvement in the capabilities of their members. So that's what we would be looking at. What I can share with you is that we have been in discussion with the HR profession, and since I talked about it, the logistics sector has also been very forward-leaning. They want to see what more they can do for their members. So, more to come, I hope. Was that the second part of your question? I think it will not be a single metric that can tell you, and it's also not just about the success of an AI bilinguals program. What we would like to see are two things. One is that employability stays high. This is a very important thing.
People don't mind you telling them about all these nice, fancy programs, but ultimately what they want at the end of it is an assurance: I will still be able to have meaningful work. There will certainly be adjustments that I have to make, and I'm prepared to make those adjustments, but I would like to continue to be able to take care of my family through my own means, and that means contributing through work. So one very top-line metric that we will see ourselves contributing towards is whether the employability of professionals remains high. I would also say that it's not just employability. Over time we want to make sure that Singapore continues to provide good opportunities for people to advance, and advancement must certainly also involve your salaries, your wage levels.
So we will be looking at these kinds of broader indicators of whether the economy as a whole is still able to create good jobs and sustain good jobs, and whether individuals in the workforce continue to be able to make progress. Can you attribute the success of these efforts solely to the AI bilinguals program? I think the answer is no. But can you see it playing a meaningful part in ensuring this happens? I think the answer is also definitely yes. So we're quite clear about that. The two go hand in hand, actually. When I was in the labor unions, as an MP from the labor movement in the late 2000s, one thing I learned about workers is that the motivation to acquire new skills depends very much on whether employers have a good use for those skills.
Whether it is their current employer or a prospective employer, the enterprise sector being able to keep up with the motivations of the workforce is equally important. You can't have one and not the other. If you have a workforce that is not interested in investing in securing their relevance for the future, it won't work. The companies will be banging their heads against the wall, saying we need to transform, we need to try this new thing, while the workforce is basically saying: we're not interested, we're very comfortable, we're very happy with our lives, please don't disturb us, bosses. You can't have that. If the technology is ready but the people are not, it won't work.
But on the other hand, the workers may be enthusiastic, whether because of the unions' outreach efforts (the unions are a very important part of helping workers understand what will be required in the future, and they also have a lot of programs to support the reskilling of the workforce). If that enthusiasm meets with silence from the employers, basically no resources, no encouragement, no support when workers want to try out new methods of doing things with AI, then the enthusiasm doesn't go very far. I think you need the two to go hand in hand, which is why, when we designed the national AI program, we said you can't have one and not the other.
You need something for the workforce, for the specific group: not just the base of users, which is important, but also people who are going to be champions of their profession in advancing the use of AI. But equally, you need to look at the enterprises. You need to help them see that there is this possibility of building their own, and that it may not be as difficult as they think; in time it may be as easy as assembling a table or a bookshelf. So you need the two to go hand in hand, and the National AI Impact Program cannot have only one and not the other. The beginning years of a person's career are foundation-building, and we know this from studies that have been done in the past.
For example, if a person graduates into a very soft employment market, the implications for that cohort, for those individuals, are actually quite long-lasting. It's not that after the downturn the upturn comes and then everything is hunky-dory again. There is always the risk, the potential, of some scarring, because in your most formative years you didn't get all the opportunities you could have had to build up your skill set. So we are very mindful of that. There is a parallel in language learning. Children during COVID times weren't able to observe the teachers mouthing words, and we are very concerned about whether that has a lasting impact, and whether you can actually make up for it.
So there is a parallel with a person who is new to the workforce, and it is definitely the government's intention not to allow this to happen. Now, what have we done in the past that could signpost possible interventions? Of course, I'm not saying we are going to introduce these interventions just yet, because the situation is quite dynamic, quite fluid, and what we have learned in online safety and also with AI tools is that you don't necessarily jump at it. You have to have a good cadence and the agility to respond, but you can't be jumping at everything. You need to watch it a little bit, because many things come and go, too; they don't last very long. So you have to watch it.
But just to signpost what we were able to do on past occasions when there was concern about graduate employment: during COVID, for example, we came up with the SGUnited traineeship scheme, where we helped trainees acquire that initial start to their careers, and the government came in with generous support for the employers. Basically, the message to the employers was: it's useful for you to have good talent, but at the same time, please make sure the trainees are not just filling the numbers; they should actually be contributing. So that is one type, the SGUnited sort of traineeship scheme, that we've been able to do in the past. But each challenge is different. The circumstances we find ourselves in may not be identical to what we needed to deal with during COVID.
So we are continuing this discussion with MOM colleagues and MTI colleagues, and we are monitoring, watching it very carefully. Our approach is that when you see something possibly becoming systemic, something that could create an effect that is long-lasting and not reversible, that is perhaps when you must step in. But this is not something you jump into. So, just to give that assurance: we don't dismiss these concerns at all. We take them very seriously, and we're watching exactly what is happening. The idea that entry-level staff are no longer useful and needed is also being challenged. I have, for example, spoken with very senior people, even in software engineering, and they tell me something quite interesting.
They say it is true that the AI agent may be able to help you get quite a bit of the work done quickly. But there is a new bottleneck. The bottleneck is not in the proof of concept of any tool. The prototyping is very easy, but you can't stop at prototyping. You need to integrate it into the existing workflows, and integration actually becomes the bottleneck. So yes, the AI definitely improves your productivity in getting to the proof of concept, but at integration you get stuck. Until organizations find ways of dealing with the new bottleneck, the flow-through of the impact is not there, and so the feared consequence, that suddenly all these software engineers have nothing to do, has also not materialized. That's one perspective.
The other perspective is that if you are used to doing things in a certain way, it doesn't come naturally to you to invert it and think of doing it in a completely different way. You are not AI-native, so even with the best of intentions, you still think in pre-AI-era terms about how to get something done, and it's less easy for you to do it in an AI-native, AI-first way. Now, this is where new hires can potentially serve as the agent of change. There is the AI agent, but the human is still needed as the agent of change: the agent who has figured it out, who is able to demonstrate and to say, "Here's how we would potentially integrate it," because they don't come with as much baggage.
So it remains to be seen which of these will move faster, which will be the stronger force. But just to highlight that there are opposing forces at play: some support the notion that entry-level hires will have a more difficult time, while others support the notion that entry-level hires may be exactly what organizations need to use AI effectively. What I said earlier, that junior, entry-level hires will not necessarily have a harder time, depends on the enterprise. And the difficulty with integration applies to seniors, too.
Even if an individual were to acquire the AI skills, even if they went through support to become AI bilingual, whether the organization is ready to help them translate that into effective use and integrate it into the existing workflows, processes, and systems is another matter. That bottleneck may mean they continue to do their work in a less effective way for a longer time. Except that the enterprise itself may then be at risk: maybe a disruptor comes along. And maybe the individual says, if one employer is not appreciative of my bilingual skills, maybe another employer will be more appreciative, and that other employer is the one that offers more upside.
So I think it would be fair to say that adjustments are definitely foreseeable. The question is whether the adjustments happen within the same organization or within the sector. There will be certain individuals who find it difficult to keep up with what their organizations are trying to do, and they may find that their skills are better used elsewhere. So a certain amount of labor mobility, I think, still needs to be upheld. And that goes to our ability to help people acquire employment-relevant skills, which in turn goes to how our whole continuing education and learning system also needs to be updated.
So you see on the MOM and MOE side this sense that workforce and skills need to come back together again because the environment has changed, and a tighter nexus will be more useful for potential job seekers and also for companies redesigning their work. You also see the government continuing to look at our programs and strengthen support for individuals, to the extent that even if they have to go back to school and take a new qualification, a postgraduate sort of qualification, there is support in terms of helping them manage their financial commitments. Those are schemes that have received good feedback, and we will look at whether newer interventions are required to help people who are more senior in the workforce also stay relevant.
But the upshot is this; it goes back to what I said. We don't want growth for growth's sake. We want growth that uplifts many businesses and many members of the workforce. So if we see that in a certain domain, a certain sector, the way AI is being implemented is impacting people in not the right ways, we would have to see what interventions we need to introduce, and also look at what we can potentially do to support the people who are affected. The good thing is that we are not starting from ground zero. We already have a lot of these existing schemes. We have the resolve, and we also have the resources to make it happen. Well, firstly, what is the signal? It's not specifically a signal to send to the world, and it's not specifically a signal to send to our citizens.
It's just the right way of doing it, because the scale of the impact and the scale of the benefits we hope to achieve is so wide that we thought it deserves the level of attention of a national AI council. And the PM has been very involved. He launched the National AI Strategy 2.0 and has continuously asked for updates on the various aspects we are implementing. To give an example: when we started, we thought we would try to grow a base of AI centers of excellence within companies. And I have to admit that the reception, the enthusiasm of the responses, was such that by now we already have more than 60 of these AI centers of excellence. That pace actually surprised me; it was faster than I thought would materialize.
So we've reached a point where we say we can afford to take the next step, and that's how the NAIC came to be formed, with the PM chairing it and the six ministers on it. It's very typical Singapore cabinet style: it is teamwork. For me, certainly, because of MDDI's involvement, it means being plugged into the international conversations on both the adoption side and the safety side, and supporting and working together with the other colleagues to advance the agenda as best we can. To round back: although we were planning for it, internationally it has been talked about. A colleague of mine was in the UK attending an AI-related conference or workshop, and he said that quite a few of the participants had read the PM's speech.
That means it's being noticed. But beyond knowing that your PM chairs a national AI council, I think people are more interested in what the council was able to push through. So it goes back to the national AI missions. It goes back to the Champions of AI program. It goes back to the National AI Impact Program. There will be a lot of interest: were you really able to help so many enterprises and so many professionals acquire these AI-relevant skills? Conceptually, how did you think of it, and how did you implement it? So I think there is a lot of interest. Now then, to round back, there is continued interest in how Singapore tries to balance governance and safety requirements with innovation and adoption.
The way we have looked at it is that safety and governance are enablers of adoption and innovation. They are not bad things per se. However, if you're not careful, you may end up with the appearance of safeguards without actually achieving the safeguards' intended effects, and all you have done is put roadblocks in the way. So being thoughtful about how we introduce regulations and guardrails is, I think, still the Singapore approach. It means we can't attempt to do it by ourselves. It means we must talk to our colleagues internationally about the latest thinking. And because the field is still so nascent and evolving very quickly, on almost every topic there will be opposing voices. One sees it this way; another sees it that way.
And then a third one comes along and says, "No, no, no, you're both wrong. It's actually this third way." So the approach we take is that we will also have a viewpoint. We will also provide our take on it. It may not fully agree with everyone's, but it is a viewpoint that meets our requirements and supports our own interest in having speed with safety. What we attempted to do when we introduced the code of practice for designated social media services was, first, to make it a live code. We never had the expectation that the code would be done once and for all. That's just not going to happen in the digital domain. Every time we put out something, we are very conscious of the fact that the day we put it out, it's already a little outdated. That's just how fast the digital domain moves.
So our whole ethos is: keep improving it. Don't assume that it's done. The idea that the code needs to be refreshed and updated on a regular basis was with us from day one. Having said that, you must start somewhere. The lesson we took from colleagues around the world who attempted similar things was that you can bite off more than you can chew. You can articulate a very broad set of measures that attempts to cover everything, and you will find it very difficult to operationalize, because you were either not specific enough or too prescriptive, neither of which is helpful. Not so long ago I interacted with colleagues in the EU. As you know, the EU has the EU AI Act, and they've had the Digital Services Act and the Digital Markets Act.
And they are introducing measures to basically adjust the requirements that will be asked of companies, but they have to do it in a way that doesn't go against the principles they stood for. Now, this is a very difficult thing to do: you put regulations in place, and then it's almost as if you need to put more regulations on top to say that some of the earlier regulations no longer apply. Just imagine how hard that is to do. So that is a very useful lesson to take away: in our eagerness to regulate, be a little careful and mindful that you don't end up doing things you regret later. That's one important lesson we take away. So then, what has been our experience? Even on day one, were we satisfied with the code? No.
We always knew there were things we wanted to do better. The proper way to do it, however, is to introduce it. You establish: here are the things we need to see done. Then you have one year, and then you make a report: please tell me, in these areas where you were supposed to have introduced safeguards, how effective they have been. This is also an iterative process. You receive one set of reports and realize that the way the reporting is done is not satisfactory, so you go back and say that next round we want more of this or more of that. But be that as it may, even your initial set of requirements already allows you to see whether the regulated entities, the designated social media services in this case, are moving directionally in the way you want to see.
So you don't expect it to improve overnight, but you do want to see improvements over time. Then there is another aspect. There are six of you, so we get a chance to see who's doing better and who's really not trying as hard as they need to. And if you fail to meet the mark, we have to put you on notice. In this case, when it comes to child sexual exploitation materials and terrorism materials, we've basically told X and TikTok: not good enough. Whatever safeguards you put in place, our testing shows they didn't prevent all of it. They will have to hear us out, and then they will have to do better the next round. So it is a back and forth. Would we prefer to have something where you push a button and it's all done? Of course, but it's not going to happen that way.
So there will be a bit of this back and forth. That's a lesson we've taken. When it comes to age assurance, perhaps it is more useful if I deal with it in the context of how many jurisdictions are thinking about social media bans. I imagine that is very much on your minds, too. What are we seeing, and what is our own thinking? First, what we are seeing. Number one, yes, there are many jurisdictions that want to introduce a ban based on age. But not everyone wants that. Estonia, for example, is very clear: they don't believe that's the right approach. Now, the Estonians are actually very thoughtful people, and they have been on the receiving end of many cyberattacks. They are very familiar with this territory. They are familiar with online harms.
They are familiar with the dangers of the digital world. So they are not unthinking about it, and they also have a very vibrant digital economy. So they are a country whose views deserve some understanding and attention. What are they thinking? They are thinking that you cannot expect your citizens to navigate the digital world without having been put in it. They need to learn. Their perspective is: no, we don't believe that below a certain age children should have no exposure to social media. They think it's part and parcel of life; you've got to learn to deal with it. So that's the first thing we see: not everyone believes an age-based ban is the right thing to do.
The second thing, and by the way it's not just the Estonians: I think Belgium also decided not to sign on to what the EU was doing, which is an age of majority for social media, because one of their regions objected to it. But what's the other thing we have observed? Quite interestingly, some jurisdictions, like New York, are not banning social media access for children under a certain age. Instead, the governor wants to ban certain features. I think the feature New York specifically wants to ban is algorithmic feeds, meaning feeds personalized to you. They think that children should not be on the receiving end of algorithmic feeds.
So they want to ban that, and they've passed it into law at the state level. I think the social media companies have 180 days to respond. So we are watching it: what is the response? Are they able to do it? That's useful. But that's not all we are seeing. The other thing we are seeing is this: quite a few of the jurisdictions that have bans actually allow the ban to be lifted very easily with parental consent. So it is no access, except if the parents say yes. That's actually not a real ban; it's very easily lifted, as long as you are able to say that a parent agreed to it. But tell me, online, how do you know the parent said yes? How do you establish that the relationship is that of a parent? Do you check the person?
I don't know how it is possible to do that. But anyway, we are interested to see the implementation. You may say you need an adult. Yes, but even if you were able to ascertain that the person who gave consent is an adult, how do you know it's a parent? So parental consent seems to be a valve. It seems to us that there are jurisdictions that feel they are under pressure to introduce a ban, and parental consent is a way to exit from that ban. Another thing we have observed is that the whole topic of age assurance isn't adequately addressed. A ban can be effective provided you have robust age assurance. But if you don't know the age of the user, and you can't be very certain of it,
then it's too easy to circumvent. So having robust age assurance measures is another very important part of what you try to do in regulating online safety. One final observation I would like to call out: it's well known that there will be migration risk. If you make a service very difficult to access, a young person may in fact say, "Forget it, I can go elsewhere." And droves of them can migrate. The problem is you do not know where they migrated to. And they will stay under the radar, because they are determined to stay under the radar, so as to avoid that channel also having a ban implemented. So the migration risk, unfortunately, is present, but it will not be visible for a long time.
And you must factor that into the consideration: even if you are determined to net someone into this coverage of bans, you won't know where they are until much later. So those are the observations. What do we think? We think that, notwithstanding the fact that we already have some guardrails, which I mentioned in the opening remarks, online safety regulations still need to be strengthened. No question in our mind. And the first place to start is really with age assurance. You need to be able to tell quite accurately the age of the user in order for whatever protections you intend to apply to actually be made available. You can have a so-called teen account, which I think Instagram has.
But if you don't know that you're dealing with a teen, then so what if you have the teen account? You didn't bring it to bear when it needed to be brought up. So age assurance is something we will definitely do. We've talked about it, and it's part of the enhancements to the online safety regulations that we will certainly implement. But there is more. There are also enhancements to safety features that we'd like to see. In the first iteration of the code, we focused mainly on content. But we know that when individuals go onto social media, it's not just inappropriate content, especially for children. Too much sex, too much violence: that's one part of the concern. But there are also other features that are concerning.
For example, the feature of direct messaging, which means a young user may be on the receiving end of unwanted attention and unwanted interaction. Parents tell us they are very fearful of that. And I can completely understand, because in the physical world, we warn our children. We say, "Don't anyhow speak to strangers." But if the strangers can reach the children online, what is the parent to do? So this is one type of safety feature we think needs to be strengthened. Another could be features that promote excessive use. For example, autoplay: you watch something, and before you're done, something else plays. So is there a way we can enhance the safety of the service and make sure these features are properly dealt with?
The way we are doing it is that we are engaging the social media services, both on age assurance and on enhancements to the safety features. And we will need them to lean forward. I should emphasize this: because each service is designed differently, it's very unlikely that you have the same safety requirements across the board. There will have to be some differentiation because of how each service is designed. Although it is nice to say we have the same expectation, you need to account for the fact that the services are designed differently. So the same expectation of a particular design or a particular feature needs to cater to the different designs.
So that's what we are also focused on doing: being quite clear, for this particular service, how do you make sure that safety is built into it adequately for children? If I may put it this way: if a car did not have adequate safety measures, we wouldn't want anyone to drive it. So we need to be mentally prepared that if the safety features are not good enough, we will have to ask: is it roadworthy, and should children be exposed to it? That's something we are seriously thinking about. I would perhaps add two more points. One: remember I was describing how, for road safety, the car must be safe, but the road also must be made safer. And how do we learn to be safe on the roads? Our parents teach us: okay, don't dash across the road. And so on.
So I just want to reference that in many of these jurisdictions, eventually the regulators just say: parental consent. We see how digital parenting becomes very challenging. Parents have to deal with many responsibilities. They also have to deal with a world surrounding their children that they themselves did not quite grow up in and are not experiencing to the same extent. Parents these days could be Instagrammers, but the children are on TikTok. It's not an identical service. So how do they understand the world of TikTok that their children inhabit? And how do they give correct guidance?
So this idea of digital parenting, helping parents know what safety habits they should cultivate, and also helping them exercise parental controls on the different platforms, is a practical way in which we can support them. Many parents are not familiar with the parental controls, even though they are there. They just don't know how to use them, and they also don't know how the children get around them. So we think this is a complementary but equally important move. That sums up, for online safety, our observations, what we think, and what we want to do about it. We will be strengthening the guardrails. We believe that online safety regulations can be enhanced, and we will definitely start with age assurance requirements.
But going beyond that, we want to ensure age-appropriate experiences for children who use these services, and we don't expect to get to the right answers ourselves. We intend to consult the public, and when I talk about the public, specifically, we want to consult parents. We also think it is very useful to consult the youth users themselves. We want to hear from them: to them, what is safety? What do they see as features that are perhaps not so helpful? What do they consider a reasonable experience when they go on a social media service, and what do they welcome by way of child safety features? So that's the approach we are taking.
If you find that a particular service, just like a particular make of car, is not safe to use, you must seriously think about taking it off. And that's the kind of approach we are prepared to take. We would have to look at the specific design of the service, look specifically at the kind of child safety features that can be introduced, and then we will assess. And yes, if we need to take this vehicle off the road, we may have to. There are actually a number of areas of great concern. One is of course scams. We've had a scams workstream up and running for several years now. We are very concerned about how the scammers impersonate government officials in particular to perpetrate their scams, preying on the trusting nature of seniors.
I believe that MHA not so long ago gave some numbers on how the scam figures are panning out. They suggest to us that some of the measures are working. Amounts lost to scams and the number of scams reported in the most recent 12-month reporting period, compared to the previous 12 months, have actually come down. So there is some encouragement for the direction we are taking. There will be more measures from MHA; incidentally, I still continue to chair that work group. I think at an appropriate time they will talk about them, but the measures are not stopping. What you have also highlighted is another area of concern, which is AI-generated disinformation. And it is indeed very worrying. There are different types of concerns.
One type is that they try to tear away at your trust in institutions that have tended to be trusted. For example, the Prime Minister is on the receiving end of this kind of AI-generated content. They will stop at nothing, so we must know that. But it's not just that. We were already very worried about AI-generated content being used to misrepresent candidates in an election. With the ELIONA Act, at the time we looked at it, we said the existing safeguards were not good enough, so we needed to quickly think about the right regulation, the law to put in place, and we did. So at the general election, I think we didn't see AI-generated content become a problem. It didn't become a factor. But you're not always in an election.
I think particularly in more recent times, you would certainly have come across AI-generated content that clearly stoked hate. They are stoking hate in society, pitting one community against another. So our approach, again, is to go back to the existing levers and ask: if and when similar types of AI-generated content pervade our online space and are intended for Singapore audiences, what do we do, and are the existing levers good enough? If they are not good enough, then we are prepared to strengthen them. That process is ongoing. I think the first thing is that you have to be able to call it out. And when you call it out, you must also have credibility.
If you will recall, some years back, former Prime Minister Lee Hsien Loong spoke at the National Day Rally to socialize citizens to the reality that they would be on the receiving end of influence campaigns, that they were getting certain types of content painting some countries in a very positive light and others in a very negative light. And then Prime Minister Lee said to citizens: you must ask why you are receiving this type of content, and not blindly accept it. So you have a need to call it out and to specifically point to it. Then you also need other measures in place. In our case, we have laws against fake news and online falsehoods. We have laws against foreign interference.
And each time the government takes action, for example preemptively banning certain websites because it looks quite likely they can be misused, that also is a signal to citizens. Citizens' vigilance can be compromised, because there's only so much we can take. Even reminders can wear a person down: I don't want to hear it anymore. This happens, right? Then you also need to preserve an infrastructure of fact. Public service media: you, your role, what you write, what you produce. People must feel that if and when they have an interest in a topic, you can be trusted. They must have no doubt that you are not trying to lie to them, cheat them, or present anything untruthful.
If we didn't have that, another piece would be missing. You can't rely on one thing. You can't rely only on your Prime Minister talking about it all the time. You can't rely on one or two laws. You can't rely on public service media alone. You need all of these to work together. I should also make a plug for NLB. NLB is part of this infrastructure of fact. We continue to build up NLB as a resource. We help people widen their diet of information. We make the libraries attractive places: yes, you go there and you feel comfortable, but there is also another very important objective. Citizens need to feel that being informed is part of their normal life, and they must still want to be informed by trusted sources. So you need all of these working together.
And if you find that there are still areas needing further work, as we have found, you act. For example, in the past the economic model of public service media worked, so we didn't need to step in the way we do now. But now it doesn't, so we have to step in. That's the approach we take. On the Online Safety Commission, we are very fortunate: it's progressing very well. I can confirm that we will be able to meet the planned timeline. It will be operational; it will open its doors at the end of June this year. And we will be able to say more about its substance then. Primarily, you need good people to staff it. That means finding the right person to be the commissioner, because that role is a very heavy one: it's the inaugural commissioner.
He or she will have to set the tone and decide on SOPs that are implementable. But equally, there will have to be an appeals panel, so you also need the right people to be part of it. What I can share with you is that these things are progressing very well. Give it another month or so, and we will be able to provide much more detail.