Speech Lab
AISG Speech Lab focuses on speech AI for Singapore and Southeast Asia — concentrating on the scenarios where general-purpose speech models fall short: Singlish recognition, multilingual mixed speech (English + Mandarin + Malay code-switching), and local accents.
📖 What it is
Speech Lab's research directions:
- Singlish ASR: automatic speech recognition for Singapore English
- Code-switching ASR: recognizing speech that mixes multiple languages
- Local-accent TTS: text-to-speech synthesis of voices with local accents
- Dialect preservation: speech AI for dialects such as Hakka and Teochew
Representative work: partnering with local contact centres and government service hotlines to deploy localized speech AI.
🤖 Relation to AI
Speech Lab tackles the same kind of problem SGNLP does: general-purpose speech AI breaks down in the Singapore context.
Commercial ASR systems (OpenAI Whisper, Google Speech-to-Text, etc.) show noticeably higher error rates on Singlish and code-switched speech. Speech Lab's localized models fill this gap.
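The accuracy gap here is usually quantified with word error rate (WER), the standard ASR metric. A minimal sketch of the computation — the Singlish transcripts below are invented illustrations, not real evaluation data:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions)
    divided by the number of words in the reference transcript,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: a generic model mishears the Malay loanword
# "makan" as "mark on" -- one substitution plus one insertion.
reference = "can you makan already or not"
generic   = "can you mark on already or not"
print(f"WER: {wer(reference, generic):.3f}")  # 2 errors / 6 words ≈ 0.333
```

A localized model that recognizes "makan" correctly would score a lower WER on the same clip, which is exactly the gap Speech Lab's models target.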
🇸🇬 Relation to Singapore
Speech Lab is the speech-AI counterpart to Singapore's "language sovereignty" narrative.
Across the seven transmission levers:
- Lever 3 (Industry Adoption): rolling out speech AI in local customer service and government services
- Lever 5 (Government Self-Use): voice-enabling multilingual government service delivery
Take: speech AI is one of the most direct landing points for Singapore AI — customer service, public services, and healthcare all need it. Speech Lab's existence means these scenarios get AI that "understands how Singaporeans actually speak."