AI Industry & Applications · 2025-06-04 · 12:00
Introducing the LLM Application Developer Programme (LADP)
In Brief
The AISG-SGTech LADP programme helps enterprises accelerate LLM application deployment, covering core skills including prompt engineering.
Key Takeaways
- McKinsey: 65% of organisations now use generative AI in at least one function, up from 33% a year earlier.
- LADP is a 4-month AISG programme: 4 weeks of deep skilling plus 12 weeks building a prototype, about 10 hours per week.
- Teams of 2-4 work on a real company problem with internal data; RAG and guardrails ground LLM outputs in source documents.
- LADP has worked with over 40 organisations across education, finance, IT, legal, tourism and manufacturing.
Summary
AISG engineer Maldri walks through LADP: a four-month structured programme that delivers two outcomes — a working LLM application aligned to a real business problem and an upskilled team ready to build the next one. The curriculum splits into a four-week deep-skilling phase covering NLP fundamentals, transformer internals, prompt engineering from zero-shot and few-shot to chain-of-thought, agents, RAG pipelines, plus ethics and governance. The twelve-week project phase runs at about ten hours per week, opens with three intensive workshops at AISG, and continues with regular mentoring by practising AI engineers up to a week-16 demo.
Teams stay between two and four members, sponsored by their company with a real use case and direct access to internal data. Both first-time coders and seasoned engineers are welcome. Two prior projects illustrate scope: a utility and energy company's NextGen Simulator generates fresh cyber-physical incident scenarios on demand and scores trainee response plans against a curated vector store; an electronics retailer's Insight Stream uses an aggregation agent to pull sales and inventory data, picks the day's KPIs, then has a second LLM write a plain-English narrative that lands straight in Slack.
LADP has worked with more than 40 organisations across education, finance, IT, legal, tourism and manufacturing, with several returning for multiple projects. Eligible participants receive substantial AISG funding and may qualify for additional Career Conversion Programme subsidies. Details at aap.sg/adp.
Full transcript
Caption language: en · Fetched: 2026-05-02
Generative AI is no longer experimental. It is already reshaping day-to-day operations across industries. A recent McKinsey survey found that 65% of organizations now use generative AI in at least one business function, up from 33% the year before. Companies that turn large language model capabilities into dependable, production-grade tools over the next 12 months will set the standard. The LLM Application Developer Programme (LADP) offers a structured four-month pathway that delivers two outcomes: a working LLM-powered solution aligned to your business problem, and a team equipped to build the next one. I'm Maldri, an AI engineer at AI Singapore and the lead engineer for this very interesting program, the LLM Application Developer Programme.
Through this program, my team and I work with a diverse range of organizations to accelerate their adoption of generative AI and large language models in their business workflows, as well as train their staff in the development of such LLM applications. Today, I'll be sharing more about this program with you. First, a quick word on AI Singapore. As Singapore's national AI program, we build AI capabilities across five pillars. LADP sits in the AI innovation pillar, helping companies adopt AI through talent development programs, AI standards and industry-focused initiatives. In LADP, you will tap the full power of large language models. LLMs are a core technology driving today's generative AI. What exactly is an LLM?
LLMs are deep learning models with billions of parameters, trained on a vast mix of data: open-source code, news and social media posts, countless pages from the open internet, and entire books. By absorbing patterns across all that text, a model learns to understand context, reason over information, and generate fluent responses on demand. In LADP, participants learn to leverage LLMs through basic to advanced prompt engineering techniques. Prompt engineering is the practice of crafting and optimizing input prompts to elicit the desired behavior from LLMs. In other words, it's simply telling the model exactly what you want, like a teacher guiding a student. Each instruction is a prompt. Add a few labeled examples and you are doing few-shot prompting.
It's the fastest way to steer an LLM for any task, from classifying customer emails to extracting key clauses in legal contracts. On screen, you'll see three example pairs guiding the model. The fourth line shows it applying the pattern to a brand-new phrase. Designing precise prompts at any complexity level is exactly what participants master inside LADP. Large language models do have their limitations, and even the best models sometimes hallucinate: they sound confident, but the facts are wrong. Here's a quick example. Ask a general LLM how to apply for leave in your HR portal. Without access to that private system and its knowledge, it invents a CPF workflow that doesn't exist. In LADP, we solve this by grounding the model in your own company's data using retrieval-augmented generation and guardrails, so answers stay accurate, relevant, and trusted.
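To make the on-screen pattern concrete, here is a minimal sketch of assembling a few-shot prompt: three labeled example pairs guide the model, and the fourth line is the new input the model completes. The task (sentiment triage of customer emails), the example texts, and the helper name are illustrative assumptions, not LADP course material.

```python
# Illustrative few-shot prompt construction. The task and examples are
# made up for this sketch; a real deployment would use your own data.

EXAMPLES = [
    ("The delivery arrived two weeks late.", "complaint"),
    ("Thanks, the replacement works perfectly!", "praise"),
    ("How do I update my billing address?", "question"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Assemble instruction + labeled examples + the new, unlabeled case."""
    lines = ["Classify each customer email as complaint, praise, or question."]
    for text, label in EXAMPLES:
        lines.append(f"Email: {text}\nLabel: {label}")
    lines.append(f"Email: {new_input}\nLabel:")  # the model completes this line
    return "\n\n".join(lines)

print(build_few_shot_prompt("My order never showed up and nobody replies."))
```

The same scaffold steers the model toward any labeling task simply by swapping the instruction and the example pairs.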
From our conversations with companies, two hurdles keep coming up. First, skill gaps: teams are eager yet short on practical, hands-on generative AI experience. Second, trust and output reliability: leaders need proof their LLM applications won't hallucinate and mislead. LADP removes both barriers. Over four months, you and your team apply state-of-the-art LLM techniques to your data, mentored by our AI engineers, and leave with a reliable, deployable prototype. LADP's curriculum is delivered in two phases over a total of 16 weeks. The first phase is the deep-skilling phase: four weeks in which you learn the theoretical foundations of LLMs and related tools and techniques. In the second phase, you build your envisioned LLM-powered application to solve your company's own problem statement over a period of 12 weeks.
During the deep-skilling phase, we provide e-learning materials to all participants; working through them takes around 8 hours of self-learning in total. We start by grounding you in NLP fundamentals and the inner workings of modern large language models. Next, you'll master prompt engineering, from zero-shot and few-shot to chain-of-thought and more advanced techniques for controlling model outputs with precision. Then we move to agents, introducing ReAct and building retrieval-augmented generation pipelines so your LLM can reason with external tools and data. We wrap up deep skilling with ethics and governance, equipping you to build LLM solutions that are safe, fair, and responsible. After the deep-skilling phase comes the 12-week project phase. During the project phase, plan on up to 10 hours a week.
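A retrieval-augmented generation pipeline like the ones covered in the curriculum can be sketched in a few lines. This toy version substitutes bag-of-words overlap for a real vector store, and the corpus, function names, and guardrail wording are all assumptions made for illustration.

```python
# Toy RAG sketch: retrieve the most relevant document, then ground the
# prompt in it. Word overlap stands in for embedding similarity here.
from collections import Counter

CORPUS = {
    "leave_policy": "Staff apply for annual leave through the HR portal under My Requests.",
    "expense_policy": "Expense claims must be filed within 30 days with receipts attached.",
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of words shared between query and doc."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str) -> str:
    """Return the highest-scoring document for the query."""
    return max(CORPUS.values(), key=lambda doc: score(query, doc))

def grounded_prompt(query: str) -> str:
    """Build a prompt that pins the model to retrieved context (a simple guardrail)."""
    return (
        "Answer ONLY using the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context: {retrieve(query)}\nQuestion: {query}"
    )

print(grounded_prompt("How do I apply for leave?"))
```

Grounding the model this way is what keeps the earlier leave-application example from turning into an invented workflow: the model is instructed to answer only from retrieved company documents.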
That's 120 hours in total dedicated to building your LLM-powered application. You use that time for self-directed tasks: prepping data, researching, designing prompts, coding, testing, and iterating. At the start of the project phase, we host you at AI Singapore for three intensive knowledge-transfer workshops. Think of these sessions as deep dives: new techniques, live coding, and immediate feedback on how to plug each lesson into your own prototypes. For the majority of the project phase, you'll meet your assigned AISG mentors, who are practicing AI engineers, for multiple scheduled consultations, either online or offline. We review your code, troubleshoot roadblocks, and adjust scope so the project stays on track. In week 16, each team demos a working LLM-powered application to the AISG team, and you can also invite your company stakeholders.
It's a showcase of real impact: a live walkthrough, key metrics, lessons learned, and the roadmap for taking your solution into eventual deployment. We keep each team at two to four participants. Two is the minimum to divide tasks and keep momentum; four is the ceiling so every voice is heard and decisions stay quick. A small, agile unit delivers faster iterations, exactly what rapid generative AI prototyping needs. Every LADP team is company-sponsored with a business use case. That means you'll be working on a real business problem you care about, with direct access to your internal company data. Because the program is part-time and designed for working professionals, you continue your day job while you build the prototype. You can be anywhere from a first-time coder to a seasoned software engineer and still join LADP.
Our mentoring adapts to different technical capabilities, so everyone contributes to the project and everyone upskills. It's time to move beyond the idea of LLMs as simple chatbots. Think of them as reasoning engines that automate business workflows and multiply your team's output tenfold. I'll highlight just two LLM-powered applications built by previous LADP intakes that show how LLMs can transform real-world business workflows. First, an LLM-powered next-generation simulator that lets a company from the utilities and energy industry train staff on incident scenarios at scale. Second, an LLM-powered insight stream for electronics and retail reporting that turns raw sales and inventory data into clear action items in minutes. The utilities and energy company needed faster, scalable training for complex cyber-physical incident response.
Yet training still depended on thick PDF manuals and one-to-one coaching. The result was slow, inconsistent drills and knowledge trapped in silos. Through LADP, the team built NextGen Simulator, an LLM-powered training platform that launches realistic cyber-physical threat scenarios on demand and scores response plans instantly, so staff can drill at scale. Under the hood, a scenario-generator LLM spins up a brand-new incident narrative every time a drill begins. A second model, the response-plan LLM, scores each training strategy in real time and returns targeted feedback. Both models tap a curated vector store of past threats and incident logs. With retrieval-augmented generation, they pull only the context needed for the current scenario, keeping every exercise dynamic, relevant, and accurate. Let's look at Insight Stream next.
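Before moving on, the two-model NextGen flow just described can be sketched structurally. The stub "models", the store contents, the keyword-based scoring, and all names here are assumptions made for illustration; the real system runs LLMs over a curated vector store rather than these placeholders.

```python
# Structural sketch of the NextGen pipeline: retrieve past-incident context,
# generate a drill scenario from it, then score a trainee's response plan.
import random

INCIDENT_STORE = [
    "2023: SCADA outage at substation B traced to phishing-delivered malware.",
    "2024: Ransomware locked historian servers; restored from offline backups.",
]

def retrieve_context(seed: int) -> str:
    """Stand-in for vector-store retrieval: pick one past incident as context."""
    return random.Random(seed).choice(INCIDENT_STORE)

def scenario_generator_llm(context: str) -> str:
    """Stub for the scenario-generator LLM: wrap context in a fresh drill brief."""
    return f"Drill scenario (drawing on: {context}) Respond within 30 minutes."

def response_plan_llm(plan: str, context: str) -> dict:
    """Stub scorer; a real LLM would grade the plan against the retrieved context."""
    keywords = ["isolate", "backup", "report"]
    hits = [k for k in keywords if k in plan.lower()]
    return {"score": len(hits), "feedback": f"Covered: {hits or 'none'}"}

ctx = retrieve_context(seed=42)
print(scenario_generator_llm(ctx))
print(response_plan_llm("Isolate affected hosts and restore from backup.", ctx))
```

The point of the sketch is the division of labor: one model generates, a second model evaluates, and both stay grounded in the same retrieved context.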
The electronics retailer needed instant narrative reporting to steer day-to-day decisions. Yet analysts still sifted through raw sales and inventory data by hand, then spent more time crafting stakeholder reports. This delayed action and let critical trends slip by. Here's how this team built Insight Stream. An aggregation LLM agent, an LLM wired to data-cleaning and metric-calculation tools, pulls fresh figures from the sales and inventory vector store, decides which KPIs matter today, and packages them. That bundle goes to the insight-engine LLM, which polishes the numbers into a plain-English narrative with clear action points. The finished narrative lands instantly in the team's Slack channel, so decision-makers can see the insight the moment it's created.
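The Insight Stream flow can be sketched in the same spirit: an aggregation step selects the KPIs, a second (stubbed) narrative model turns them into plain English, and the result is posted. The figures, the low-stock threshold, and the fake Slack call are illustrative assumptions, not the retailer's actual data or integration.

```python
# Illustrative Insight Stream sketch: aggregate KPIs, narrate, deliver.

SALES = {"laptops": 120, "phones": 340}       # made-up daily figures
INVENTORY = {"laptops": 15, "phones": 800}

def aggregation_agent() -> dict:
    """Select today's KPIs: total units sold plus any low-stock alerts."""
    kpis = {"total_units_sold": sum(SALES.values())}
    kpis["low_stock"] = [item for item, qty in INVENTORY.items() if qty < 50]
    return kpis

def insight_engine_llm(kpis: dict) -> str:
    """Stub for the narrative LLM: a template stands in for a model call."""
    msg = f"Sold {kpis['total_units_sold']} units today."
    if kpis["low_stock"]:
        msg += f" Restock soon: {', '.join(kpis['low_stock'])}."
    return msg

def post_to_slack(message: str) -> None:
    print(f"[#daily-insights] {message}")  # stand-in for a Slack webhook call

post_to_slack(insight_engine_llm(aggregation_agent()))
```

Swapping the template stub for a real LLM call and the print for a webhook gives the shape of the production flow: numbers in, narrative with action points out.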
Across our intakes, we've worked with more than 40 organizations, some returning for multiple projects, to embed generative AI and LLM-powered applications in their day-to-day businesses. We've had the privilege of helping teams in education, finance, IT, legal, tourism, manufacturing, and more, guiding each one from that first spark of an idea all the way to a live solution that they now rely on every day. Seeing those systems in action is why we do this work, and it's proof that a small, focused team can bring generative AI into even the most complex operations. What you and your team take away from LADP is an LLM-powered application built around your business problem statement, an upskilled team that feels confident working with LLM tools and techniques, and a transferable framework you can pull out for the next project.
Before we wrap up, I'd like to be fully transparent and display the program fees for your reference. Eligible participants receive significant AISG funding, and many qualify for additional Career Conversion Programme subsidies. Our team will guide you through the funding options during our first meeting. If all of this sounds valuable to you and your team, I recommend visiting aap.sg/adp for more details. You can also consider joining our live Zoom webinar sessions, where we'll show more examples and answer your questions live. Thank you for spending this time with me here today. I'd love the chance to hear about your own business challenges and explore how we can tackle them together. Hope to see you soon.
Related Videos
HSC Pipeline Engineering: building an engineering knowledge base with RAG AI
2026-03-20 · HSC Pipeline Engineering · 05:00
Through the AISG LADP programme, HSC Pipeline built a locally deployed RAG AI knowledge base, breaking down engineering-knowledge silos and improving decision-making efficiency.
Ong Ye Kung on AI, genetic screening and preparing for a super-aged Singapore
2026-03-04 · Ong Ye Kung · 30:36
Health Minister Ong Ye Kung talks through AI applications in healthcare and Singapore's strategy for a super-aged society.
YTL PowerSeraya: LLMs power electricity market rule analysis
2026-02-20 · YTL PowerSeraya · 05:00
Singapore power company YTL PowerSeraya used LADP to build an LLM specialised in electricity market rules, enabling automated report analysis and rule queries.
Skybots: from RPA to LLM-powered customer service
2026-01-15 · Skybots · 05:00
Accounting-tech firm Skybots used LADP to upgrade RPA into LLM-powered customer service, handling complex accounting workflow queries.
Josephine Teo on AI's role in SMEs, education and society
2025-11-19 · Josephine Teo · 06:17
Josephine Teo on how AI can help SMEs transform, reshape education, and reach every part of society.
Josephine Teo on how AI uplifts Singapore's financial services industry
2025-10-06 · Josephine Teo · 09:49
Josephine Teo discusses prospects for AI applications in Singapore's financial services and the regulatory balance required.