January 31, 2026

Marking the Close of Better.sg’s AI/ML Mentorship Programme (2nd Edition)

By Keni Lim and Audrey Tim

Six months, ten sessions, and more than a hundred mentors and mentees later, Better.sg’s AI/ML Mentorship Programme wrapped up its second run with a closing ceremony on 22 January 2026.

The programme pairs students and early‑career professionals with seasoned AI/ML practitioners, giving mentees opportunities to gain hands-on experience, sharpen their critical thinking, and build projects that address real-world problems. The closing event celebrated the community’s efforts and distilled practical lessons for anyone working with AI today and looking to make an impact for social good.

Responsible AI as an Operating Model, Not a Checkbox

“Responsible AI isn’t a checkbox; it’s an operating model.” - Daniel Lim, Meta

The evening opened with a keynote by Daniel Lim, Head of Public Policy at Meta.

Daniel shared how responsible AI cannot be treated as an afterthought or a compliance exercise. Instead, he outlined how it needs to be embedded as an operating model, one that reshapes how teams design, build, evaluate, and deploy systems from the start.

He walked through Meta’s five core pillars that guide their AI development:

  • guarding against severe risks
  • protecting people’s privacy
  • ensuring transparency and user control
  • maintaining clear accountability
  • building inclusive AI that benefits everyone

Daniel also highlighted how AI is actively reshaping traditional trade-offs between quality, cost, and speed, and what that means for businesses and builders alike.

How AI Is Repricing Software Work

“Software work is being repriced - less code, more thinking.” - Ned Lowe, MISSION+

“If you can’t explain why you trust an output, you’re not ready to deploy it.” - Shery Chan, Standard Chartered Bank

This was followed by a panel discussion moderated by Giuseppe Enriquez, Head of Strategy at Better.sg, featuring:

  • Ned Lowe, CTO & Co-Founder at MISSION+ (formerly CTO at Singlife)
  • Shery Chan, Director, AI Product Strategy & Adoption at Standard Chartered Bank

Ned offered a provocative perspective: as AI reduces the time required to write and generate code, the industry will increasingly reprice work away from “building” and toward framing, specifications, evaluation, and assurance. In other words, speed alone becomes less valuable, while clarity and judgment become the key differentiators.

Shery added a grounding counterpoint from enterprise adoption. While AI capabilities are advancing rapidly, she emphasised that AI raises the premium on human judgment rather than removing it. In regulated and high-stakes environments, teams must be able to explain why they trust an output, not just that a model produced it.

Together, the panel underscored a recurring theme: AI may automate tasks, but responsibility, reasoning, and trust remain deeply human.

Using LLMs Like a Practitioner, Not a Tourist

The final segment of the evening was a practical deep dive by Calvin Tan, Co-founding CTO at Pints.ai and an AI researcher by training.

Calvin unpacked why large language models have advanced so quickly, pointing to scaling laws and the impact of increased compute. More importantly, he shared hands-on techniques that practitioners can apply immediately, including:

  • prompting models to reason step by step
  • keeping prompts concise to avoid degraded performance
  • repeating critical instructions at key points in a prompt
  • treating prompt design and evaluation as an ongoing research process
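The techniques above can be sketched in a few lines of code. The snippet below is a minimal illustration, not Calvin’s actual workflow: `call_model` is a stub standing in for a real LLM API client, and the prompt wording and evaluation set are hypothetical. The point is the shape of the loop: a prompt builder that applies step-by-step reasoning, conciseness, and instruction repetition, plus a small scoring harness that treats prompt design as an experiment.

```python
# Illustrative sketch of the prompting techniques discussed.
# `call_model` is a placeholder; swap in a real LLM client in practice.

CRITICAL = "Answer with a single number only."


def build_prompt(question: str) -> str:
    """Apply the techniques: ask for step-by-step reasoning, keep the
    prompt concise, and repeat the critical instruction at the end."""
    return (
        f"{CRITICAL}\n"
        f"Question: {question}\n"
        "Think step by step, then give the final answer.\n"
        f"Remember: {CRITICAL}"
    )


def call_model(prompt: str) -> str:
    # Stub standing in for a real model call.
    return "42"


def evaluate(cases: list[tuple[str, str]]) -> float:
    """Treat prompt design as research: score a prompt variant against
    a small labelled set so variants can be compared systematically."""
    correct = sum(
        call_model(build_prompt(q)) == expected for q, expected in cases
    )
    return correct / len(cases)
```

Running `evaluate` over the same labelled cases for each prompt variant turns “clever tricks” into measurable comparisons, which is the disciplined-evaluation habit Calvin advocated.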

His message was clear: effective use of LLMs is less about clever tricks and more about systematic experimentation and disciplined evaluation.

Closing One Chapter, Carrying the Lessons Forward

Three takeaways: responsible AI isn’t a checkbox; software work is being repriced; and human judgment still matters.

- Responsible AI isn’t a checkbox - it’s an operating model. Daniel Lim (Meta) shared how guarding against severe risks, protecting privacy, ensuring transparency, maintaining accountability and building inclusive AI should shape every project.

- AI is repricing software work. On a panel with Ned Lowe (MISSION+) and Shery Chan (Standard Chartered), we heard how less time will be spent typing code and more on framing problems, writing clear specs and evaluating outputs. Shery reminded us that AI doesn’t replace judgment; it raises the premium on it.

- Use LLMs like a practitioner, not a tourist. Calvin Tan (Pints.ai) urged builders to prompt models step‑by‑step, keep prompts tight, repeat critical instructions and treat prompt design as a research process.

As this chapter closes, the message is clear: responsible AI requires intention, rigour, and human judgement. Want to be part of the next edition? Follow Better.sg for updates.  

Thank you to our partner Meta, and organisers Giuseppe Enriquez and Madeleine Koh, for making this programme possible.