Blog

  • Singapore’s TFR – is the Education System really the issue?

    This video is extremely on point. I am heartened to hear that the Minister is well aware of all the issues.

    It is important to distinguish between why people don’t want kids at all, versus why they don’t want MORE kids. I think the concerns we hear from parents about the education system are one of the key reasons why many stop at 1 or 2. Of course, housing size, general affordability, and access to a private vehicle will also affect these decisions.

    People not wanting to have a child altogether is — as the Minister points out — often a lifestyle choice. Why give up a nice (and tidy) house just for two, a car (not a minivan), annual travel instead of tuition fees, or the ability to easily emigrate for job growth? These are all likely reasons some avoid having children entirely.

    I personally didn’t think about it that much; I just “went with the flow”. But I have to be very frank that if it wasn’t for my wife — and now my children — I would probably have left the country for career opportunities when I was younger.

    That said, the points on the education system really hit home. Perhaps one thing many do not know is that the PSLE is a one-of-a-kind system in the world. No other country puts children at ages 11-12 through a nationwide high-stakes exam. Malaysia had something similar and did away with it some years ago. In most parts of the world, students automatically progress to middle or secondary school without an exam. Typically, entrance exams only apply if you want to get into a prestigious or private school.

    The PSLE was devised in early post-independence Singapore because we didn’t have the capacity to provide every citizen with a secondary education, and because the nation’s overall education standard was low; a blunt filter was necessary.

    But today, after decades of tweaks and adjustments, the system is probably due for an overhaul. The idea that we should do away with the PSLE, or push it further back, is starting to make more sense. Perhaps regular, smaller exams should be used as diagnostic tools, rather than a singular high-stakes selection/placement exam like the PSLE. Subject-based banding could then happen subtly (as it does now at Primary 4) at every stage, perhaps every year or two, all the way to JC.

    The video also mentions a few important points about the post-AI world, much like the post-Internet world I grew up in: are rote memorisation and certain knowledge skills still desirable or practical? How do we teach our next generation creativity, cross-domain synthesis, learning to learn, judgment, morals — all the “human” things that we shouldn’t be outsourcing to AI?

    Further reading: History of the Singapore Primary School Leaving Examination

  • Open Letter to Ministry of Transport to implement SimplyGo Express Travel Card

    Dear <REDACTED>

    I hope this email finds you well. I would like to ask MOT why SimplyGo isn’t available as an Express Travel Card on Apple’s iOS.

    If you use an iPhone and have been to countries such as Japan, Hong Kong, China, Korea, or even the USA, you will know that their travel cards are available in Apple Wallet, which is very convenient.

    The Express Travel Card feature allows a card to be used without authentication (Face ID), and it can even be used when the phone’s battery is flat. It’s designed for fast entry and exit, so users can simply tap their phone without ever having to open the Wallet app and authenticate before presenting the card to the reader.

    In addition, having the travel card on the phone reduces the need for users to buy a physical card or download an additional app. It’s not only more environmentally friendly – it will also be much more convenient for tourists.

    I hope MOT will consider asking SimplyGo to implement the Express Travel Card on Apple’s iOS. Singapore prides itself as one of the top places in the world for public transportation, and we should keep up with the times.

    For more information:
    https://support.apple.com/en-sg/105123

    Best Regards,
    Justin Lee

  • AI is replacing us because we’re getting lazier

    There are articles all over the Internet suggesting that AI will likely overtake humans because of its superior intelligence. But as an Adjunct Lecturer teaching the next generation of our workforce, I see a very different, more troubling picture. In fact, I’m very, very concerned.

    AI is not replacing people because it’s too smart – it is replacing them because too many (young) people are getting (very) lazy.

    Struggles Cultivate Deep Thinking

    We’ve entered an era where students and professionals alike can summon AI to write essays, generate code, answer technical questions, and even prepare reports with minimal input. I won’t lie – I used ChatGPT to assist in writing this article. These tools are undeniably useful.

    But instead of being used to deepen understanding or accelerate learning, AI tools are too often being used to bypass the thinking process altogether.

    In my classes, I’ve noticed a sharp decline in students’ ability to reason through a problem. When presented with a coding exercise or a systems design question, many instinctively turn to ChatGPT or similar tools not as a partner, but as a crutch. They copy, paste, submit, and move on.

    The troubling part isn’t the use of AI. I advocate for responsible use of tools. The problem is the mindset shift. Students no longer struggle with problems; they are outsourcing the struggle. And in doing so, they’re missing the critical phase where actual learning occurs.

    A Systemic Problem

    This habit of mental offloading isn’t just a student issue. It’s a consequence of how we design our assessments, our learning environments, and our expectations.

    Many computer science courses today rely heavily on coursework and take-home assignments, which were great in the past – but today are easily completed with AI assistance. If we’re assessing output without scrutinising the process, we’re inviting this behaviour. We’re telling students: “We care that it’s done, not how you did it.”

    So naturally, they’ll take the fastest (ahem, laziest) route!

    Rethinking Assessment in the Age of AI

    We need to rethink how we teach and assess in AI-enabled classrooms. Here are a few ideas that I believe must become mainstream, especially in coding and technical disciplines:

    1 – Reverting to Closed-Book Assessments

    We need to bring back exam-style assessments. Closed-book exams and practical coding tests can help differentiate between those who’ve genuinely understood material and those who’ve coasted on generated output.

    2 – Live Presentations and Walkthroughs

    More emphasis should be placed on students explaining their thought process aloud – through live code reviews, technical walkthroughs, or project demos. If they can’t articulate why they chose a certain algorithm or how they structured their app, they probably didn’t understand it.

    3 – Practice Testing and Distributed Practice

    Rather than one or two big assignments, we need more frequent, lower-stakes practice tests spread out over time. This supports long-term retention and builds foundational understanding. Students should be repeatedly exposed to problems in slightly varied forms to encourage generalisation of concepts.

    However, it is important to bear in mind that this approach also places more workload on teachers.
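
    The scheduling side of distributed practice can even be automated. Below is a minimal sketch in Python (hypothetical function names, not a specific named spaced-repetition algorithm) that spreads review sessions for a topic across a term at roughly doubling intervals:

    ```python
    from datetime import date, timedelta

    # A minimal sketch: generate review dates for one topic, with each gap
    # doubling (2, 4, 8, ... days) so practice is spread out over the term.
    def practice_schedule(start, sessions=5, first_gap_days=2):
        gap = first_gap_days
        current = start
        dates = []
        for _ in range(sessions):
            current = current + timedelta(days=gap)
            dates.append(current)
            gap *= 2  # widen the spacing after each successful review
        return dates

    # Example: a topic first taught on 6 Jan 2025.
    for d in practice_schedule(date(2025, 1, 6)):
        print(d.isoformat())
    ```

    Each session could then present the same concept in a slightly varied form, which is the generalisation effect described above.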

    4 – Focus on Problem Formulation

    We should assess the ability to ask good questions, define the problem clearly, and justify trade-offs. These are things AI tools cannot do without human guidance, and they are skills that remain essential in professional engineering environments.

    Laziness is Human Nature

    AI encourages the human tendency to avoid the hard work of thinking. If we’re not careful, we’re going to raise a generation of engineers who can prompt tools but can’t think critically, debug effectively, or innovate independently.

    The most valuable engineers, designers, and analysts in the future will not be those who blindly use AI, but those who know when to trust it, when to doubt it, and how to surpass it.

  • Why Government Tech Tenders in Singapore Need a Revamp

    I recently spent a considerable amount of time working with a government client to explore an app development project. We went deep into understanding the problem, brainstorming possible solutions, identifying constraints, and scoping out practical approaches. But just as we were starting to make progress, the project got yanked into the familiar black hole of bureaucracy: a rigid, poorly-defined tender was issued, seemingly designed more to tick boxes than to solve the actual problem.

    This isn’t the first time. But each time it happens, it’s disheartening.

    A Process That Punishes Innovation

    These government procurement processes are built to be transparent and fair, and rightly so. That works perfectly fine for buying a batch of laptops or constructing a basketball court. But when it comes to technology projects, the structure of these tenders is fundamentally broken. The system rewards those who can respond to overly prescriptive, unrealistic tender documents, not those who best understand or can solve the underlying problem.

    Too often, these tenders are written after the agency has already gone out and sought informal advice – sometimes even prototyped or trialed solutions – but instead of incorporating these learnings into an agile, iterative approach, they fall back on old-school waterfall RFPs.

    The Problem with Fixed Requirements

    Technology, unlike construction or hardware supply, is not a domain where you can define everything upfront. The entire software industry has moved away from rigid requirements-gathering for good reason: we don’t always know the end solution, but we do know the problem we’re trying to solve.

    That’s why Agile practices have become the norm – because they accept uncertainty as a reality and focus on iterative progress toward a shared goal. Yet in the tendering process, government agencies are forced to cast vague assumptions into stone, creating a scope that no vendor can fulfill without overcharging, over-engineering, or outright guessing.

    The result? Vendors quote sky-high prices to protect themselves from the ambiguity – or worse, they underquote, win the bid, and the project collapses mid-flight due to unrealistic expectations, scope creep, or misalignment between the stated requirements and actual user needs.

    Focusing on Process Instead of Problems

    Perhaps the most frustrating aspect is the focus on process over outcomes. So many tenders are framed around implementation timelines, documentation deliverables, and checkbox compliance without clearly articulating the real-world pain points or desired business outcomes.

    We should be seeing tenders that begin with:

    • “Here’s the problem we are trying to solve.”
    • “Here are the constraints and stakeholders.”
    • “We want your expertise in helping us figure out the best solution.”

    Instead, what we get is a 20-page legal document and:

    • “Build X, Y, and Z with ABC tech stack.”
    • “Deliver within six months.”
    • “Provide five user manuals and a training deck.”

    A Call for Change

    Singapore has bold ambitions in tech: Smart Nation, GovTech, digital transformation across the public sector. But these ambitions are being held hostage by legacy procurement practices that actively undermine the principles of good software design and delivery.

    What we need is a new paradigm for technology procurement, one that:

    • Starts with problem statements, not rigid feature lists
    • Embraces agile, iterative delivery models
    • Allows for co-creation with vendors, not just transactional handoffs
    • Encourages value-based evaluation, not just lowest cost or most compliant

    Let’s stop punishing good faith collaboration with broken processes. Let’s start solving real problems together.

    Because if we keep doing this the old way, we’re not just wasting vendors’ time – we’re wasting taxpayers’ money.

  • What is Good Code?

    Many junior to mid-level engineers have misconceptions about what “good code” truly means. Unfortunately, these misunderstandings are often reinforced by flawed hiring practices. Solving LeetCode problems, parroting SOLID principles, or memorizing framework features might showcase technical knowledge, but none of these inherently make someone a good Software Engineer.

    Mess is Everywhere

    Throughout my career, I’ve encountered my fair share of messy codebases. An example would be functions/methods that stretched over a thousand lines of business logic—an unmaintainable monstrosity. Such code is a hallmark of inexperienced teams and often plagues poorly managed software outsourcing projects. I’ve probably written my share of such code when younger too.

    Seasoned engineers would say that this is “common.” Even at tech giants like Google or Microsoft, codebases aren’t pristine. Messes are inevitable, and documentation can also be inconsistent.

    Still, there’s a difference between an unavoidable mess and a completely avoidable disaster.

    If It Ain’t Broke, Don’t Touch It?

    A few years ago, I was troubleshooting a particularly stubborn issue with another team. The tech lead said that adding more code to an already bloated, thousand-line controller method was risky. He wasn’t wrong—it could take days to figure out where to make even minor changes, and the risk of breaking something was high. To add to the problem, the codebase did not have unit tests.

    But what happened next left me speechless.

    The Undocumented, Untraceable Code

    The tech lead decided to “solve” the issue by writing a standalone PHP script, completely outside the Laravel framework we were using. His justification? Frameworks were “too slow” and “too complicated.”

    His script lived in some random folder on the server, undocumented and untracked. Hours of debugging later, we stumbled upon it by sheer luck. And it wasn’t a one-off—we later found there were several such scripts scattered across the server, mostly undocumented and introducing untraceable logic into production.

    At that point, nobody cared about the quality of his algorithms (they were terrible, by the way) or whether his code followed SOLID principles (it didn’t). The real issue was far worse: he prioritized personal convenience over team collaboration. His decisions created a codebase that was not only a nightmare to maintain but also actively sabotaged the team’s ability to function effectively.

    Coding Beyond Yourself

    As software engineers, we don’t work in silos. Writing code that you alone can understand is easy. Writing code that a hundred others can maintain? That’s the real challenge.
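
    To make the contrast concrete, here is a toy sketch in Python (an invented checkout example, not from the codebase described above): the same logic written first as one tangled block, then as small, well-named functions that a teammate can read, review, and test in isolation.

    ```python
    # Hard to maintain: one block mixing validation, pricing, and discounting.
    # Scale this style up a thousand lines and you get the monstrosity above.
    def total_v1(items, member):
        t = 0
        for i in items:
            if i["qty"] <= 0:
                raise ValueError("bad qty")
            t += i["price"] * i["qty"]
        if member:
            t = t * 0.9
        return round(t, 2)

    # Easier to maintain: each step is named, small, and testable on its own.
    def validate(items):
        for item in items:
            if item["qty"] <= 0:
                raise ValueError(f"invalid quantity: {item['qty']}")

    def subtotal(items):
        return sum(item["price"] * item["qty"] for item in items)

    def apply_member_discount(amount, member, rate=0.10):
        return amount * (1 - rate) if member else amount

    def total_v2(items, member):
        validate(items)
        return round(apply_member_discount(subtotal(items), member), 2)

    cart = [{"price": 4.50, "qty": 2}, {"price": 10.00, "qty": 1}]
    print(total_v1(cart, True))  # 17.1
    print(total_v2(cart, True))  # 17.1 – same result, clearer structure
    ```

    Both versions compute the same answer; the difference is that the second one lets the next engineer change the discount rule without re-reading the whole function.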

    The example above is a cautionary tale of what not to do. Good engineering isn’t about showing off your technical prowess; it’s about making thoughtful decisions that benefit the team, ensure long-term maintainability, and foster a culture of collaboration.

  • Rethinking Technical Interviews: Lessons from My Experience

    Earlier this year, after being laid off, I went through several interviews for technical roles. These interviews often involved take-home tests, coding assignments, and live coding sessions. While I completed a few, I eventually started declining most of them, finding many to be time-consuming and, frankly, ineffective.

    The Limits of Coding Tests

    Coding tests can serve as a basic filter for entry-level positions, but their value diminishes when applied to senior-level roles. If you’re hiring a Senior Engineer with 10–20 years of experience, coding proficiency isn’t the primary skill to assess—especially in a world where AI tools like ChatGPT can handle many coding tasks faster and more efficiently.

    Instead, the focus should shift to evaluating Problem-Solving, Critical Thinking, Learning Aptitude, and Communication Skills—competencies that I find many interviews overlook. These are the skills that enable senior engineers to lead, adapt, and contribute meaningfully to a team.

    The Core Skills: Problem Solving, Critical Thinking, Learning, and Communication

    These skills apply to candidates across all experience levels. Over the years, I’ve hired many mid-career switchers, often with limited coding backgrounds. People ask how I gauge their suitability, and my approach is simple:

    • Assess their problem-solving ability.
    • Understand their interests and what excites them.
    • Observe the quality of their questions and how well they articulate their thoughts.

    While I do conduct technical screenings to ensure foundational competency, I avoid assigning time-wasting take-home tasks or algorithmic puzzles that don’t reflect real-world job demands.

    Navigating the Era of AI-Assisted Interviews

    The rise of AI tools this year has also transformed interviews. Candidates can now use AI dictation off-screen to assist with technical questions, making traditional coding tests even less reliable indicators of ability.

    To counter this, I focus on questions AI can’t answer effectively:

    • What are your hobbies?
    • What are you learning now, and why?
    • If you could explore something new tomorrow, what would it be?
    • What’s the most challenging or interesting project you’ve worked on?
    • How would you approach solving this real-world problem based on a scenario?

    These questions help reveal a candidate’s genuine interests, adaptability, and approach to problem-solving.

    The Rapid Pace of Technology

    Over my 20-year career, technology has evolved very quickly. I’ve worked with Turbo Pascal, Perl, Java, PHP, C, C#/.NET, Swift, Python, JavaScript, and countless frameworks, libraries, tools and operating systems. Every shift required adaptability and a willingness to learn.

    A person who can learn and adapt will thrive as technologies, tools, and frameworks continue to change.

    Final Thoughts

    Hiring the right people isn’t about filtering for a specific tech stack or testing for algorithmic skills your team may never need. It’s about finding individuals who can solve problems, adapt quickly, and communicate effectively. Those are the qualities that matter—and they’re what will drive your team forward.