There are articles all over the Internet suggesting that AI will likely overtake humans because of its superior intelligence. But as an Adjunct Lecturer teaching the next generation of our workforce, I see a very different, more troubling picture. In fact, I’m very, very concerned.
AI is not replacing people because it’s too smart – it’s replacing them because too many (young) people are getting (very) lazy.
Struggle Cultivates Deep Thinking
We’ve entered an era where students and professionals alike can summon AI to write essays, generate code, answer technical questions, and even prepare reports with minimal input. I’m not gonna lie – I used ChatGPT to help write this very article. These tools are undeniably useful.
But instead of being used to deepen understanding or accelerate learning, AI tools are too often being used to bypass the thinking process altogether.
In my classes, I’ve noticed a sharp decline in students’ ability to reason through a problem. When presented with a coding exercise or a systems design question, many instinctively turn to ChatGPT or similar tools – not as a thinking partner, but as a crutch. They copy, paste, submit, and move on.
The troubling part isn’t the use of AI itself – I advocate for the responsible use of these tools. The problem is the mindset shift. Students no longer struggle with problems; they outsource the struggle. And in doing so, they miss the critical phase where actual learning occurs.
A Systemic Problem
This habit of mental offloading isn’t just a student issue. It’s a consequence of how we design our assessments, our learning environments, and our expectations.
Many computer science courses today rely heavily on coursework and take-home assignments – an approach that worked well in the past but is now easily gamed with AI assistance. If we assess output without scrutinising the process, we invite this behaviour. We’re telling students: “We care that it’s done, not how you did it.”
So naturally, they’ll take the fastest (ahem, laziest) route!
Rethinking Assessment in the Age of AI
We need to rethink how we teach and assess in AI-enabled classrooms. Here are a few ideas that I believe must become mainstream, especially in coding and technical disciplines:
1 – Reverting to Closed-Book Assessments
We need to bring back exam-style assessments. Closed-book exams and practical coding tests help differentiate between those who’ve genuinely understood the material and those who’ve coasted on generated output.
2 – Live Presentations and Walkthroughs
More emphasis should be placed on students explaining their thought process aloud – through live code reviews, technical walkthroughs, or project demos. If they can’t articulate why they chose a certain algorithm or how they structured their app, they probably didn’t understand it.
3 – Practice Testing and Distributed Practice
Rather than one or two big assignments, we need more frequent, lower-stakes practice tests spread out over time. This supports long-term retention and builds foundational understanding. Students should be repeatedly exposed to problems in slightly varied forms to encourage generalisation of concepts.
However, it’s important to bear in mind that this approach places a heavier workload on teachers.
4 – Focus on Problem Formulation
We should assess students’ ability to ask good questions, define the problem clearly, and justify trade-offs. These are things AI tools cannot do without human guidance – and skills that remain essential in professional engineering environments.
Laziness is Human Nature
AI encourages the human tendency to avoid the hard work of thinking. If we’re not careful, we’re going to raise a generation of engineers who can prompt tools but can’t think critically, debug effectively, or innovate independently.
The most valuable engineers, designers, and analysts in the future will not be those who blindly use AI, but those who know when to trust it, when to doubt it, and how to surpass it.