Tag: programming

  • AI is replacing us because we’re getting lazier

    There are articles all over the Internet suggesting that AI will likely overtake humans because of its superior intelligence. But as an Adjunct Lecturer teaching the next generation of our workforce, I see a very different, more troubling picture. In fact, I’m very, very concerned.

    AI is not replacing people because it’s too smart – it is replacing them because too many (young) people are getting (very) lazy.

    Struggles Cultivate Deep Thinking

    We’ve entered an era where students and professionals alike can summon AI to write essays, generate code, answer technical questions, and even prepare reports with minimal input. I won’t lie – I used ChatGPT to assist in writing this very article. These tools are undeniably useful.

    But instead of being used to deepen understanding or accelerate learning, AI tools are too often being used to bypass the thinking process altogether.

    In my classes, I’ve noticed a sharp decline in students’ ability to reason through a problem. When presented with a coding exercise or a systems design question, many instinctively turn to ChatGPT or similar tools not as a partner, but as a crutch. They copy, paste, submit, and move on.

    The troubling part isn’t the use of AI. I advocate for responsible use of tools. The problem is the mindset shift. Students no longer struggle with problems; they are outsourcing the struggle. And in doing so, they’re missing the critical phase where actual learning occurs.

    A Systemic Problem

    This habit of mental offloading isn’t just a student issue. It’s a consequence of how we design our assessments, our learning environments, and our expectations.

    Many computer science courses today rely heavily on coursework and take-home assignments, which were great in the past – but today are easily completed with AI assistance. If we’re assessing output without scrutinising the process, we’re inviting this behaviour. We’re telling students: “We care that it’s done, not how you did it.”

    So naturally, they’ll take the fastest (ahem, laziest) route!

    Rethinking Assessment in the Age of AI

    We need to rethink how we teach and assess in AI-enabled classrooms. Here are a few ideas that I believe must become mainstream, especially in coding and technical disciplines:

    1 – Reverting to Closed-Book Assessments

    We need to bring back exam-style assessments. Closed-book exams and practical coding tests can help differentiate between those who’ve genuinely understood material and those who’ve coasted on generated output.

    2 – Live Presentations and Walkthroughs

    More emphasis should be placed on students explaining their thought process aloud – through live code reviews, technical walkthroughs, or project demos. If they can’t articulate why they chose a certain algorithm or how they structured their app, they probably didn’t understand it.

    3 – Practice Testing and Distributed Practice

    Rather than one or two big assignments, we need more frequent, lower-stakes practice tests spread out over time. This supports long-term retention and builds foundational understanding. Students should be repeatedly exposed to problems in slightly varied forms to encourage generalisation of concepts.

    However, it is important to bear in mind that this also places a heavier workload on teachers.

    4 – Focus on Problem Formulation

    We should assess the ability to ask good questions, define the problem clearly, and justify trade-offs. These are skills AI tools cannot exercise without human guidance, and they remain essential in professional engineering environments.

    Laziness is Human Nature

    AI encourages the human tendency to avoid the hard work of thinking. If we’re not careful, we’re going to raise a generation of engineers who can prompt tools but can’t think critically, debug effectively, or innovate independently.

    The most valuable engineers, designers, and analysts in the future will not be those who blindly use AI, but those who know when to trust it, when to doubt it, and how to surpass it.

  • What is Good Code?

    What is Good Code?

    Many junior to mid-level engineers have misconceptions about what “good code” truly means. Unfortunately, these misunderstandings are often reinforced by flawed hiring practices. LeetCode problems, parroting SOLID principles, or memorizing framework features might showcase technical knowledge, but they don’t inherently make someone a good Software Engineer.

    Mess is Everywhere

    Throughout my career, I’ve encountered my fair share of messy codebases. An example would be functions/methods that stretched over a thousand lines of business logic—an unmaintainable monstrosity. Such code is a hallmark of inexperienced teams and often plagues poorly managed software outsourcing projects. I’ve probably written my share of such code when I was younger too.

    Seasoned engineers would say that this is “common.” Even at tech giants like Google or Microsoft, codebases aren’t pristine. Messes are inevitable, and documentation can also be inconsistent.

    Still, there’s a difference between an unavoidable mess and a completely avoidable disaster.

    If It Ain’t Broke, Don’t Touch It?

    A few years ago, I was troubleshooting a particularly stubborn issue with another team. The tech lead said that adding more code to an already bloated, thousand-line controller method was risky. He wasn’t wrong—it could take days to figure out where to make even minor changes, and the risk of breaking something was high. To add to the problem, the codebase did not have unit tests.

    But what happened next left me speechless.

    The Undocumented, Untraceable Code

    The tech lead decided to “solve” the issue by writing a standalone PHP script, completely outside the Laravel framework we were using. His justification? Frameworks were “too slow” and “too complicated.”

    His script lived in some random folder on the server, undocumented and untracked. Hours of debugging later, we stumbled upon it by sheer luck. And it wasn’t a one-off—we later found there were several such scripts scattered across the server, mostly undocumented and introducing untraceable logic into production.

    At that point, nobody cared about the quality of his algorithms (they were terrible, by the way) or whether his code followed SOLID principles (it didn’t). The real issue was far worse: he prioritized personal convenience over team collaboration. His decisions created a codebase that was not only a nightmare to maintain but also actively sabotaged the team’s ability to function effectively.

    Coding Beyond Yourself

    As software engineers, we don’t work in silos. Writing code that you alone can understand is easy. Writing code that a hundred others can maintain? That’s the real challenge.

    The example above is a cautionary tale of what not to do. Good engineering isn’t about showing off your technical prowess; it’s about making thoughtful decisions that benefit the team, ensure long-term maintainability, and foster a culture of collaboration.

  • Rethinking Technical Interviews: Lessons from My Experience

    Rethinking Technical Interviews: Lessons from My Experience

    Earlier this year, after being laid off, I went through several interviews for technical roles. These interviews often involved take-home tests, coding assignments, and live coding sessions. While I completed a few, I eventually started declining most of them, finding many to be time-consuming and, frankly, ineffective.

    The Limits of Coding Tests

    Coding tests can serve as a basic filter for entry-level positions, but their value diminishes when applied to senior-level roles. If you’re hiring a Senior Engineer with 10–20 years of experience, coding proficiency isn’t the primary skill to assess—especially in a world where AI tools like ChatGPT can handle many coding tasks faster and more efficiently.

    Instead, the focus should shift to evaluating Problem-Solving, Critical Thinking, Learning Aptitude, and Communication Skills—competencies that I find many interviews overlook. These are the skills that enable senior engineers to lead, adapt, and contribute meaningfully to a team.

    The Core Skills: Problem Solving, Critical Thinking, Learning, and Communication

    These skills apply to candidates across all experience levels. Over the years, I’ve hired many mid-career switchers, often with limited coding backgrounds. People ask how I gauge their suitability, and my approach is simple:

    • Assess their problem-solving ability.
    • Understand their interests and what excites them.
    • Observe the quality of their questions and how well they articulate their thoughts.

    While I do conduct technical screenings to ensure foundational competency, I avoid assigning time-wasting take-home tasks or algorithmic puzzles that don’t reflect real-world job demands.

    Navigating the Era of AI-Assisted Interviews

    The rise of AI tools this year has also transformed interviews. Candidates can now use AI dictation off-screen to assist with technical questions, making traditional coding tests even less reliable indicators of ability.

    To counter this, I focus on questions AI can’t answer effectively:

    • What are your hobbies?
    • What are you learning now, and why?
    • If you could explore something new tomorrow, what would it be?
    • What’s the most challenging or interesting project you’ve worked on?
    • How would you approach solving this real-world problem based on a scenario?

    These questions help reveal a candidate’s genuine interests, adaptability, and approach to problem-solving.

    The Rapid Pace of Technology

    Over my 20-year career, technology has evolved very quickly. I’ve worked with Turbo Pascal, Perl, Java, PHP, C, C#/.NET, Swift, Python, JavaScript, and countless frameworks, libraries, tools, and operating systems. Every shift required adaptability and a willingness to learn.

    A person who can learn and adapt will thrive as technologies, tools, and frameworks continue to change.

    Final Thoughts

    Hiring the right people isn’t about filtering for a specific tech stack or testing for algorithmic skills your team may never need. It’s about finding individuals who can solve problems, adapt quickly, and communicate effectively. Those are the qualities that matter—and they’re what will drive your team forward.

  • Why Unit of Work is an anti-pattern but not Repository

    Why Unit of Work is an anti-pattern but not Repository

    Are Repository and Unit of Work (UoW) anti-patterns? In the .NET/C# world, it is often said that Repository/UoW is an anti-pattern and that one should use Entity Framework Core directly instead.

    But if it is an anti-pattern, why are people still using it? Even Mosh Hamedani – a respected YouTube coding trainer whom I follow – wrote about common mistakes when applying this pattern. Surely it must be popular.

    Purpose of the Repository/UoW Pattern

    Let’s talk about why this design pattern exists. It is actually a type of Adapter pattern and the primary use case is for separation of concerns between domain (or “business logic”) and infrastructure (or “data access”).

    The other motivation for using this pattern is to allow mocking repositories for unit testing.
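
    To make that motivation concrete, here is a minimal, hand-rolled sketch of the idea: a fake in-memory repository stands in for the real one so the domain logic can be unit tested without a database. All names (IUserRepository, UserService, etc.) are illustrative, not from any real codebase:

    ```csharp
    using System;
    using System.Collections.Generic;

    public class User
    {
      public int Id { get; set; }
      public string Name { get; set; }
    }

    public interface IUserRepository
    {
      User GetById(int id);
      void Update(User user);
    }

    // The fake: a dictionary stands in for the database.
    public class FakeUserRepository : IUserRepository
    {
      private readonly Dictionary<int, User> _store = new Dictionary<int, User>();
      public void Seed(User user) => _store[user.Id] = user;
      public User GetById(int id) => _store.TryGetValue(id, out var u) ? u : null;
      public void Update(User user) => _store[user.Id] = user;
    }

    // Domain logic depends only on the interface, so it never knows
    // whether it is talking to SQL Server, a CSV file, or this fake.
    public class UserService
    {
      private readonly IUserRepository _repo;
      public UserService(IUserRepository repo) => _repo = repo;

      public void Rename(int id, string newName)
      {
        var user = _repo.GetById(id);
        user.Name = newName;
        _repo.Update(user);
      }
    }

    public static class Program
    {
      public static void Main()
      {
        var repo = new FakeUserRepository();
        repo.Seed(new User { Id = 1, Name = "Alice" });
        new UserService(repo).Rename(1, "Bob");
        Console.WriteLine(repo.GetById(1).Name); // prints "Bob"
      }
    }
    ```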

    Must Repository go together with Unit of Work?

    It is also very common to see the Repository pattern used together with the Unit of Work (UoW) pattern. The UoW pattern is added to manage transactions – a feature of relational databases to perform ACID operations.

    However, there is increasing tolerance for eventual consistency and less atomicity; fully ACID operations are starting to matter less these days as software adopts the microservice architecture.

    So unless you are a bank, or dealing with critical financial data, transaction management – a feature specific to RDBMSes – might not be the kind of dependency you want to build your software around.

    Why UoW is sometimes an anti-pattern

    I’ll start by saying that everything depends on your use case, but for me – I tend to feel that Unit of Work (but not repository) is an anti-pattern.

    Let me explain why.

    You see, the repository interface can be described in its simplest form as an interface for read/write operations.

    // What a typical repository looks like
    public interface IMyRepository
    {
      MyObject GetById(int id); // Read
      IEnumerable<MyObject> SearchByName(string name); // Read
      void Update(MyObject obj); // Write
      void DeleteById(int id); // Write
    }

    I’ll use an analogy here: In good old C, the standard input/output header defines basic read/write operations. The interface <stdio.h> is an abstraction of the underlying data access implementation and is a close example of the repository pattern – the interface doesn’t care if you are writing to the console, to a filesystem, or to a serial port – just like a repository interface shouldn’t care if it was an RDBMS, or a CSV file, or even a remote API call.

    Does <stdio.h> expose specific features of a particular filesystem or a serial port? No. So why should a specific feature of an RDBMS (transaction management) be depended upon outside of a repository interface?

    userRepository.Update(user);
    unitOfWork.Save(); // If this was not done, the record will not be updated!

    Using the C example from earlier: Imagine having to always call fsync() (from <unistd.h>) after calling fwrite() in order to commit changes – does it make sense? Sure – you may call fsync() if you wanted to force your changes to disk immediately, but you do not and should not have to explicitly call it.

    (In .NET there is Transaction Scope, but that is a topic for another day.)

    How much to depend on the RDBMS?

    RDBMSes are great tools with important features, but as an application architect, I often ask myself if it really matters to the application I am building.

    Sure, 99% of the time the RDBMS is unlikely to be switched for anything else. Heck, in some applications it may even be the same version of MySQL for the rest of the application’s life – probably because many applications were designed with a database-first approach (and I have another blog article coming up about why a database-first approach should probably be avoided).

    So you may think, YAGNI – let’s not over-design the application. Maybe in such a case, you may be better off without the Repository/UoW pattern entirely, but… this is 2021, and you are unit testing your code, rightttt?

    Unit tests and mocks

    If you have ever attempted to mock an ORM framework, you’ll know it is practically impossible. Sure, EF Core has an In-Memory Provider that can be used for testing, but that comes with a lot of caveats.

    As a result, it would be easier to apply the repository pattern instead of attempting to mock the ORM framework.

    Mocking the test, or testing the mock?

    Ever had unit tests pass but integration tests fail because of database constraints? Ever tried testing code that relied on a transaction rollback? Mocking the behavior of an RDBMS is extremely difficult, and we shouldn’t spend half our lives trying – we’d only end up testing our mocks instead of our code.

    // Typical use of UoW
    try
    {
      userRepository.Update(user);
      unitOfWork.Save(); // This commits the transaction
    }
    catch (Exception ex)
    {
      unitOfWork.Rollback();
    }

    Example: How would you mock and test the rollback?

    Instead, we should implement repositories as if we were writing data to regular storage – think of it as writing to a CSV file, memory, Redis cache, or something else.

    Do CSV files have transactions? No.

    Then, maybe we should not use Unit of Work:

    try
    {
      userRepository.Update(user);
    }
    catch (Exception ex)
    {
      // Log an error and fix it manually, retry the operation, place
      // the update in a queue to be processed later if you really HAVE 
      // to make this update, otherwise just throw the exception!
    }
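
    Taking the CSV analogy literally, here is a hedged sketch of a repository that persists each write immediately – there is no separate Save() for the caller to forget, just like fwrite() needs no manual fsync(). All names are illustrative:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public class User
    {
      public int Id { get; set; }
      public string Name { get; set; }
    }

    // Every write is persisted immediately; there is no Save() to forget.
    public class CsvUserRepository
    {
      private readonly string _path;
      public CsvUserRepository(string path) => _path = path;

      private Dictionary<int, User> Load() =>
        File.Exists(_path)
          ? File.ReadAllLines(_path)
              .Select(line => line.Split(','))
              .ToDictionary(
                parts => int.Parse(parts[0]),
                parts => new User { Id = int.Parse(parts[0]), Name = parts[1] })
          : new Dictionary<int, User>();

      public User GetById(int id) =>
        Load().TryGetValue(id, out var user) ? user : null;

      public void Update(User user)
      {
        var users = Load();
        users[user.Id] = user;
        // The "commit" happens right here, inside the repository.
        File.WriteAllLines(_path, users.Values.Select(u => $"{u.Id},{u.Name}"));
      }
    }

    public static class Program
    {
      public static void Main()
      {
        var path = Path.Combine(Path.GetTempPath(), "users-demo.csv");
        File.Delete(path); // start clean; no error if the file is absent

        var repo = new CsvUserRepository(path);
        repo.Update(new User { Id = 1, Name = "Alice" }); // no unitOfWork.Save()!
        Console.WriteLine(repo.GetById(1).Name); // prints "Alice"
      }
    }
    ```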

    The generics phenomenon

    The typical Repository/UoW pattern is to make each repository represent a single domain entity, e.g.

    public interface IUserRepository
    {
      public User GetById(int id);
      public void Create(User user);
      public void Update(User user);
      public void Delete(int id);
    }
    
    public interface IGroupRepository
    {
      public Group GetById(int id);
      public void Create(Group group);
      public void Update(Group group);
      public void Delete(int id);
    }

    And as a result, it is common to have these further reduced to a generic interface to reduce the repetition on CRUD methods, e.g.

    public interface IRepository<TEntity>
    {
      public TEntity GetById(int id);
      public void Create(TEntity entity);
      public void Update(TEntity entity);
      public void Delete(int id);
    }
    
    public interface IUserRepository : IRepository<User> { ... }
    public interface IGroupRepository : IRepository<Group> { ... }

    However, quite a large number of applications I have written do not use the full CRUD operations on every single repository. Some tables are read-only, some never require an update, some never get deleted.
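
    One way to avoid forcing full CRUD onto every entity is to split the generic interface by capability, so a read-only lookup table implements only the read side. A minimal sketch (all names are illustrative):

    ```csharp
    using System;
    using System.Collections.Generic;

    public class Country
    {
      public int Id { get; set; }
      public string Name { get; set; }
    }

    // Read side only – nothing to implement for tables that are never written.
    public interface IReadOnlyRepository<TEntity>
    {
      TEntity GetById(int id);
      IEnumerable<TEntity> GetAll();
    }

    // Entities that need writes opt in to the full set.
    public interface IRepository<TEntity> : IReadOnlyRepository<TEntity>
    {
      void Create(TEntity entity);
      void Update(TEntity entity);
      void Delete(int id);
    }

    // A static lookup table implements just the read side.
    public class CountryRepository : IReadOnlyRepository<Country>
    {
      private static readonly Dictionary<int, Country> Seed =
        new Dictionary<int, Country>
        {
          [1] = new Country { Id = 1, Name = "Singapore" },
        };

      public Country GetById(int id) => Seed[id];
      public IEnumerable<Country> GetAll() => Seed.Values;
    }

    public static class Program
    {
      public static void Main()
      {
        IReadOnlyRepository<Country> repo = new CountryRepository();
        Console.WriteLine(repo.GetById(1).Name); // prints "Singapore"
      }
    }
    ```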

    The N-N relationship

    Next, how do I add a user to a group, or assign a group to a user?

    Create yet another repository for the intermediary N-N relationship table and insert a record!

    public interface IUserGroupRepository : IRepository<UserGroup> { ... }
    
    // Usage
    var userGroup = new UserGroup(userId, groupId);
    userGroupRepository.Create(userGroup);
    unitOfWork.Save(); // Always remember to save!

    This is why implementations of the Repository/UoW often end up with a crazy list of interfaces and classes, and it’s probably not easy for a developer to figure out which repository to use. Is it called UserGroup or GroupUser? @#$%^&

    It is also extremely counter-intuitive to be creating a new object for this. In regular object-oriented code, it would probably be written like this:

    group.AddUser(user);

    Why aren’t repositories written more expressively?

    Unlike <stdio.h> in C, which I used as an analogy earlier, repository interfaces are not doing byte-level I/O operations – they handle more complex data types, so their methods should be written more expressively.

    For example, why not write the User-Group relationship methods in such a manner?

    public interface IUserRepository 
    {
      public void AddGroup(int userId, int groupId); // Inserts into UserGroup
      ...
    }
    
    public interface IGroupRepository
    {
      public void AddUser(int groupId, int userId); // Inserts into UserGroup
      ...
    }

    Even better – if your application will never ever see the need to store Users and Groups in different data stores, why not simply combine them into one repository interface?

    public interface IAccountServiceRepository
    {
      public void CreateUser(User user);
      public void CreateGroup(Group group);
      public void CreateUserWithNewGroup(User user, Group group); // Can use a transaction
      public void AddUserToGroup(int userId, int groupId); // Inserts into UserGroup
      ...
    }

    (Imagine you were writing to the UNIX /etc/passwd and /etc/group – how would you implement it?)

    One may argue that I’ve come full circle and am replicating a UoW while also violating the single-responsibility principle. Then again, what is the “single responsibility” of a repository? Often the term “single responsibility” is taken out of context from what its originator (Robert C. Martin) expressed it to be: “A class should have only one reason to change”. What external or structural influences might cause the interface above to change?

    Lastly, if CreateUserWithNewGroup() required a transaction for an atomic operation, shouldn’t the transaction be managed at the repository rather than by the domain? Should the onus of transaction management be placed on the domain layer or the repository layer? Is handling transactions in the domain logic also violating the single-responsibility principle?
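
    As a toy illustration of transactions being managed at the repository layer, here is an in-memory sketch in which the rollback is entirely the repository’s concern and invisible to the domain. A real implementation would use the RDBMS’s native transaction; all names here are illustrative:

    ```csharp
    using System;
    using System.Collections.Generic;

    public class User { public int Id; public string Name; }
    public class Group { public int Id; public string Name; }

    public class AccountServiceRepository
    {
      private readonly List<User> _users = new List<User>();
      private readonly List<Group> _groups = new List<Group>();

      // The "transaction" is an implementation detail of the repository;
      // the caller never sees a commit or a rollback.
      public void CreateUserWithNewGroup(User user, Group group)
      {
        _groups.Add(group);
        try
        {
          if (string.IsNullOrEmpty(user.Name))
            throw new ArgumentException("User must have a name");
          _users.Add(user);
        }
        catch
        {
          _groups.Remove(group); // undo the first write
          throw;
        }
      }

      public int UserCount => _users.Count;
      public int GroupCount => _groups.Count;
    }

    public static class Program
    {
      public static void Main()
      {
        var repo = new AccountServiceRepository();
        try
        {
          repo.CreateUserWithNewGroup(
            new User { Id = 1 },                     // invalid: no name
            new Group { Id = 1, Name = "admins" });
        }
        catch (ArgumentException) { /* expected */ }

        // Both writes were undone together, atomically from the domain's view.
        Console.WriteLine($"{repo.UserCount},{repo.GroupCount}"); // prints "0,0"
      }
    }
    ```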

    Conclusion

    Once again, if your software or company really, really, really depends on having consistent data in an RDBMS, by all means, continue to use the Repository/UoW pattern (or simply use the ORM directly, since it is a hard dependency).

    But for a large majority of cases, YAGNI. A single repository interface alone is probably good enough.

    This blog article was more of me thinking out loud than trying to encourage or influence a change in how people implement Repository/UoW, and comments are most certainly welcome.

  • Stop returning null

    Stop returning null

    Have you ever debugged a null error? It’s like a void space. The error often doesn’t tell you anything. Null handling sucks the life out of developers.

    Developers should stop returning null. Modern programming languages have exceptions – use them.

    There was a time when we avoided throwing exceptions because exception handling was thought to be slow, among other criticisms.

    However, returning null instead of throwing an exception is far worse.

    Let me explain.

    If you were tasked to write the implementation of this method to retrieve a user:

    public User GetUserById(int id);

    If no user with the given id is found, you are probably tempted to return null.

    But what is null? Is it an invalid id value (e.g. id <= 0)? Is it because the user is not found? Is it because maybe the user record has been disabled?

    In most cases I have come across, a null return would mean that something erroneous has happened. “User does not exist” is an error!

    All is fine if your team is disciplined and handles null religiously across the entire application, but the chances of that are slim: it is difficult to handle null consistently when built-in types (such as int or double) can never be null.

    Uncaught null exceptions are terrible to debug, not only because the null exceptions don’t tell you much, but also because it often requires back-tracing many lines of code to figure out how and why you got a null.

    Uncaught null values are no different from uncaught exceptions, and if you have been writing code long enough you’ll know that null exceptions are one of the most common exceptions you have to debug.

    If the developer had thrown something like UserNotFoundException from the method above, life would be so much easier – even if it went uncaught. Making it a habit to throw an exception as part of input validation or error handling forces you to think about the error scenario and the error message.
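
    As a sketch of what that could look like (the class and exception names are illustrative):

    ```csharp
    using System;
    using System.Collections.Generic;

    public class User { public int Id; public string Name; }

    // A descriptive exception carries the "why" that a null never could.
    public class UserNotFoundException : Exception
    {
      public UserNotFoundException(int id)
        : base($"User with id {id} was not found") { }
    }

    public class UserRepository
    {
      private readonly Dictionary<int, User> _users = new Dictionary<int, User>();

      public User GetUserById(int id)
      {
        if (id <= 0)
          throw new ArgumentOutOfRangeException(nameof(id), "id must be positive");
        if (!_users.TryGetValue(id, out var user))
          throw new UserNotFoundException(id); // never return null
        return user;
      }
    }

    public static class Program
    {
      public static void Main()
      {
        try
        {
          new UserRepository().GetUserById(42);
        }
        catch (UserNotFoundException ex)
        {
          Console.WriteLine(ex.Message); // prints "User with id 42 was not found"
        }
      }
    }
    ```

    Each error case now names itself: an invalid id, a missing user, and (if you add one) a disabled account each produce a distinct, self-explanatory failure instead of an ambiguous null.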

    Null is bad for health. Null exceptions are like black holes. Null is less than nothing…

    “What do you mean less than nothing? I don’t think there is any such thing as less than nothing. Nothing is absolutely the limit of nothingness. It’s the lowest you can go. It’s the end of the line. How can something be less than nothing? If there were something that was less than nothing, then nothing would not be nothing, it would be something – even though it’s just a very little bit of something. But if nothing is nothing, then nothing has nothing that is less than it is.”

    E.B. White, Charlotte’s Web

  • Why are multiple subnets needed in vSphere ESXi?

    Why are multiple subnets needed in vSphere ESXi?

    Most know that having different subnets for management traffic, vMotion, storage, etc. is a best practice but some may not understand why.

    Routing 101: Two paths to the same place

    Say, for example, that you have a laptop connected to both Wi-Fi and wired LAN at home or in the office (on the same subnet). Which connection is used when you browse the Internet or print to a local network printer?

    The answer is the first connection you hooked up – or, more technically, the route with the higher order of preference. When two NICs are connected to the same subnet, your local routing table will have two entries for the directly connected subnet, e.g.

    192.168.1.0/24 via en1 metric 100
    192.168.1.0/24 via en0 metric 100

    When a packet is sent (e.g. to your gateway at 192.168.1.1), the operating system will look up the route table and pick the first match, which in this instance would be en1. Of course, if the two routes have different metrics, the route with the smaller metric number is preferred.

    But if you have both NICs connected to different subnets, e.g.

    192.168.1.0/24 via en1 metric 100
    172.16.1.0/24 via en0 metric 100

    Then it becomes clear which path to take when you try to get to your printer at 172.16.1.15 or your NAS at 192.168.1.100.

    The same thing happens when you have two (or more) vmkernel NICs on the same subnet. Separating the subnets will ensure that the desired traffic takes the correct path out.

    Interface binding

    Some may wonder why interface binding can’t be used instead, similar to running ping -I <intf>. The answer is that it can! Interface binding is used for Multi-NIC vMotion (5.1 and newer) and for the Software iSCSI initiator, where iSCSI multi-pathing requires two or more vmknics within the same subnet. But this works only with vMotion, the Software iSCSI initiator, or other specific ESXi services designed for NIC binding.

    To allow NIC binding to work, the general requirement is that only one active physical NIC can be present in the NIC teaming configuration, e.g.

    // vSwitch setup
    vSwitch0 = eth0, eth1
    vSwitch1 = eth2, eth3
    vSwitch2 = eth4, eth5
    
    // No port binding
    vmk0 management 192.168.1.11/24 via vSwitch0 (active: eth0, eth1)
    vmk1 iscsi-hb 172.16.1.11/24 via vSwitch1 (active: eth2, eth3)
    
    // Port binding services
    vmk2 iscsi-1 172.16.1.12/24 via vSwitch1 (active: eth2, unused: eth3)
    vmk3 iscsi-2 172.16.1.13/24 via vSwitch1 (active: eth3, unused: eth2)
    vmk4 vmotion-1 172.16.2.11/24 via vSwitch2 (active: eth4, standby: eth5)
    vmk5 vmotion-2 172.16.2.12/24 via vSwitch2 (active: eth5, standby: eth4)

    vSphere 5.1 and iSCSI heartbeat

    Prior to vSphere 5.1, the iSCSI heartbeat, which uses regular ICMP ping, did not bind to a specific interface. Back in the good old days, it was a best practice to create 3 vmknics and leave the vmknic with the lowest index number for the iSCSI heartbeat, giving it routing priority. In vSphere 5.1 and later, VMware addressed this and made the iSCSI heartbeat bind to an interface, but there have been reports of it not working as intended.

    vSphere 6.0 and TCP/IP Stacks

    VMware introduced independent routing tables (known as TCP/IP stacks) in vSphere 5.5, but they were cumbersome to configure via the CLI. In vSphere 6.0, three different TCP/IP stacks are available by default so that Management, vMotion, and Provisioning (cloning, snapshots, etc.) traffic can be routed differently. For the Cisco folks, this is easily explained as VRF. This allows vMotion and Provisioning traffic to be routed; although not usually needed, vMotion routing will be required if you want Long Distance vMotion across two different (routed) subnets.