
When TDD Struggles: Fast-Moving Teams, AI, and the New Way
TDD suited those who subscribed to the school of thought that coding should be a deliberate, final activity: a prolonged thought experiment in one's head and over a whiteboard, with every detail designed to a T, before the code is carefully inscribed into the editor. Worked that way, conceiving near-final tests and the implementation up front was straightforward, and TDD was effective in exactly the way it promises.
The issue is not with TDD itself, but with the approach to programming one takes. When writing tests first, it is crucial that the design and the test cases are thoroughly thought out, because they will guide the implementation. The hallmark of an excellent test is that it permits flexible, scalable implementations, so orthogonal features can be added without modifying the test case. Parameterisation is a powerful technique here: a well-parameterised test exercises the end-to-end behaviour across many scenarios without hard-coding any single one.
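As a minimal sketch of what parameterisation buys you: the test below pins behaviour through a table of cases, so adding a scenario means adding a row, not a new test. The `slugify` function is a hypothetical example, not from the original text; in pytest the same table would typically be expressed with `@pytest.mark.parametrize`.

```python
import re

def slugify(title: str) -> str:
    # Hypothetical function under test: lowercase, hyphen-join words.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The test is a table; new scenarios extend the table, not the test body.
SLUG_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaces  everywhere ", "spaces-everywhere"),
    ("Already-slugged", "already-slugged"),
]

def test_slugify():
    for title, expected in SLUG_CASES:
        assert slugify(title) == expected, (title, expected)
```

The table is the contract; the implementation behind `slugify` can be rewritten freely so long as every row still passes.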
However, over time a new, and arguably more effective, style of programming has emerged. Inspired by the startup culture of build fast, break fast, this agile-flavoured style favours iteration speed over careful composition, and the gap between the two styles largely reflects the difference between a programmer's mindset and an entrepreneur's.
In high-speed development environments with fast feedback cycles and tight schedules, the lifespan of initial code implementations is often uncertain, creating a conflict with Test-Driven Development (TDD) principles. To balance TDD with agile practices, consider iterative testing approaches that allow for flexibility and refinement as the project evolves. This might involve developing core tests for critical components while maintaining the agility to adjust or replace tests as the codebase changes.
What exacerbates this issue is the prominent rise of AI-assisted coding, which lets developers produce code at unprecedented speed. This is a double-edged sword: on the one hand it allows developers to prototype and iterate on ideas without shouldering the full implementation burden; on the other, the output is often sloppy, inefficient, and inelegant for sophisticated problems and/or within larger codebases.
This makes it much more difficult to subscribe to TDD when it makes more economic sense to rapidly prototype and iterate on a feature's direction until some saturation threshold is hit and the implementation is more or less settled, and only then refactor into a solid set of test cases and a consequent implementation that can continue to shapeshift as demanded.
I envision the new way forward in three phases:
• Phase 1: Prototype First
– Lightweight smoke or integration tests.
– Rapid AI-assisted prototyping to hit a “saturation threshold” of feature shape.
• Phase 2: Stabilise with TDD
– Convert core behaviours into red-green-refactor suites.
– Use parameterisation to keep tests orthogonal and future-proof.
• Phase 3: Evolve
– Guardrails for ongoing AI-driven iterations.
– Characterization tests for legacy code and boundary cases.