Test-Driven Development? Give me a break…

September 24, 2011

Update: At the bottom of this post, I’ve linked to two large and quite different discussions of this post, both of which are worth reading…

Update 2: If the contents of this post make you angry, okay. It was written somewhat brashly. But, if the title alone makes you angry, and you decide this is an article about “Why Testing Code Sucks” without having read it, you’ve missed the point. Or I explained it badly :-)

Some things programmers say can be massive red flags. When I hear someone start advocating Test-Driven Development as the One True Programming Methodology, that’s a red flag, and I start to assume they’re either a shitty (or inexperienced) programmer, or some kind of Agile Testing Consultant (which normally implies the former). Testing is a tool for helping you, not for engaging in “more pious than thou”, my-Cucumber-is-bigger-than-yours dick-swinging idiocy. Testing is about giving you, the developer, useful and quick feedback about whether you’re on the right path and whether you’ve broken something, and about warning the people who come after you if they’ve broken something. It’s not an arcane methodology that somehow has some magical “making your code better” side-effect…

The whole concept of Test-Driven Development is hocus-pocus, and embracing it as your philosophy, criminal. Instead: Developer-Driven Testing. Give yourself and your coworkers useful tools for solving problems and supporting yourselves, rather than disappearing into some testing hell where you’re doing it a certain way because you’re supposed to.

Have I sometimes written tests for certain classes of problem before writing any code, and gotten much value out of it? Yes. Changes to existing functionality are often a good candidate. Small, well-defined pieces of work, or little add-ons to already-tested code, are another.

But the demand that you should always write your tests first? Give me a break.

This is idiocy during a design or hacking or greenfield phase of development. Allowing your tests to dictate your code (rather than influence the design of modular code), and allowing them to dictate your design because you wrote over-invasive tests, is a massive fail.

Writing tests before code works pretty well in some situations. Test-Driven Development, as handed down to us mortals by Agile Testing Experts and other assorted shills, is hocus-pocus.

Labouring under the idea that Tests Must Come First – and everything I’ve seen, and everything I see now, suggests that this is the central idea in TDD: you write a test, then you write the code to pass it – without pivoting to see that testing is useful precisely insofar as it helps developers, is the wrong approach.
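For readers who haven’t seen it, the cycle being criticised looks something like this – test first, then just enough code to pass it. This is only a minimal Python sketch; the `slugify` function and its test are invented for illustration, not taken from any real project:

```python
# A minimal sketch of the test-first cycle: the test is written
# before the implementation exists, watched fail, then made to pass.

def test_slugify_turns_spaces_into_hyphens():
    # Step 1: written before slugify existed; running it then
    # failed with a NameError ("red").
    assert slugify(" Hello World ") == "hello-world"

# Step 2: the minimal implementation that makes the test pass ("green").
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_slugify_turns_spaces_into_hyphens()
```

The contested claim is not that this loop never works – it often does for small, well-defined units like this one – but that it must be applied to everything.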

Even if you write only some tests first, then to do it meaningfully you either need to zoom down into tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The former is the right approach in a small number of situations – tests around bugs, or small, very well-defined pieces of functionality.

Making tests a central part of the process because they’re useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.

Testing is about helping developers, and recognizing that automated testing is about benefit to developers, rather than cargo-culting a workflow and decreeing that one size fits all.

Writing tests first as a tool to be deployed where it works is “Developer Driven Testing” – focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn’t – that’s not right.

Discussion and thoughts (posted a few hours later)…

I wrote this a few short hours ago, and it’s already generated quite the discussion.

On Hacker News, there’s a discussion that I think asks a lot of good questions, and there’s a real set of well-reasoned opinions. I have been responding on there quite a bit with the username peteretep.

On Reddit, the debate is a little more … uh … robust. There are a lot of people defending writing automated tests. As this blog is largely meant to move forward as a testing-advocacy and practical-advice resource, I’ve clearly miscommunicated my thoughts, and not made it clear enough that I think software testing is pretty darn awesome – I’m just put off by slavish adherence to a particular methodology!

If you’ve posted a comment on the blog and it’s not there yet, sorry. Some are getting caught in the spam folder. I’m not censoring anyone, and I’m not planning to, so please be patient!

Anyway, the whole thing serves me right for putting together my first blog post by copy-pasting from a bunch of HN comments I’d made. The next article is a walk-through of retrofitting functional testing to large web apps that don’t already have it, in such a way that the whole dev team starts using it.

Jason Hanley September 24, 2011 at 8:08 am

Nicely said. Clearly the voice of experience.

Tests are only valuable if they're well thought out, and actually give more benefit than their cost.

I've seen way too many examples of automated tests that cause more problems than they solve.

Dawid Loubser September 24, 2011 at 8:09 am

You seem to live in a world of hacking together toy software. One day when you become a software ENGINEER who has to build complex, long-lived software as part of a team of people, following modern engineering practices (model-driven development, design-by-contract), upon which people's well-being depends, you will change your tune, I suspect. As soon as you have DESIGNED a component (at any level of granularity) you can derive a good set of test cases for it using established techniques developed by the testing community over decades. But you don't design, do you? Your strongly-worded hissy-fit of a blog post suggests that you are far from being a professional, and I hope you're not writing any important software (for the good of mankind).

If you really wanted to attack test-driven development, you could at least have taken the time to learn what it's all about, and could have tried to construct halfway-decent logical arguments against the (alleged, according to you) benefits. Instead, it seems you had a really bad day trying to write some complex code, and instead of kicking your dog, decided to add this giant fit of misunderstanding to the world.

Perhaps you should focus on the skills that good software engineers really are made of. (Hint: it's not programming.)

Jeffrey Aguilera June 21, 2012 at 7:17 am

What an arrogant ad hominem response.

Jason Glass July 10, 2012 at 3:14 pm

Haha! I agree completely with Jeffrey Aguilera.

Tom R October 14, 2012 at 12:55 pm

I also think Mr. Loubser’s comments are uncalled for, and show him in a poor light … NOT the writer of the original article, with whom I completely agree. Because our industry is being taken over by Agile and TDD, and because they are either irrelevant or a hindrance to most of the software development that I do (NOT “toy software”, Mr. Loubser), I am thinking of getting out of programming altogether. They are taking all the fun out of it, and preventing good programmers from exercising their skills honestly.

Anon January 18, 2013 at 8:43 am

I hope you will stop programming altogether; the world would be a better one without people like you.

Tom R April 11, 2014 at 10:51 am

Anon knows nothing about the software I have developed in over 30 years in the industry, in which I wrote APPROPRIATE tests and was not hamstrung by some semi-religious requirement to always write tests first. Applications included flight control systems, oxygen-lance steelmaking, factory automation, applications running on tens of thousands of ATMs, operating-software enhancements, etc. Not exactly applications where the occasional glitch can be tolerated. As he also hides behind “Anon”, I will treat his pathetic comment with the contempt that it deserves.

Nucc February 22, 2013 at 11:47 pm

But it’s true… I guess the author’s biggest software was not more than 10,000 lines of code, and he has never worked with more than 10 people on the same application.

Jerry March 28, 2013 at 5:59 pm

Not worked with more than ten people? Who the hell wants a team that even APPROACHES this size? I agree with the original author. I’ve been a developer since 1983 and I’ve seen TDD and XP completely destroy entire development teams, because it becomes a complete cult religion rather than a respectful new use of tools to get a job done. You know what your mommy said: take the middle way. Extreme anything is just asking for trouble. Personally I hate TDD, and my software seems to run just fine without it. I don’t mind unit tests and code reviews, but I don’t need your friggin’ TDD diapers, thank you very much. Besides, XP came out of a FAILED payroll software project. Guess they were so busy “refactoring” and building tests that they couldn’t get the project finished.

Adrian O'sullivan April 21, 2013 at 3:31 pm

“A survey of 1,027, mainly private sector, IT projects published in the 2001 British Computer Society Review showed that only 130 (12.7 per cent) succeeded. ”
http://tinyurl.com/brmx3xg

Maybe you have always been in that 12.7%, but many across the industry notice that software projects are often failing. TDD was always really an attempt to avoid this. Perhaps the TDD you have seen was not practiced well? That is the main reason I have seen for it not working on projects. When practiced well, I have found it very (not ‘extremely’!) useful.

jack October 20, 2012 at 11:47 am

Lol.

chadastrophic September 24, 2011 at 8:09 am

Great article. One of the greatest programmers I’ve known is an advocate of TDD, but is also smart and balances reality with best practices. I really enjoyed the honesty in your article. Thanks! My colleague, who is a strong advocate, has made me consider this paradigm more seriously, and it’s great to know when and how to apply certain approaches.

Micha September 24, 2011 at 8:39 am

I think on some points you’re right, but you can use TDD as a feature list. I think this is a good thing, and I use it often on web projects, because I first think about what I want to do before I do it. Often programmers write shitty code and then are too lazy to fix it… this is not a good programming style.

Sorry for my bad English.

Aleksey Korzun September 24, 2011 at 10:06 am

Dawid Loubser,

Using TDD does not instantly make you a superior engineer and everybody else a toy-software maker.

You are just backing up the author's point by being full of yourself, just because you use a specific development approach that works for you.

Rafał Rusin September 24, 2011 at 10:13 am

I agree with the idea of the article. We don't need TDD Paladins. It's usually a bad idea to start from a test case when you develop a new piece of functionality and have little idea of what it will look like when it's done. On the other hand, I prefer one well-thought-out unit test that tests actual functionality to 10 tests for getters and setters (which I've seen in some code). TDD, if it goes mad, is a monster.

DanCaveman October 23, 2012 at 9:16 pm

Rafał,

I am not a TDD expert, but I am working to become a ninja :). I really believe in the ideas and have seen it solve many problems. I found your comment interesting, because TDD has helped me in the exact situation you describe as being “a bad idea”. When you have new functionality and little idea of how it should work is exactly when TDD is great. It stops “what if” questions by focusing on what the goal of the functionality is, as well as fleshing out an API, without aimlessly coding features that end up not being needed.

It focuses work by picking off specific pieces of functionality that you KNOW you need, and leaves any of the ambiguity to be solved later.

Jerry D March 28, 2013 at 6:10 pm

The only reason to create a bunch of code you never end up needing is that you didn’t take the time to think about what you need right now, and then limit your API to that. TDD is not a good remedy for lazy thinking or for not taking the time to DESIGN something. Developers need to put themselves in the consumers’ shoes, take some time to DESIGN a system, and stop looking to magical pop-cult fiction to do their design work for them. One of the reasons XP and TDD are often lumped together is that XP is just sloppy, rushed, “constant refactoring” programming that requires the testing to make sure you’re not checking in junk code. It’s like saying I want to provide the baby (XP) and the babysitter (TDD), instead of just being an adult developer, taking the time to THINK about design and consequences, and following it up with code reviews and unit tests. I despise TDD.

zhangyang March 28, 2014 at 7:31 am

There is a question: TDD makes us focus on the feature, so we just finish the code work for today. If tomorrow some new feature is needed, we need lots of time for refactoring.

Tom R April 11, 2014 at 11:07 am

Rusin is right. We do not need TDD Paladins. In fact we do not need any of the evangelists that think the latest fad, which they have adopted enthusiastically, is the ONE TRUE WAY. All of these ideas have some value, but to force them onto everyone, and to look down on anyone that does not share your commitment and enthusiasm to whatever is “flavour of the month” is ridiculous. It is like throwing out all the tools in your home maintenance toolkit except one, and whether you keep the hammer, the wrench, the screwdriver, the craft knife or the saw, it will be just right for some tasks, tolerable for others, and downright useless for most.

As a software engineer I want a selection of specialised development tools, from which I can choose the most appropriate. Sometimes that will be TDD and/or Agile. Often it will not.

It is a delusion of management that if only they could enforce the “correct” ways of working they would produce great software from mediocre programmers. They will not. The most important things you need to create good software reasonably quickly are:

An understanding of the problem domain
Expertise in the programming languages, other tools and supporting environment required
Design expertise (including the ability to anticipate unintended consequences)

and last but not least:

Caring enough about what you are doing to make whatever effort it takes to get it right

These are what really matter. Everything else is much less important.

Unknown September 24, 2011 at 10:24 am

TDD rocks; it's just a matter of how it's implemented, and of using it when it fits.

Small-team projects doing RAD normally benefit greatly from TDD.

zhangyang March 28, 2014 at 7:38 am

I agree with you.

foobardude September 24, 2011 at 10:32 am

Dawid Loubser: a man of zero substance for an argument.

Dawid – experienced teams often don't write tests because we have to make money. We hate it, but we need to get a check. TDD is a fun exercise, but it isn't really practiced often. Yet when it is attempted, you end up with a lot of tests that are good to go.

Don't be such a douche; chill out and just read what everyone else is saying. And when you attempt a counterpoint, back it up with real facts, and don't sound like the Agile consultant the author was ripping on.

oliverclozoff September 24, 2011 at 10:32 am

> because you wrote over-invasive test is a massive fail.

It's failURE.

hacksoncode September 24, 2011 at 10:50 am

You know, the biggest thing I've always wondered about "test driven development" is "what process do you use to develop your tests?".

zhangyang March 28, 2014 at 7:44 am

I think the tests test the real code, and the real code tests the tests.

James September 24, 2011 at 11:11 am

Aleksey,

Hating TDD with a passion doesn't make you one either, and advocating TDD doesn't make you a bad one. Tools and methodologies are nothing more than tools and methodologies. You can have a major preference for one, but if you're a good developer/engineer, you can adapt to whatever is being used.

Before I jump in here, I'll mention; my group doesn't advocate TDD, but it does require unit testing in some form. But let's see here:

Peter,

> Allowing your tests to dictate your code (rather than influence the design of modular code) and to dictate your design because you wrote over-invasive test is a massive fail.

Yep. Doing TDD also means that you actually have to be good about writing testable code, and writing -good- tests. You're applying TDD on top of other software-engineering best practices. If you don't, you're just going to wind up shooting yourself in the foot. Not using TDD but writing over-invasive tests is also a massive fail – it has nothing to do with TDD.

Let's say I'm writing a server which reads data from two sources, performs some complicated data munging, and returns some answer. Simple tests for your DAOs, write the DAOs. Nothing too invasive so far. Write tests for your data munging, and implement the munging algorithm. No over invasive tests, so far, and nothing has dictated my design. Each piece is logically going to do what it's going to do. Finally, the overall server tests, and the server itself.

If you take the other directional approach, you write your tests for the server, mocking out the algorithm (meaning you don't have to write the rest yet, so long as your mocks obey the contracts of the algorithm class), etc.

> Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down in to tiny bits of functionality

That's not really a bad thing. It's sort of the point of unit testing in general – you don't know that your higher level components are working unless you know the lower level ones are.

> write a test that requires most of the software to be finished, or you cheat and fudge it.
If you have software with well defined APIs, "fudging it" is fine. I can drop out my ExpensiveBullshitAlgorithm with a mock which returns 3 when the inputs are 1 and 2. Those are the expectations of the system, and we'll prove that ExpensiveBullshitAlgorithm actually returns 3 for inputs of 1 and 2 when we write the tests there. This is not "cheating", this is "mocking", and it's only something you can get away with if you're actually writing solid tests for your components, and they obey defined APIs.
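In code, the contract-mocking described here might look roughly like the sketch below. This uses Python's `unittest.mock`, and every name (`serve`, `compute`, the algorithm object) is invented for illustration; the original comment's `ExpensiveBullshitAlgorithm` is just the thing being stood in for:

```python
from unittest.mock import Mock

# Stand-in for the expensive algorithm: the mock honours its
# contract (returns 3 for inputs 1 and 2) without the real
# implementation having to exist yet.
algorithm = Mock()
algorithm.compute.return_value = 3

def serve(algo, a, b):
    # The server under test depends only on the algorithm's contract,
    # not on its implementation.
    return algo.compute(a, b)

# The server-level test passes against the mock; a separate test of
# the real algorithm later proves compute(1, 2) really is 3.
assert serve(algorithm, 1, 2) == 3
algorithm.compute.assert_called_once_with(1, 2)
```

The point is that the mock is only honest if the real component's own tests enforce the same contract.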

> Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't – that's not right.
Consistency is important in large group projects. If half the team is working in one way, and half the team is working in another, you ARE going to clash. It's not necessarily the One True Way – nothing is. But in the real world there is quite often the One Way We Decided On For Consistency and You're a Big Boy/Girl So You Can Adapt, Right? And once you get used to it, you might even like it.

Joe Magly September 24, 2011 at 12:33 pm

Everything has a balance. The problem I see in some shops is that they tend to take the chosen practice to one extreme or the other, without balancing actual need or value.

Testing helps. Whether you write tests first or last is, I feel, inconsequential to the ultimate goal of the tests, if you really are doing true unit testing (some shops only think they are unit testing). Writing do-nothing tests that just improve your coverage ratio is not worth the time. Spending an extra hour on that method with the complex object as an input parameter, to cover more negative cases, would be more valuable.

Ben Smith September 24, 2011 at 12:46 pm

Use the right tool for the job! If you have a large, complex codebase, TDD is the ONLY sensible way to keep things manageable while you change fundamentals. TDD makes little sense for a 200 line project knocked off in an afternoon, or anybody using the waterfall method.

Tom R April 11, 2014 at 11:10 am

To say that anything is “the ONLY way” reveals a paucity of imagination.

Rush September 24, 2011 at 1:31 pm

We'll build it, get it working, then we'll design it. <<<—- 85% of the projects I've ever worked on.

Cody September 24, 2011 at 1:48 pm

Great article. I don't care whether you like TDD or not, just don't be one of those douches who regurgitates all kinds of bullshit about how great some method is.

I, personally, like to test after I'm fairly sure of the design. When I've devised my overall algorithm and determined the interfaces and implementations, then I'll write tests as I go so I can safely refactor as I work.

Also, TDD for interfaces like a web page is just plain bullshit. When you do TDD for integration tests on a web application, you're just wasting your time. Test the hell out of your units, write functional tests as you understand the problem more and become more sure of the solution, and write interface tests once you're done, if you're concerned about them breaking during deployment.

Vic September 24, 2011 at 2:02 pm

TDD is for http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

Integration tests help, e.g. browser to socket server to DB during CRUD.
The rest is just a religion.

WildThought September 24, 2011 at 3:08 pm

Every methodology can be taken to an extreme – RUP, Waterfall – and I think TDD is especially prone to it.

It reminds me of the relational-database-modeling vs. OOP debates during RUP's heyday. What do you do first, the data model or the class model? I would argue it doesn't matter, as long as you get the same result. There are rules that ORMs have figured out for reverse- and forward-engineering between the two. If those rules exist, then why can't we model either way first?

Similarly, whether we write tests first or code first should be the purview of the person writing the program. Do I really want to write a test first for every stored procedure I write? Of course I do not. I am sick of seeing articles on how to do TDD in the database layer. I am equally sick of having to create my own mocks to mimic a layer I can get to in milliseconds.

I think it has its place, but it does promote code bloat. Now, RUP promotes documentation bloat. I truly believe that quality software engineers (aka master craftsmen) can choose what methodologies to draw from as needed. To say "I have the way, and it's the only way" tells me that you are afraid to think outside your own box.

Maht September 24, 2011 at 3:13 pm

Generalising is fun. We call it "having an opinion".

Dave Thompson September 24, 2011 at 4:07 pm

"writing tests first or last I feel is inconsequential"

The point of TDD is that writing your tests first forces you to use your code before writing it, which in theory leads to better designed, simpler interfaces. The tests then serve as the formal specification for your interface, which often leads to easier and quicker implementation of your interface. Since your code's specification is now being tested, it is very easy to prove to stakeholders that your code works as intended, and is often easier to change when stakeholders change their minds. If you write your implementation first, you may not realize until later down the road that your interface is awkward or difficult to use, and by then it takes more time to fix it.

TDD is not always necessary, or even the best way to do things. TDD is probably overkill if you're working on a simple CRUD form with no logic outside of validation and persistence. TDD's advantages show themselves quickly when working with a technology or business domain that you're not experienced with, when you're working with complex systems, and when you're creating public APIs. In these cases, TDD helps get your design correct on the first try, and saves a lot of time. In addition, TDD has many advantages when working with a large team. Any time 'wasted' writing tests is more than made up for by the elimination of technical debt and of time spent refactoring or fixing bugs.

Darren September 24, 2011 at 7:23 pm

Something that should be kept in mind about TDD is that nobody expects you to do it all the time — even its most staunch promoters. Full test-first code is an ideal, as something to be worked for. Whether you will reach 100% is dependent on a lot of factors, like skill, time, understanding, tools, your framework and language, coworkers, etc.

But if you don't hit 100%, you don't throw up your hands, curse the method and write an angry blog post about how TDD upsets you.

Instead, you say: "Next time, I'll do better." And you do.

That's the difference between test-driven development and your "developer-driven testing." TDD is a method of producing tested, working code; it takes a long time to master (I'm not even there yet), and it's an ideal that its practitioners work towards. Yours is a method that says that whatever "works" today is fine, whoever you are and whatever you do today, and testing is nice so long as it happens in some form before or after the code is written. Kinda vague…

Since you mock TDD as the "one true way," I have a question for you: if you're not able to write simple test cases for all of the code you write, even before that code, how can you be satisfied with yourself?

Dawid Loubser September 25, 2011 at 3:46 am

Let's say you make a statement about your component or system, such as "under circumstances X, given input Y, it will produce Z". One of only two truths apply: Option A: you make the statement based on the belief that your code (which other members of your team have perhaps modified) is sound, or based on experience. Let's call this "faith". Option B: You make the statement because there is a unit test that proves it ("proof"). In other fields of engineering, things are not built based on faith.

Unit tests, at every level of granularity, are the only way to prove that your system works. Anything less fosters a self-important, "code ownership", hacking culture, and virtually proves that you are coding without having performed any real design.

Anybody is free to follow this style of work, but in the 21st century, this is thoroughly amateur, in my opinion, and suited only to toy software. Are you really willing to bet your job, and the experience of your clients, on faith?

SomeGuy February 19, 2013 at 8:11 am

Ok. I have a sensor. I want to write a parser that parses the data from the sensor. TDD would say, write a test that mimics a message described in the protocol manual, and test that the parser would parse the message correctly.

So I write the test. I write the parser. The parser passes the test. And now I can merrily hand that code off, and the world is right as rain.

But there’s a typo in the manual, and the firmware in the sensor kicks out an extra tab character at the start of the message. How did I discover this? By hooking the sensor up to the parser and doing a live test with the real hardware.

So what did I gain? The code never really “worked” until I tested it against the actual sensor. So then I modify the parser and the test to deal with the extra tab character… but… since I’ve already got the parser working with the real sensor, the unit test is redundant, and possibly a source for more failures to be introduced, since more code is being maintained.

Now, if I had tested the parser against the live hardware to start with… at least I wouldn’t have had to fix the redundant unit test.

So you say your unit test is “proof.” I say a unit test is where you represent your “faith” in code.

Peter Sergeant September 25, 2011 at 3:50 am

Dawid,

Your comment is on the money when you point out that testing as a developer is hugely important. That's really what this blog is/will be about.

I'm not sure what your comment has to do with the methodology of Test-Driven Development, which is the specific idea that you must write a test for the piece of code you're working on BEFORE you do anything else.

MononcQc September 25, 2011 at 6:38 am

I think you make a mistake by thinking that prototyping and testing are mutually exclusive. Of course it's useless to write tests when you don't know the specs of your software and what it should do. But then again, why should you write any production software without knowing this?

Prototyping is a[nother] tool to help find out the specs of programs you end up writing. Tests are a way to write those specifications, force you to think as the user of the code rather than its writer. Then this is turned into runnable code to be used as a guideline. That you have tests is more or less accidental in the process.

To me, the best argument about writing tests first is that writing tests last is absolutely boring. Most of the time, it's a half-assed, useless job. Writing tests first is the only way to make it somewhat fun.

Javin Paul September 25, 2011 at 7:48 am

I agree with your first statement: no matter how good a practice, process, or technology is, it's not the ultimate solution. The test-driven approach has its own advantages, but it's not perfect for every scenario. My experience says flexibility and a hybrid nature give you the option to use agile, waterfall, or test-driven development based on the needs and suitability of the situation, the resources, and the environment.

Dawid Loubser September 25, 2011 at 12:14 pm

Earlier, I presented the argument that we require unit tests at each level of granularity in our system, to ensure quality and consistency (which is, after all, what we strive for, right?). Nobody has presented a counter-argument, so let's assume this for the moment, and discuss Test-Driven Development (upfront unit tests):

First of all, I don't view TDD as a development methodology in itself, but rather a "technique" (not unlike, say, design-by-contract) which can be used in many development process methodologies, together with other techniques.

There are several overwhelmingly compelling reasons to write one's tests first:

- It enforces a deep understanding of the contract ("requirements") of the component to be written by the developer, which in itself enforces that requirements analysis / design actually be done. How many teams jump right ahead and start coding, only to have to refactor later? Put the overheads where they belong – in requirements analysis.

- It greatly speeds up and simplifies the development process – for the developer now knows precisely when he is "done" (there is no uncertainty, no unnecessary work is done, but nothing is left out). Of course, this depends on having "good" tests (sufficient coverage) – which is a separate and complex topic itself.

- In technology-neutral metalanguages like URDAD or UML, we can express the "dynamics" of requirements sufficiently, but few programming languages (other than WS-BPEL Abstract, which is a bit dead in the water) have artifacts that can express such requirements. Take, for example, a Java interface or WSDL contract: They express only a small part of the requirements. The test suite becomes an essential artifact to express the dynamics (interactions) of the requirements. We should express the requirements *before* implementing them, surely?

- One's framework is then already in place for test-driven bug fixing. Got a bug? Prove it with a unit test. Once proven, fix it (which you know, once your test passes), with the assurance that the other 20 unit tests prove you haven't broken anything else. Nobody is going to start putting those 20 tests in place when under pressure to fix a bug. Luckily they can already be there, and your world does not spiral out of control in a frantic mess of complexity.
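As an editorial aside, the test-driven bug-fixing step in the last bullet can be sketched concretely. This is a hypothetical Python example – the `parse_price` function and its bug are invented here, not taken from the comment:

```python
# Hypothetical test-driven bug fix: first a unit test that
# reproduces the bug, then the fix that makes it pass.

def parse_price(text):
    # Fixed implementation. The buggy original was float(text),
    # which raised ValueError on a leading currency symbol.
    return float(text.lstrip("$"))

def test_parse_price_accepts_currency_symbol():
    # Written before the fix: it failed against the buggy version,
    # proving the bug, and now passes. It stays in the suite so the
    # regression cannot silently return.
    assert parse_price("$3.50") == 3.50

test_parse_price_accepts_currency_symbol()
```

The value claimed for this workflow is exactly what the bullet says: the failing test pins the bug down before any fixing starts, and the existing suite proves the fix broke nothing else.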

Test-driven development introduces a degree of precision, control, and simplicity to the development team that is profound. Of course it's more work, and requires much more insight from the developer.

Two things have held true in the decade or so that I have been teaching this to developers though:

Firstly, absolutely everybody is opposed to the supposed "overheads" of this process: "But we have deadlines!" "We don't have time!"

Secondly, every last developer who adopts the test-driven development process (not because they "have to", but in their hearts) rises to a new plateau of understanding – they speak a different language when it comes to coding, and approach problem solving differently. All of a sudden, it's obvious why other engineering disciplines do things this way as a matter of course. And we software engineers have infinitely better tools than our distant relations in mechanical, chemical, and civil engineering – we can automatically build and break components at zero cost!

They never go back to this uncontrolled hacking that most people call "software development".

Reply

Attila Magyar September 25, 2011 at 2:41 pm

TDD is not "The One True Programming Methodology", but clearly one of the best if you are doing OO design.

At my previous workplace we were doing a lot of TDD, and at first I didn't like it at all. Later I recognized that we had completely misused the whole methodology. The problem was that we didn't allow the tests to influence our code, which resulted in both unmaintainable code and unmaintainable tests (lots of hacks in the tests, because the code was not unit-testable, and not reusable or flexible). Insisting on unit testing while not allowing the code to change because of the tests is two incompatible mindsets, and it results in a lot of stress in the tests.

Later I got into TDD more deeply and learned how to "listen to the tests" and alter my design because of them. Overall, it helped me to understand OO in a better way. (At that time I did not consider myself an inexperienced programmer, and I thought I already knew everything about OO, but that was not the case.)
So now I consider TDD a design process which helps me to develop good OO design.
(I emphasise OOP intentionally, because I started studying functional programming recently, and I am still not sure whether TDD offers the same benefit in that world or not.)

Regarding TDD, I recommend the book Growing Object-Oriented Software, Guided by Tests.

Reply

Mojo September 25, 2011 at 10:54 pm

This article isn't written "brashly"; rather, it's written with not much merit.

First three paragraphs the author is just trolling.

Fourth paragraph, you admit that doing TDD is sometimes good.

Fifth paragraph, and you are just taking things too literally. Not all development should be TDD. API discovery, testing the waters, is okay to do without TDD. There are also other things that cannot be TDD, like GUI-related work. But this is not a huge portion of coding, so don't hold on to the 20% of development and edge cases and claim TDD sucks based on those.

Sixth paragraph, you again say TDD is good sometimes, then you troll again.

Seventh paragraph, you don't back this claim up.

Eighth paragraph… oh fcuk it, I give up… this is useless.

Reply

Antwan "@ADubb_DC" Wimberly September 26, 2011 at 8:35 am

Great post. For those that are caught up in the hype and want to imply that you somehow are not a good coder or have poor design skills because you don't strictly follow TDD: you're delusional, and you're in severe need of help. Building software is all about trade-offs. There is no ONE AND ONLY way to program that beats all. It's all about what you think is best for you and/or your team. For example, you can justify all your decisions by starting each sentence off with "But Martin Fowler said____"… or you can grow some balls and make a decision for yourself. Stop letting people control your lives as developers and go against the grain sometimes!! You guys crack me up. Calling this man out because he has the balls to admit that sometimes he doesn't think adhering to the process to the T is worth all that it's portrayed to be. What a buncha chumps!! Even Jeffrey Palermo says he and his crew at Headspring don't always write their tests first. They just make sure their tests are committed at the same time their code is. It's good to do initially, but after a while you just know what the hell you're doing. I'm not saying don't write tests… they're helpful as hell, no doubt. But writing them FIRST EACH AND EVERY TIME… naaaaaah!! And that word "trolling" cracks me up. It's become synonymous with "this guy has the balls to provide a counter-argument for a widely supported practice". Get real, people!!

Reply

Dawid Loubser September 26, 2011 at 1:03 pm

Antwan, the original author does NOT present a counter-argument – that is the point! If anybody can present an argument (i.e. a conclusion based on logical premises) that test-driven development *reduces* quality or productivity under *any* circumstances, let him come forward.

As it stands, it reads a little like "sometimes I just don't feel like producing good quality work, because it's too much effort. I think striving for good quality sucks."

Do you also sometimes say: "Designing things before I build them EACH AND EVERY TIME… naaaahhhh!!" ? Where does it stop? "Satisfying the customer's requirements EVERY TIME? Naaahhhh!!. Writing code that compiles EVERY TIME? Naaahhhh!!"

Good heavens, man, what kind of software do you build? You are the one that has to "get real".

Did you even read my previous post? Can you present a counter-argument to any of my statements?

Reply

Pablo February 4, 2013 at 2:49 pm

Here’s a counter-argument that shows how TDD is not better (in fact, is worse in terms of quality of code produced and productivity) than Test-Last:
http://theruntime.com/blogs/jacob/archive/2008/01/22/tdd-proven-effective-or-is-it.aspx

- The control group (non-TDD or “Test Last”) had higher quality in every dimension—they had higher floor, ceiling, mean, and median quality.
- The control group produced higher quality with consistently fewer tests.
- Quality was better correlated to number of tests for the TDD group (an interesting point of differentiation that I’m not sure the authors caught).
- The control group’s productivity was highly predictable as a function of number of tests and had a stronger correlation than the TDD group.

Reply

Mauricio Aniche September 27, 2011 at 8:08 am

I have studied, practiced, and researched TDD in academia for a long time. I can say that I am a TDD evangelist. I do believe that TDD makes such a difference in my development environment.

However, you can't say "always" or "never" in any software engineering context. If someone still thinks that TDD, or any other practice, should be done 100% of the time, s/he is wrong.

TDD is a tool like many others that we have. You should use it whenever you need it. It is up to the developer to identify the moments when he needs to use TDD and the moments when he does not. This is what I expect from an experienced developer.

Reply

Dawid Loubser September 28, 2011 at 1:58 am

Quote (Mauricio Aniche):
> However, you can't "Always" and "Never" in
> any software engineering context. If someone
> still thinks that TDD, or any other practice
> should be done 100% of the time, s/he is wrong.

Of course you can! One can logically argue that Test-Driven Development will *always* produce higher-quality output. Of course, higher-quality means slower, more expensive, and requires a stronger class of developers.

One should thus not say "one must always follow test-driven development" just like one should not say "one must always pursue the highest quality".

But if the decision is indeed made to pursue quality – and with very complex software projects and small teams, this is a good idea – nobody has yet presented an argument that test-driven development will not always result in higher quality.

Reply

Jeremy Stark July 9, 2012 at 6:14 pm

I’m late to the game here but wanted to comment on this post, having read your previous post as well.

I emphatically disagree with your statement:

“Of course, higher-quality means slower, more expensive, and requires a stronger class of developers.”

Higher quality code means exactly the opposite of this statement. Sloppy code is easy to hack at but hard to get working with any degree of certainty. Quality code is so easily understood, so clear in its purpose and fitness for release, that extending it is quick and cheap. Quality code tends to lend itself to new and exciting feature development in ways that poor code makes impossible.

My mantra to software teams is that “you learn what you do”. What a team chooses to do can either serve as a building block to higher performance (better code faster) or lead to stagnation or worse… toiling. The insidious thing is that from many (ignorant) perspectives high performance, stagnation, and toiling can all look and feel the same. They all require the same amount of effort. The difference is in the return for the effort that each provides. Until a team has experienced high performance they are unable to assess their current state accurately. But once things “click” there is no going back without a fight.

A team that is producing poor quality code will likely feel overworked, unclear as to requirements and exhibit cynical and defensive behavior.

Note to Dawid Loubser, I’m being facetious. I agree with your perspective on testing. :) Though I do disagree with quality = slow.

Reply

P@blo September 28, 2011 at 7:10 am

Great article! The article's point is quite clear IMHO, but the way it points out what we are doing wrong is a bit harsh.

I can't agree more with the purpose of automated testing: that of helping you to develop good software. There is no benefit in following a cookbook recipe in the wrong context.

Also, as Hanley, chadastrophic and Magly say, you should always strike a balance between the benefits it provides and its costs: building a web app, for example, is not the same as building software for a pacemaker.

Reply

reality-analyst October 1, 2011 at 10:45 am

Dawid Loubser,

There are plenty of counterarguments possible (and necessary).

First, there are many ways of writing correct code and ensuring that already-written code is correct. Only a fraction of those involve unit tests. Claiming that TDD is the best practice without comparing it against (or even knowing about) other practices is pure arrogance.

Second, it is perfectly possible to write horrible code while having 100% unit test coverage. Your code can be unreadable, hard to navigate and overly complex, for example. Another common illness of TDD practitioners is the tendency to sweep bugs under the carpet, moving them to config files and the database. Yes, yes, your code is perfect and super-configurable, but if your application fails to reliably work in real life, I don't really care.

Needless to say, these are not theoretical issues. I'm speaking from experience.

Finally, it's absurd to pretend that "real" engineers always deliver or even want to deliver 100% correct code. If your hardware fails in 1% of all transactions, is it reasonable to attempt to fix some software issue that affects 0.01% of all transactions? What's the cost of fixing hardware? What's the costs of fixing software? What's the cost of failures? That's the kind of reasoning I would expect from a real engineer.

Reply

Tom Dane July 30, 2013 at 9:46 am

That dude is not a real engineer, he is just a fanatic nuthugger.

Reply

Eternally Lost October 3, 2011 at 8:11 am

Frankly, most programmers are scared of writing tests, and unfortunately TDD many times has been presented as a rigorous discipline or worse, dogma. This just scares them off even more. While I'm actually an advocate of pragmatic TDD (or even TFD where it makes sense), the real value in TDD is the rapid feedback you get. It reduces the mental load on a developer (and in my professional opinion can produce better designs). For a moment, don't think of the unit test as a test. Instead, think of it as a "mini prototyping environment" where you can quickly write a piece of code, immediately execute it, and see the results. If viewed this way, pragmatic TDD can become a very liberating exercise as you feel less constrained and more willing to explore.

Reply

Timmy March 5, 2012 at 1:38 am

Eternally Lost said it well when he/she said to think of it like a mini prototyping environment. I remember back when other devs and I would write up little console apps to test something out, but writing it as a test is the way to go.

Where it gets hard is when you have to mock! It can get very painful, and I have seen first-hand from a top TDD guy that it may show green in the test, but once it's in production you can still get bad results!

Reply

Scott May 11, 2012 at 6:55 pm

I don’t think this is an argument against TDD at all, nor is it specific to TDD. Cargo-culting is bad. When using any tool/methodology, understand why you are using it first. Never believe anyone who speaks of the “one true way” in any context. Problem solved.

Now, use TDD (judiciously) to write better code faster.

Reply

Aaron B. July 9, 2012 at 11:56 pm

I’m wondering if you ever wrote that article about testing large web apps, because I’ve been thinking about how to do that myself. I can understand how TDD, or unit testing in general, works well with modules and functions, but I’m having a hard time seeing how to do it with a web site. When a programming change breaks a web page, it’s often not that any function failed, but that a tag change screwed up the way the divs lay out or something like that. It seems like it would take awfully complex automated tests to ensure that a web page “works” with enough confidence to make manual checking unnecessary. Or is that not the kind of functionality you’re talking about testing? Thanks.

Reply

Terra August 13, 2012 at 4:36 pm

Through experience I have found that TDD seems to have turned people’s heads away from the wider design scope, and code normalization has pretty much evaporated. The devs practising it religiously and waving it like a flag to “prove” they are the best engineers also seem to produce massively less software, and the quality seems to be no better (if anything, bug lists seem to be getting longer, as devs rely on green lights to tell them their software is working rather than running it). Production bugs are becoming more common as people rely on tests to tell them everything is ok and are given false positives.

I believe it has become a crutch for weak developers, where they cannot be blamed for their poor code as long as all the lights are green. Using it as a mark of the quality of a developer is ridiculous in and of itself; I have met both good and bad developers working under TDD, and they were just as good or bad before using it. I think there seems to be a miscomprehension in the latest batch of developers about what acceptable quality is. Some of us have been building complex enterprise systems for over 13 years and have never had a bug list that stretched to more than 20 or so items, and the software has done precisely what it was created to do. Developers used to be more like surgeons: you didn’t make mistakes, and you didn’t need thousands of tests, because the only faults were usually anomalies (probably the same kind of problems automated tests seem to miss). Now it seems you just keep bumbling forward randomly until you get the right results.

Sorry for all those people who think TDD makes them elite, but it certainly gives the rest of us a good laugh along the way. Trends suggest software costs and failure rates have risen since Agile, Scrum and TDD became more prevalent. Which doesn’t particularly surprise me. Try getting it right first time a bit more often and you won’t have to rely on gimmicks to keep your egos inflated.

Reply

kevinmisc August 22, 2012 at 2:30 pm

I think the man practicing the methodology is much more important than the methodology itself.

Reply

Julian September 17, 2012 at 9:39 am

I still don’t get how TDD can work… how do you develop your tests before you develop your tests? it is a lame concept for lame programmers who want to pull the wool over the eyes of non-technical types.

Reply

Jeremy Stark September 17, 2012 at 2:17 pm

Julian,

I’m assuming you meant to say “how do you develop your tests before you develop your _code_.” Your current statement doesn’t make any sense.

But the way you develop a test before you write code is to know what it is you want the code to do and then test for that condition. You then satisfy the condition so that the test passes. If you have ever written unit tests _after coding_ it should be a short leap to understanding how TDD works.

Without TDD you write your expectations after you code. If you think about it this is quite backwards. It is subject to all the follies of the human tendency to rationalize decisions and behavior. With TDD you write your expectations first. This focuses development on solving an incremental set of problems while shielding the developer against regressions (as you have a battery of tests relevant to your current efforts).
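That order of work can be shown in a minimal sketch (the `parse_version` example here is hypothetical, purely to illustrate the red/green sequence):

```python
# Step 1 (red): the expectation is written down first, as an executable test.
def test_parse_version():
    assert parse_version("1.4.2") == (1, 4, 2)

# Step 2 (green): just enough code is then written to satisfy the expectation.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

test_parse_version()
```

The test existed before the code, so it records the intent rather than rationalizing whatever the code happens to do.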

Reply

Julian September 17, 2012 at 10:24 pm

nope, I meant tests, not code; of course you can change “develop” to “code” twice if you like… assuming you are going to code your tests (i.e. develop them), how do you test before you develop? Of course, I am assuming, as per all the examples, that the tests were “coded / developed”, as opposed to using a generic, 100% code-free testing tool that is very restrictive and dedicated to a specific language / environment.

Reply

Tom R September 19, 2012 at 9:57 am

All of this debate about TDD (and about Agile, OO vs Procedural code etc. etc.) diverts time and attention from the three things that really matter, and make the difference between a skillful developer and a so-so developer:

1. Understanding the problem domain
2. Understanding your programming tools
3. Caring about doing a good job

It is lunacy to imagine that if the “methodology” and development process are right they will somehow guarantee that mediocre developers produce quality software.

Reply

shalomshachne November 9, 2012 at 7:36 pm

“Making tests a central part of the process because they’re useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.”

I think the point of Test Driven Development is entirely that it’s useful to developers. If you’re arguing about whether writing tests after coding is just as good as writing before, then I would agree, that I wouldn’t really care, *as long as the tests get written*.

In practice, what I have seen with people who write the tests after, is that they often don’t do it, and after means never. And frequently they check in code with flawed code paths (that were never tested), and then throw it over the wall to QA and hope that someone doing manual testing will catch problems with the code.

Who determines whether tests help the developer or not? Developers who have difficulty writing tests (or getting into the Test First mindset) will argue that the tests don’t help. And when they check in code with errors that a unit test would have caught, they treat it as an unavoidable error. How many times have you seen someone accidentally switch the meaning of a boolean flag, and get the exact opposite result they wanted? I’ve seen it dozens of times. A unit test written first would have caught that problem.
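A hypothetical sketch of that flipped-flag scenario (the `send_email` function and its flag are invented for illustration); a unit test written first pins the intended meaning down:

```python
def send_email(user, suppress_notifications=False):
    """Returns True if an email was sent. The buggy "tidy-up" read
    `if not suppress_notifications: return False`, silently inverting the flag."""
    if suppress_notifications:
        return False   # user opted out: nothing is sent
    return True        # email goes out

def test_opted_out_user_gets_no_email():
    # This test fails immediately against the inverted version.
    assert send_email("alice", suppress_notifications=True) is False

def test_normal_user_gets_email():
    assert send_email("alice") is True

test_opted_out_user_gets_no_email()
test_normal_user_gets_email()
```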

Testing is engineering – creating structures that guarantee the code does specific things. But as the dictum goes, “You only need to test the things you care about working”.

Reply

Mahler November 19, 2012 at 4:22 am

Here is what some of today’s job descriptions really mean:

- “Agile Project Manager”: “I really suck at planning. I can only plan one week ahead. I also suck at managing my client’s needs and guiding them through their expectations, and most importantly, making sure they stand on the same requirements throughout the duration of the project. So to justify my lack of professionalism, I call myself ‘agile’, and blame any failure on this technique.”

- “TDD fanatic Developer”: “I suck at analyzing the impact of my code changes. I cannot see the whole picture of the system, nor assess what areas QA should test when I make a code change. I am too narrow-minded to see the system as a whole. If my code changes end up breaking the system, I just protect myself by saying that ‘All tests passed’. I surely never read General Systems Theory, which states that a system is more than its parts. Hence, a collection of tests will never encompass the whole system.”

Reply

Adrian O'Sullivan December 6, 2012 at 9:41 pm

I think the negative comments made here that always testing first is lunacy, ridiculous, fanatical etc. must have been made by people who have not really understood test first.

Until you execute your code, large measures of foresight, reasoning, knowledge, luck and guesswork will be needed to determine how it will behave. Test first eliminates the need to predict anything by making the development process 100% empirical, because you are *constantly* running the code.

Reply

Adrien December 10, 2012 at 2:52 am

We have a large code-base from the last 17 years or so. It was never written for unit testing, or TDD. Recently one of the new devs has taken it upon himself to roll out TDD into the code, so any module he touches he is re-factoring to get testing in there.

The re-writing is costly, and introduces bugs, and means other devs have to re-learn the new code base. It is also really slow!

There is now a lot of extra code to maintain:

* The test code itself
* the extra code required to make the real code testable

The number of extra interfaces, classes and function calls means call stacks are longer. Code is a lot harder to figure out. One single member function of 20 lines got turned into 3 classes spread over as many files. This makes it a lot harder to get a picture of what the code is doing.

Also since our program is multi-threaded, we are getting more deadlock issues, as separation of key objects out into individually-testable objects has introduced more locking objects, and locking complexity.

I see all these comments about TDD, and everyone talks about proving the result of some function, but we don’t just write functions that take X and Y and return X+Y. Also, does your function return the correct value when hit 1000 times per second from 200 threads, whilst some other function is also being hammered?

So I wonder if it’s the concept, or the TDD tools (cppunit), that is the problem, since it doesn’t really even attempt to address issues such as multi-threading, deadlocking, heap corruption etc., which historically have been our biggest issues.

The dev said it gives peace of mind, but actually I find it gives me the opposite.

Is this just a bad implementation of TDD? Or a typical story? Any pointers?

Reply

shalomshachne December 10, 2012 at 2:19 pm

Hi Adrien,

There are a few different issues raised by your post. The first is how to introduce unit tests into a legacy code base. The principle I have always followed (which I got from the original Extreme Programming principles somewhere) is that we only write unit tests when we are fixing bugs in the code. That way, whatever risk is introduced by making code changes is balanced by the fact that there is a business reason to change the code anyway. This also has some additional benefits: it minimizes the extra development cost added by the unit tests, and also minimizes the amount of code which needs to be tested after the fix is deployed. I think I also read (wherever I saw that principle) that it is usually small sections of the code that are frequently fixed and revisited. Other sections which are working fine do not need to be touched. I would not advise modifying a legacy code base *only* for the purpose of adding unit tests.

I have direct experience taking a big, bad, legacy code base (Java – tens of thousands of lines of code), and rebuilding it with unit tests. As mentioned above, I only changed code when adding new features, or fixing bugs, but I made sure that any change I made had unit tests, AND I also wrote unit tests to make sure existing functions of the same code worked (before I added the new features/fixes). I can tell you we had 2 very direct benefits – lots of spaghetti code was gradually refactored into neater, better designed units, and the quality of releases (as measured by number of bugs in production) improved greatly. Because we now have a large body of unit tests, we have the courage to make major changes to the code, without worrying (nearly as much) about causing major production outages (which happened frequently before).

So that brings up a second point of your post: the refactoring and coding with tests introducing bugs. Unit tests require just as much thought as the production coding itself. If you write tests for the wrong things, or an incomplete set of tests, then your code will be no better (and maybe worse) than it was before. The only guarantee you get with unit tests is proof that the production code does what the unit test checks for, no more and no less.

The last point I’ll address is that you said your main problems are multi-threading, deadlocking, heap corruption. If you can replicate any of these failures by writing a test, you can be pretty confident that once you get the test to pass, you’ve fixed that problem (or at a minimum, that instance of the problem). At least one benefit you get by adding a unit test for this, is that if someone else later makes some innocent change to that block of code, and reintroduces the deadlock or synchronization problem, you will have a visible test failure to let you know that. In the Java code base, I have definitely been able to use unit tests to replicate production thread race and deadlock conditions, and use those tests as an indicator to show when threading issues were fixed. (And also I saw those tests fail when some other change was made to the code, which broke the original fix).
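As an illustration of that kind of test (a hypothetical sketch in Python rather than the Java mentioned above; the `Counter` class is invented for the example), many threads hammer a shared object and the assertion makes any lost update visible:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()  # deleting this lock reintroduces the race

    def increment(self):
        with self._lock:
            self.value += 1

def test_concurrent_increments_are_not_lost():
    counter = Counter()

    def hammer():
        for _ in range(10_000):
            counter.increment()

    threads = [threading.Thread(target=hammer) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Without the lock this intermittently fails with a value below 80000,
    # so the test also fails again if someone later removes the fix.
    assert counter.value == 80_000

test_concurrent_increments_are_not_lost()
```

Such a test is probabilistic against the broken version (it may need several runs to expose the race), but it is deterministic once the fix is in place.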

To summarize, writing unit tests does not allow you to shut off your business sense, or common sense. But they do give you a safety net for maintaining a code base for a long period of time.

Reply

shalomshachne December 10, 2012 at 2:21 pm

Clarification of post above:
we only write unit tests FOR LEGACY CODE, when we are fixing bugs in the code

is what I meant to say. For new code, we always write test while writing the new code.

Reply

Adrien December 10, 2012 at 10:12 pm

Hi

thanks for your insight. In terms of testing for deadlock, heap corruption things etc, we are often talking about events that can be time-related / race conditions, so typically we’ll set up a bunch of machines to bash the @^$% out of our software for long enough for us to feel comfortable about it. It can take several days before we gain a level of comfort.

That sort of testing really doesn’t fit into a unit-testing or even CI / automated (in terms of break build if test fails) model.

Also, how do you unit test a void function? There’s no return value. You’d have to somehow inspect what went on inside to see if it’s correct, and I’ve seen code added that provides dangerous access to things that shouldn’t be accessed, just to enable a test to be written; the requirement to test it prompted the problem. Those accessors can then get used by others outside of test code and create problems.

I guess we must always be judicious about what we test, but then you get people saying things like “if it can’t be tested it has no value”. So there’s a lot of pressure to make absolutely _everything_ testable.

Actually many of our bugs came about due to non-threadsafe reference-counted copying of std::string in msvc 6. Or missing locks around shared containers (which includes strings).

I’m basically still in a research phase myself about the overall merit of TDD for our code-base: whether we want to proceed with it, and/or to what level. It does have a cost, and the benefit is difficult to predict, even from experience of the historic costs of tracking and dealing with past issues that would have been caught by the unit tests we currently see as practicable. I don’t want to say no to it, because I understand there are benefits. I am concerned, though, about what it seems to do to design (over-abstraction and increased line count and complexity). This can create problems when maintaining the code if it becomes less readable / understandable.

A lot of code becomes decoupled where it logically is coupled (e.g. single classes that do several things with the same data get split out), and glue classes or action classes created to bridge the gap we put in.

Surely there’s a way to do it that doesn’t create these other issues?

Reply

Terra December 11, 2012 at 2:12 am

There are other paradigms that don’t rely on injection and decoupling to achieve good overall automated test coverage. Testing at all the inputs and outputs of a closed system can work in some situations, or breaking all code out into static input/output operations that are intrinsically testable and relying on integration testing to cover the framework around those operations. Let’s not forget the goal is the quality of the software; if that is compromised by making the code testable, it doesn’t matter whether the tests pass or fail, the software is already substandard. Ignore the hype: automated testing is useful, but not at the expense of the software’s design or maintainability. If you can achieve both, perfect, but the reality seems to be that solutions suffer heavily from being designed for test rather than for function.

TDD test-first (the purists’ view) really does not stack up commercially. Even in a single iteration, things can change enough that you end up writing the tests 4-5 times. Despite what some people suggest, this does eat time, and most departments are running on a budget and deadlines. If you have a perfect specification, with perfect use cases, then maybe it will work, but let’s face it, that doesn’t happen very often.

The same thing applies now that has always applied: think through what you are doing and pick the best approach. Anyone who starts spouting “you must always do it this way” has flawed judgement. I would also say beware of what you read online; it can give a distorted view of reality. There always seems to be a bunch of people more than willing to regurgitate what they have read elsewhere, fashionistas of programming, all flash and no function. Don’t be surprised to find that most of them are students, hobbyists, or juniors who have never delivered an entire commercial project in their lives.

So in conclusion, don’t tear your solution to pieces to make it testable; quality will probably suffer rather than improve. Where it is not intrusive, test the high-risk areas and test exposed interfaces for compatibility. Don’t go testing every line of code; you’ll blow your budget and end up rushing and making mistakes elsewhere to make up for writing thousands of tests that basically just prove that if statements still work. Tests can also be wrong or incomplete; they support manual testing, they don’t replace it.
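A hypothetical sketch of those “intrinsically testable” input/output operations (the pricing rule here is invented for illustration): because the logic is a pure function of its arguments, it can be tested with no injection or decoupling at all:

```python
def apply_discount(price_cents, loyalty_years):
    """Pure pricing rule: 1% off per loyalty year, capped at 20%."""
    rate = min(loyalty_years, 20) / 100
    return round(price_cents * (1 - rate))

def test_apply_discount():
    assert apply_discount(10_000, 0) == 10_000
    assert apply_discount(10_000, 5) == 9_500
    assert apply_discount(10_000, 50) == 8_000  # discount capped at 20%

test_apply_discount()
```

The surrounding framework that feeds such functions their inputs is then left to integration testing, as described above.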

Reply

Peter Herz February 1, 2013 at 3:25 pm

While I find TDD strange and cumbersome compared to how I’m used to developing and testing myself, since I’m more of a code cowboy than a team coder, I can see how TDD or BDD can help teams or new developers quickly locate problems, instead of applications just existing as black boxes to everyone except their authors, or those patient enough to study the design patterns, flaws, etc. quickly enough not to cause headaches. I see it as: sooner or later, someone (you or an unfamiliar team member) is going to have a headache interpreting requirements, so why not impose it on the initial developer(s), who will need to re-articulate and make explicit the contract their code fulfills, not just silently implement it.

Sam April 21, 2013 at 8:07 pm

One more benefit to add, which might also be useful to you as a "code cowboy": the regression testing that the test suite gives you when you go back to modify your code in significant ways. Instead of having to retest multiple scenarios by hand after you've modified code that has been in production for a year, you can just hit the "Run" button and, if it shows green, be confident that your change didn't break something that was previously working.

I’ve coded both ways, TDD and non-TDD. I find that if I don’t use TDD, then I end up manually testing everything anyway (and often in the debugger, so I can make sure I’m hitting the branches of code that I expect). With TDD, I know exactly what code I’m hitting. It is more tedious and slow to develop this way, but the accuracy is very high.
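[Editor's note: the "hit Run and look for green" workflow above can be sketched with Python's built-in unittest; the production function and all names here are hypothetical.]

```python
import unittest

# Hypothetical year-old production code: normalises a customer
# reference such as "ab-123" to the canonical "AB123" form.
def normalise_ref(ref: str) -> str:
    return ref.replace("-", "").upper()

class RefRegressionTest(unittest.TestCase):
    # Written once, then re-run after every significant change --
    # green means the old behaviour still holds.
    def test_strips_hyphens(self):
        self.assertEqual(normalise_ref("ab-123"), "AB123")

    def test_leaves_clean_refs_alone(self):
        self.assertEqual(normalise_ref("XY999"), "XY999")

if __name__ == "__main__":
    unittest.main(argv=["regression"], exit=False)
```

If a later change to `normalise_ref` breaks the hyphen handling, the suite goes red immediately, instead of the bug surfacing during manual re-testing.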

Piers Powlesland April 19, 2013 at 4:49 pm

“Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down in to tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The former is the right approach in a small number of situations – tests around bugs, or small, very well-defined pieces of functionality).”

I think you’ve missed the main point of how to do TDD. Rather than “zooming in”, which you don’t want to do since you don’t really know what the requirements are at that level, let your tests drive the interface design for collaborators, which at this point will just be mock objects. In this way you can write those top-level tests without needing to have most of your software implemented, and then move down a level and write tests for the mocked interfaces, and so on. Each interface you design is then in direct response to the requests made from the level above.
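[Editor's note: this outside-in approach can be sketched with Python's unittest.mock; all class and method names below are hypothetical. The top-level test is written first, and the calls expected of the mock become the interface the next level down must implement.]

```python
import unittest
from unittest.mock import Mock

# Top-level object under test. Its collaborator (a repository) does
# not exist yet; the test below decides what interface it must have.
class ReportService:
    def __init__(self, repository):
        self.repository = repository

    def summary(self, user_id):
        orders = self.repository.orders_for(user_id)
        return {"user": user_id, "order_count": len(orders)}

class ReportServiceTest(unittest.TestCase):
    def test_summary_counts_orders(self):
        repo = Mock()
        repo.orders_for.return_value = ["order-1", "order-2"]
        service = ReportService(repo)

        self.assertEqual(service.summary(42)["order_count"], 2)
        # This assertion *is* the design decision: any concrete
        # repository must now expose orders_for(user_id).
        repo.orders_for.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main(argv=["mocks"], exit=False)
```

The next iteration moves down a level: tests for a concrete repository that satisfies the `orders_for` contract, mocking whatever *it* collaborates with.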

I can't really see any problems with the methodology I just outlined, but I'd be interested to hear what others think.

P

John Pasko May 8, 2013 at 6:45 pm

On a large project that spans several international teams, we have found TDD to be a solid practice. We approach it not from a “build a test case” perspective, but more from a requirements-mapping need: take a requirement/task from the backlog and write a test or tests to satisfy that requirement. The code that then passes that test has satisfied that requirement. Our coverage isn’t 100% (closer to 85-90%), but at least we don’t run into a test-later or test-never situation. All our products have to pass FDA scrutiny, and TDD has helped along that road. We cover the code, and SQA covers the BDD and acceptance testing. None of us are drones or cult members, but many of us see the wisdom in “test-first” approaches.

Diego May 14, 2013 at 1:43 pm

I don’t think TDD is good: we are already compelled to learn so many things for our development that when we start designing, we should be able to go through the process without the need for unit testing. Let’s remember that when we paint or write as artists (if we are such), we do the right thing as if it were poured from genius, then we adjust; but the flow must not be interrupted by ASSERT IS TRUE… ? No thanks. I like pair programming instead, with the right partner.
So thanks for your lines, Author!! I don’t feel so alone now.

Palaniappan Rajaram August 5, 2013 at 8:38 pm

TDD is something that I only recently heard about, touted by the new CTO at a place where I used to work. Before I looked up the definition, I thought that it was a good way to develop software. Later, when I really started looking into it, the following became my thoughts on the matter:

- unless TDD is just unit testing, the developer should not be writing the test cases
- from the requirements document, there should be two parallel tracks:
1. the developer starts writing the code to address the requirements
2. QA starts writing the test cases for ALL of the requirements, keeping as much of it automated as possible
- each version of the code should be subjected to the entire test suite
- at first, 99% of the test cases will fail, and as the development effort matures, the number of failed test cases will tend towards zero

Testing is an offensive operation and coding is a defensive one. This is the reason why QA did not, traditionally, report to the project manager: there is a conflict of interest. QA’s objective is to break the system as much as possible, coming at the software with a sledgehammer. I don’t see how a developer writing his/her own test cases can be that aggressive.

Probably the mistake I made was to think that TDD was just developing ALL the test cases up front and using them to periodically and frequently check the entire software for completeness.

Madhuri August 6, 2013 at 4:29 am

Here because I googled “I hate test driven development”, I wanted to check if I was crazy, not sure yet …

Thanks for the voice of reason! The only places I see TDD being applicable:

1. Large teams integrating code of different qualities and trying to make it work (trust is lower)
2. Product development, where a break fix has implications across the user base (stakes are higher)
3. Small enhancement to a large messy ball of mud (CYA)

All other times, let's try succeeding before we fail, please?

Coran August 9, 2013 at 2:17 pm

Why does this come up fourth in my Google results… GoogleFail.

This is nothing more than a subjective, personal, and highly emotive rant, with no examples or logic to back it up. Using brief and pointless phrases like “fail” shows your unprofessionalism.
You’ve clearly been lectured by some TDD nut and taken it personally, as though he were making out that you’re doing things “wrong”, and now you have a huge chip on your shoulder.

Jeremy Stark November 24, 2013 at 1:30 pm

Madhuri,

“Seek and ye shall find.” We form our assumptions and biases and then proceed to go forth and “discover” those things that confirm them. To argue against TDD you must either argue against automated tests altogether or argue _for_ their proper and most beneficial usage. Anyone who is both for testing and against TDD does our trade a huge disservice when they do not present their own testing practices for edification and review.

Jonathan Neufeld January 22, 2014 at 12:22 am

TDD is predicated on the notion that consistent adherence to a piece of software’s contract throughout maintenance and revision will lead to more efficient, higher-quality software. However, what TDD adherents seem to ignore, or be unaware of, is the fleetingly short lifetime of a typical software application in the industry. One year is ancient; even if an application performs perfectly, unless it is overhauled or completely replaced with a brand-new implementation, users will simply move on to something else, because of this frustrating quality that users have: a never-ending lust for the new and the shiny. Therefore, investing two months to write non-value-add tests for a product that has one year to live can hardly be considered efficient or a good investment.

Coran February 25, 2014 at 2:31 pm

One year? That seems like a very broad generalisation, and highly dependent on the nature of the product.
I don’t know what industry you’ve been in, but the last four companies I worked for had codebases 5-10+ years old. All of them are successful companies, ticking along just fine.

Jerry Destremps February 6, 2014 at 10:31 pm

In his book Extreme Programming Refactored: The Case Against XP, the author explains where this whole XP nonsense came from, and TDD was part and parcel of that. The fact is, XP came from a gigantic failure of a payroll project. Period, end of story. So if they failed completely at that, what makes people want to copy their methodology?

http://www.amazon.com/Extreme-Programming-Refactored-Against-Experts-ebook/dp/B004O6LQ5O/ref=sr_1_1?ie=UTF8&qid=1391725766&sr=8-1&keywords=extreme+programming+refactored

Coran February 25, 2014 at 2:40 pm

I think your comments need some clarification.

The project had been going for three years before Kent Beck joined.
They almost hit the one-year deliverable afterwards.
However, a key staff member quit shortly after and could not be replaced, due to burnout/stress.
After initially banning XP, DaimlerChrysler has since resumed its use.

http://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compensation_System

And lastly, XP doesn’t guarantee success; it just increases the odds.

Alp February 18, 2014 at 5:13 pm

Most of the people arguing here forget that the poster claims test-driven development (tests driving the design) is bad, not testing itself. I agree with him that tests cannot produce optimal architectural designs; however, they may create classes that are clean and concise. You can’t expect designs driven by tests to produce software architectures that are easy to maintain against changing requirements over time. That’s why software architects exist and have a different skillset from developers regarding how to create good architectures. Maintainable software is the product of the collaboration of good software architects and developers, and it requires effort for each project, because each project can be very different in terms of requirements.

So don’t expect to get great designs just by writing and analyzing tests, because that is impossible. However, TDD is good for refining the interface of a class, and the tests created form a contract for the class that is always checkable.

Debasish Bose February 20, 2014 at 1:14 am

Good point. The whole argument about superior engineers and toy software is a bit extreme. Just like people, software products and projects don’t have a one-size-fits-all solution.

Often it helps to quickly come up with something about which my understanding is not yet crystallised (this is the “exploratory” phase) and then iteratively polish it to a point where specifications or requirements can be asserted against (the “craft” phase).

Every developer as an individual, and every product start-up initially, faces this “exploratory” phase, because of inadequate understanding of the requirements or the market, respectively. TDD is a bottleneck in this “exploratory” phase, where creativity has to flourish and uncertainty has to be resolved by research. Once the “craft” phase kicks in, TDD is an excellent fit and can be a life-saver.

For agencies and service companies at large, there is some certainty that the entire business won’t just pivot. This certainty helps the “craft” phase kick in early and makes TDD effective. In start-ups or product ventures where there is business uncertainty, there is no point in building a Cucumber/RSpec castle. Similarly, think about developing an iOS app for a product start-up that takes a snap of a text recipe and figures out its nutrition content. Does TDD make sense here? Nope. We need the “exploratory” phase first.

Like most of the things in life, TDD is good or bad depending on a specific scenario.

Julian February 25, 2014 at 4:32 pm

The issue I have with TDD is the name. If you have to develop the tests, you cannot test the development of the tests before you develop the tests. There is nothing to suggest that development of the tests is not what is meant by the word “development” within the name TDD.

Tom R February 26, 2014 at 10:48 am

I first learned to program in 1974 and have worked in software development since 1979. There have been many fads and methodologies in that time. Most of them are promoted most vigorously by people who are NOT expert developers! I find it extremely irritating to be told how to structure my work by a manager who loves TDD or an Agile consultant, neither of whom could write a working 10-line shell script to save their lives.

Almost all of these great methodologies have faded away or been discredited. TDD (and Agile) are just two of the latest fads, and they will be forgotten in due course. It is naive to think that there is any "silver bullet", and it is significant that it is the youngest members of the profession who are most enthusiastic about the ability of the "right" methodology to fix all their problems.

New application areas open up all the time. Modern tools and languages (NOT methodologies), along with amazingly powerful hardware, relieve us of some of the more mundane parts of the work, and speed everything up. But fundamentals of good software development do not change.

Of course testing is necessary. But APPROPRIATE testing. And that depends on the application, the languages, the people, the timescales, the level of reliability required and so on.

[Do not even get me started on the infinite regress problem. How do you test the tests? And please do not tell me that tests can be generated automatically. Yes, they can. But they test only the most trivial aspects of the code, which a good programmer simply gets right. They do not root out real-life bugs, which often arise from unexpected interactions between components, or unintended consequences of the basic design. Those automatically generated tests are the equivalent of a good writer checking the spelling of every word and the construction of each sentence: unnecessary and time-wasting.]

Of course it is not always necessary to have full detailed documentation or a comprehensive design before starting development. Of course the customer should be heavily involved if that is possible and feasible. We do not need a new methodology to tell us to do what we have always done.

Of course the statements of the Agile manifesto are true. What is more, many organisations were in reality MORE agile, before they were lumbered with all the paraphernalia of SCRUM, Sprints, Stand-ups, and all the rest.

zhangyang March 28, 2014 at 7:11 am

I am sorry, my English is not very good, so I do not understand you all.

I think TDD is useful in a very small project. But in a large-scale project, good design is better.

Tom R April 11, 2014 at 11:24 am

I think software quality would improve if some of these evangelical types would put their energy into learning to design and code well, rather than proselytizing for whatever latest “universal best method” has captured their imaginations.

TDD is not even possible if taken to its ultimate conclusion. You cannot possibly, ever, test every conceivable condition exhaustively. You have to be selective. You have to make a judgement call about which test cases you design and automate.

Hey! Surprise, surprise! That is what we have been doing all along!
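[Editor's note: that judgement call is the long-standing practice of boundary-value selection. A minimal sketch in Python's unittest, with a stand-in leap-year function: rather than exhaustive testing, a handful of cases are chosen where the rules interact.]

```python
import unittest

def is_leap(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class SelectedLeapYearTests(unittest.TestCase):
    # We cannot test every year, so we select the boundaries where
    # the three rules interact -- a judgement call, not blanket coverage.
    def test_chosen_boundaries(self):
        self.assertTrue(is_leap(2000))   # century divisible by 400
        self.assertFalse(is_leap(1900))  # century not divisible by 400
        self.assertTrue(is_leap(1996))   # ordinary multiple of 4
        self.assertFalse(is_leap(1999))  # not a multiple of 4

if __name__ == "__main__":
    unittest.main(argv=["leap"], exit=False)
```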

Coran April 14, 2014 at 8:37 am

I think people need to be clear about whether they are talking about people they have met who practice and preach TDD, or about TDD as a methodology itself. 90% of the comments here don’t refer to TDD itself, but to some person’s personal and mistaken interpretation, which you had the misfortune of meeting and hearing.
TDD does not say “you must use it at all times!”; when and how you apply it is up to the user and their own intelligence.

http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html
