One of the nice things about "best practices" in software development is that there are so many to choose from. One of the less nice repercussions is that choosing between them means exposing yourself to what could charitably be called a fashion-driven, buzzword-slinging mudpit of logical fallacies and hasty generalizations. Welcome to the Internet.
Pete Sergeant posted a rant about TDD that's true in places, but goes too far in others. He asked me if I believe that "all code needs its tests written first, always".
That's a nope.
I'm working with a couple of friends on a personal finance site, and I wrote a tiny little program to repopulate the pool of invitation codes. It's a loop around a SQL query managed by DBIx::Class and some business objects already written and tested. It's glue code. I didn't write tests for it, because those tests would have no value. (Honestly, if DBIC's create method doesn't work, I have bigger problems than my nine-line program.)
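For the curious, the shape of that program is roughly this. This is a minimal sketch, not the real code; the schema class, the Invitation result set, and the make_invitation_code() helper are hypothetical stand-ins:

```perl
#!/usr/bin/perl
# repopulate the pool of unused invitation codes
# (illustrative sketch; MyApp::Schema, the Invitation result set, and
# make_invitation_code() are hypothetical stand-ins for the real pieces)
use Modern::Perl;

use MyApp::Schema;
use MyApp::Util 'make_invitation_code';

my $invites = MyApp::Schema->connect( 'dbi:Pg:dbname=myapp' )
                           ->resultset( 'Invitation' );

# top up the pool so there are always 100 unclaimed codes available
my $needed = 100 - $invites->search({ claimed => 0 })->count;

$invites->create({ code => make_invitation_code() }) for 1 .. $needed;
```

If DBIC's create method breaks, no unit test of this loop will save me.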
Pete's question is bigger than "Hey, do you ever write trivial programs?" Perhaps a better phrasing is "What value do you get out of TDD?", as that allows for nuance and lets you decide when and where TDD makes sense as a design technique.
Design? That's the primary value I get out of TDD. When I have a feature I want to add or a bug I want to fix, I can express that in a sentence or two, such as "When users mistype their passwords, link to the password reset page with their email address as a query param. Populate the password reset input field with that address." While this is primarily a UI/UX change, it implies a couple of changes in the controller as well.
The most obvious place to test this behavior is in the web interface tests. I can write a couple of lines of code which express the right behavior on the login failure side and a couple of lines of code which express the right behavior on the password reset side.
As I wrote the previous paragraph, I realized that that use case is actually two features, and they're somewhat decoupled. The shared interface between the two is the generated URL with the user's email address as a query parameter. If I can assume that both sides agree on using the URL in the same way, I can test and implement this feature in two steps.
That's good; it means I can work in smaller units of work. It also means my tests can be smaller and more encapsulated.
When writing tests for the login failure portion, I might stop to think about edge cases. What if the user has never registered? What if the user has mistyped her email address? What if the user hasn't verified her account?
I might decide to ignore those possibilities for security purposes (why give an attacker any extra information?), but I find that the act of writing tests for very small pieces of behavior helps me to consider my invariants in a way that writing my code first and testing later doesn't.
With a simple test written (request login with an invalid email address), I expect a failure. It happens. It's easy enough to update the template with the appropriate information and run the test again. It should pass. If not, I have a tiny little change to debug. If it does, I can move on. (What if the password is wrong? That's a new test, but the same behavior should work. Repeat.)
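Here's the kind of test I have in mind. This is a hedged sketch assuming Test::WWW::Mechanize against a locally running test instance; the URLs and form field names are invented for illustration:

```perl
#!/usr/bin/perl
# t/web/login_failure.t - sketch of the login-failure half of the feature
# (assumes Test::WWW::Mechanize against a local test server; the URLs and
# field names are hypothetical, not the real application's)
use Modern::Perl;
use Test::More;
use Test::WWW::Mechanize;

my $mech = Test::WWW::Mechanize->new;

$mech->get_ok( 'http://localhost:5000/login', 'login page should load' );

$mech->submit_form_ok(
    {
        with_fields => { email => 'user@example.com', password => 'not the password' },
    },
    'login with a bad password should not blow up'
);

# the failure page should link to the reset form with the email as a query param
$mech->content_contains(
    '/reset_password?email=user%40example.com',
    'failure page should link to password reset with the email address attached'
);

done_testing();
```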
When all of the relevant tests I can imagine pass, I spend a couple of moments looking over the new tests and code I've written. Did I get it right? Does it make sense? Is there duplication to remove? (I can't tell you how many times I've found two pieces of code similar enough that I've extracted the duplication. This includes my tests.) I may decide to wait to refactor, but I have that opportunity.
Then I repeat the process for the other half of the feature. The whole process takes less time than explaining how the process works.
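The test for the password reset side follows the same shape. Again this is a sketch, with the same invented URLs and field names as before:

```perl
#!/usr/bin/perl
# t/web/password_reset.t - sketch of the password-reset half of the feature
# (same assumptions as the previous sketch: hypothetical URLs and field names)
use Modern::Perl;
use Test::More;
use Test::WWW::Mechanize;

my $mech = Test::WWW::Mechanize->new;

# the shared contract: the email address arrives as a query parameter
$mech->get_ok(
    'http://localhost:5000/reset_password?email=user%40example.com',
    'reset page should load when given an email address'
);

# the reset form should prepopulate its input field from that parameter
$mech->content_like(
    qr/name="email"[^>]*value="user\@example\.com"/,
    'reset form should prepopulate the email field'
);

done_testing();
```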
That's the ideal, anyhow.
Yet when I work in bigger steps or when I write tests after the fact, my perception is that I spend more time debugging and I write code that's more difficult to test. Certainly I deploy more bugs than when I write tests in conjunction with the code under test.
Notice something about what I wrote, though; or rather, notice something I didn't mention. I said nothing about setting up an invasive dependency injection framework with everything abstracted out with lots of lovely mock objects and extreme isolation of components. That's because I don't believe in using invasive dependency injection frameworks to abstract everything out with ubiquitous mock objects and extreme isolation of components. That's a one-way ticket to Sillytown, where your code passes its tests but doesn't actually work.
I'm testing real code. Where it doesn't work (as was the case with a bug I fixed an hour ago), it's because of a difference in external configuration between our testing environment and our deployment environment. (The test for that passed in our system because it's the only place we use a mock object, and it's only a single test.)
Should you build lots of test scaffolding and use lots of cleverly named frameworks so that you can reach inside methods and make sure they call the right methods with the right parameters in the right order on the right mock objects something else has injected from elsewhere? Again, no.
I'm not sure what that has to do with TDD though.
In my experience, TDD works best for me when I:
- Use my code as realistically as possible
- Work in small steps
- Commit often
- Break tasks into smaller tasks when it's obvious I can test and implement them separately
- Take advantage of refactoring opportunities whenever possible, including in test code
- Think through the edge cases and error possibilities as I test
- Have an idea of the design from a high level ("I need to call or implement these methods, and I can reuse that code")
- Have an existing, albeit minimal, scaffolding of representative test data (see the sketch after this list)
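That last item can be as small as a single fixture loader. Here's a minimal sketch assuming a DBIx::Class schema with a User result source; the package name and columns are invented for illustration:

```perl
# t/lib/MyApp/TestFixtures.pm - minimal, representative test data
# (illustrative sketch; the schema, result source, and columns are hypothetical)
package MyApp::TestFixtures;

use Modern::Perl;

sub load {
    my $schema = shift;

    # one verified and one unverified user cover the obvious login edge cases
    $schema->resultset( 'User' )->populate([
        { email => 'alice@example.com', password => 'a known password', verified => 1 },
        { email => 'bob@example.com',   password => 'a known password', verified => 0 },
    ]);

    return;
}

1;
```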
If I've never used an API before, TDD might not be appropriate. If I'm never going to use the code again, TDD might not be appropriate. If it takes longer to write the tests than to write and run the code (nine lines of code in the final program? Probably not worth it), TDD might not be appropriate.
If, however, I think I'm going to regret not having designed the code and tests in parallel, I use TDD. It might not work for you. It works very well for me.
(Should you use TDD's test-fail-code-pass-refactor cycle? That depends. Do you have the discipline to write good tests? If not, perhaps you're better off not using TDD. You might get yourself into the kind of mess that's apparently burned Pete one too many times.)