I don't know where this idea started in the testing world (I suspect Java, which makes the easy things possible, and the hard things a mess of XML and bytecode generation to combine early static binding with the best type system 1969 and a PDP-7 had to offer), but if you find yourself wiring up a bunch of mock objects to test your system in isolation and patting yourself on the back for writing clever testing code, you'd better be lucky, because you're probably not testing your software well.
First, some philosophy.
Socrates: What's the purpose of testing your code?
Tester: To give us confidence that our software works as designed.
Socrates: How do you test?
Tester: Each piece in severe, bunny-suited, clean-room isolation.
Socrates: Why?
Tester: Because they must work in isolation.
Socrates: Does the system not work as a whole?
Tester: It does.
Socrates: How do you know?
Tester: Because we also test it as a whole.
Socrates: Does this give you confidence in the correctness of the system?
Tester: No.
Socrates: Why not?
Tester: Because we only have a few tests of the system as a whole.
Socrates: Why? Surely the correctness of behavior and coherence of the system as a whole is important to the system as a project, else why would you be building it?
Tester: But unit tests are the most important tests. Someone somewhere once said so, and I have this really neat framework which generates mocks and stubs if I define the interface in my IDE and wire up the XML output.
Socrates: Does this give you confidence in your system as a whole?
Tester: Well, it's a real pain sometimes keeping the interfaces of the mock objects and their behaviors up to date as we change the code, but that's what a refactoring IDE is for, right? Sure, sometimes we have to add tests of the system as a whole because we find bugs, but you can't test everything to 100% anyhow, can you? Besides, we're using best practices.
Socrates: How do you test database interactions?
Tester: We use mock objects to simulate a database.
Socrates: Why don't you use a real database?
Tester: They're hard to set up and slow and we don't want to spend time debugging things like connection issues in our tests!
Socrates: How do you know your database code works?
Tester: Our mock objects work.
Socrates: When you go out to lunch as a team, do you go to a real restaurant and order food, or do you sit around in a circle pretending to eat sandwiches?
Tester: pantomimes being trapped in a glass box
(Pun mostly not intended.)
Yeah, I wrote a mocking library for Perl, Test::MockObject, many, many years ago. Note well the short description:
Perl extension for emulating troublesome interfaces
I chose the word "troublesome" with care.
I almost never use this module, despite the fact that I wrote it. Sure, Perl and other late-bound languages with serendipitous polymorphism and allomorphic genericity make it easy to swap one thing in for the next if you can treat them as semantic equivalents. Yet in truth, mock objects are far, far overused.
In my experience, mock objects are most useful in very few circumstances:
- When you want to test an exceptional condition it's difficult or expensive to produce (system error, external dependency failure, database connection disappearance, backhoe cuts your network cable); there's a short sketch of this case just after the list.
- When one tiny piece of an existing piece of code has a side effect you cannot easily control (the actual SMTP-over-a-socket sending of email, the actual purging of all of your backups, the actual adding of butterscotch chips to what would otherwise be a perfectly fine cookie recipe).
- When you are utterly unable to control a source of information, such as data pulled from a remote web service (though you can design and test this with a layering strategy).
- That's it.
I emphasize for clarity that that list does not contain "I am talking to a database", or "I am rendering an image", or "Here is where the user selects an item from a menu".
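To make the first case concrete, here's a minimal sketch using Test::MockObject; the Notifier class and its retry-queue behavior are invented for this example, not taken from any real project:

    use Test::More;
    use Test::MockObject;

    # a tiny class under test, invented for this example
    package Notifier {
        sub new { my ($class, %args) = @_; bless { %args }, $class }

        sub notify {
            my ($self, $message) = @_;
            my $sent = eval { $self->{gateway}->deliver( $message ); 1 };
            return $sent ? 'sent' : 'queued for retry';
        }
    }

    # simulate the backhoe cutting the network cable
    my $gateway = Test::MockObject->new;
    $gateway->mock( deliver => sub { die "network unreachable\n" } );

    my $notifier = Notifier->new( gateway => $gateway );
    is $notifier->notify( 'hello' ), 'queued for retry',
        'delivery failure should fall back to the retry queue';

    done_testing;

The mock exists only to produce the failure on demand; everything else in the test is real code.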
It's still important to be able to test your software in layers, such that you have a lot of data-driven tests for your data model and business logic without having to navigate through your UI in your tests, but the fact that you can automatically generate mock objects from your interfaces (or write them by hand, if you're using a language which doesn't require Eclipse to scale programmer effort beyond tic-tac-toe applets) doesn't mean that you should.
For example, one of my projects requires email verification of user registration. I have automated tests for this. One of them is:
    my $url = test_mailer_override {
        my $args = shift;

        $fields{USER_invitation_code} = '12345';
        $ua->gsubmit_form( fields => \%fields );
        $ua->gcontent_lacks( 'Security answers do not match' );
        $ua->gcontent_lacks( 'This username is already taken' );
        $ua->gcontent_contains( 'Verify Your Account',
            '... successful add should redirect to verification page' );

        my ($mailer, %args) = @$args;
        is $args{to}[0], 'x@example.com', '... emailing user';
        is $args{to}[1], 'xyzzy', '... by name';

        my ($url) = $args{plaintext} =~ m!(http://\S+)!;
        like $url, qr!/users/verify\?!, '... with verification URL';

        $ua->gcontent_contains( 'User xyzzy created' );
        return $url;
    };
The function test_mailer_override() is very simple:
    sub test_mailer_override(&)
    {
        my $test = shift;
        my @mail_args;

        # replace send() for the duration of this dynamic scope only
        local *MyProj::Mailer::send;
        *MyProj::Mailer::send = sub { @mail_args = @_ };

        # pass the captured arguments to the test block
        $test->( \@mail_args );
    }
... where MyProj::Mailer is a subclass of Mail::Builder::Simple.
This code temporarily monkeypatches my mailer class to override the send() method to record its arguments rather than performing an actual SMTP connection to my server. Not only does this run faster than it would if I had to wait for SMTP delivery before continuing the tests, but it avoids the need to set up a local mail server on every machine where I might run the tests, to hardcode mailer credentials in the test suite, or even to have an active network connection to run the tests.
This makes the tests run quickly and gives me great confidence in the code. Furthermore, I know that on the real machines where I have this code deployed and running, the mail server and its configuration work, because I get real mail from them.
(I could as easily make my own subclass of my subclass which overrides send() this way and pass in an instance of that subclass through dependency injection. That would work well if I had to mock more than one method, but I haven't needed that yet and this localized monkeypatching was even easier to write and to maintain, so it serves me well enough for now.)
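(A rough sketch of that alternative, for completeness; the register_user() entry point and its mailer parameter are assumptions for illustration, not my project's real interface:)

    package MyProj::Mailer::Capture;
    use parent -norequire, 'MyProj::Mailer';

    # record send() arguments instead of speaking SMTP
    sub send {
        my $self = shift;
        push @{ $self->{sent} ||= [] }, [ @_ ];
    }

    sub sent { @{ shift->{sent} || [] } }

    package main;

    my $mailer = MyProj::Mailer::Capture->new;
    register_user( mailer => $mailer, %fields );    # dependency injection

    my ($call) = $mailer->sent;
    # ... assert against @$call much as the block above asserts against %args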
As for tests of database activity, I use DBICx::TestDatabase to create an in-memory test database for each test file in my suite. I have a batch of representative data carefully curated from real data to cover all of the important characteristics of my business model. I don't have to worry about any potential mismatch between data objects and mock objects and the real database because everything about my database is real except that it never actually exists on disk.
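The setup is roughly this simple; MyProj::Schema and the fixture row below are placeholders standing in for my real schema class and curated data:

    use Test::More;
    use DBICx::TestDatabase;

    # deploys the schema into a throwaway in-memory SQLite database
    my $schema = DBICx::TestDatabase->new( 'MyProj::Schema' );

    # load the curated, representative fixture data
    $schema->resultset( 'User' )->populate([
        { name => 'xyzzy', email => 'x@example.com' },
    ]);

    is $schema->resultset( 'User' )->count, 1, 'fixtures should load';
    done_testing;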
(If I had tests that cared whether the database exists on disk, and I can only imagine a few reasons why I might, I would have stricter tests run on machines intended for production. If that's a concern, I'll do it. It's not a concern.)
I do understand the desire for mock objects and loose coupling and beautiful architectures of free-floating components interacting briefly like snowflakes gently tumbling to a soft blanket on a Norman Rockwell Christmas landscape, but my code has to work correctly in the real world.
That means I have to have confidence that my systems hang together as working, coherent wholes.
That means that the details of what my tests do—their actions and their data—have a strong coupling to the behavior of my systems, because I have postulates and expectations to verify about those systems.
That means that I don't have time to duplicate behavior between real code and mock code, because I really only care if the real code works properly, and anything which distracts me from that, especially while debugging a failing test, is in the way.
That means that I will use mock objects sparingly, when I can't achieve my desired results in any cheaper, faster, or more effective way... but I won't mistake them for the real thing, because they're not.
I'm not sure what your point is. Maybe one or more of:
I don't see a big distinction between mock objects, monkey-patching, test databases, etc. They are all techniques to simulate parts of a system, usually externalities.
Testing itself is simulation. The art of testing is finding the right abstractions to simulate behavior to confirm correctness to the right level of confidence (where the definition of "right" in each case will be different for different code, developers, companies, whatever.)
Is a test database (a "mock database") any better or worse than a mock object that simulates a database connection? That depends. For me, it comes down to which is faster to implement; which is more likely to faithfully replicate behavior found in the real world; whether I'm trying to test requests or responses; and what assumptions are acceptable about the correctness of the database itself.
That logic extends to other parts of your argument: they all depend on the definition of "troublesome" for a particular test, and your definition of "troublesome" might differ from another's. Maybe your argument reduces to "only mock what is troublesome, and don't mock for the sake of mocking if that complexity is more troublesome than the trouble you're avoiding". I'd accept that, but still think the devil is in the details.
Your straw-man Socratic argument asks "why test in isolation?" and gives a facile answer. A few more-thoughtful reasons I can think of off the top of my head include:
tl;dr summary:
Excessive reliance on either unit testing or full-system testing is misguided. Mocking is one of many techniques to construct a test simulation. For any situation, pick a simulation technique to give the greatest confidence in correct behavior for the least amount of trouble. Actually think what those mean to you and don't get stuck following anyone else's dogma.
My point is that the use of mock objects is a code smell.
If every unit (or most units) need mock objects for you to test them to your satisfaction, your tests are likely fragile.
chromatic wrote: "If every unit (or most units) need mock objects for you to test them to your satisfaction, your tests are likely fragile."
Do you have empirical evidence for this?
Nothing rigorous. How would you collect that across the body of tested code? Most of it's not visible. How would you measure whether mocked code is more fragile than unmocked code such that you can compare fragility across projects?
I can only report what I've seen.
You didn't mention the primary (almost only?) reason I use mock objects: I'm testing some unit A that depends on another component B, and I want to test A without involving the real B. Could be B is a PITA to create, or I just don't want to risk changes (or breakage) in B screwing up my tests, or whatever. Even so, if changes (or breakage) in B could screw up my tests, maybe they should fail (if they'll screw up my tests, why wouldn't they screw up my code?). Basically, I use mock objects as a last resort, and I never mock the actual thing I'm testing.
So I think I mostly agree with you.
I know how this is going to sound (and I know it's not always possible), but in my recent projects I've tried to make it as easy as possible to create a real B and use that. I've been fortunate that these projects are relatively small and young.
With that said, every time I've started with mock objects in the tests (for the reason you suggest), I've regretted it and have refactored dependencies to be able to use real objects. Maybe my rule is best phrased as a goal of refactoring: as few mock objects in the tests as possible.
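(A sketch of the kind of refactoring I mean, with invented names; once the dependency is injectable and cheap to construct, the tests can use a real one instead of a mock:)

    use Test::More;

    # the unit under test: takes its dependency as a constructor argument
    package ReportPrinter {
        sub new {
            my ($class, %args) = @_;
            bless { formatter => $args{formatter} }, $class;
        }

        sub print_report {
            my ($self, $data) = @_;
            return $self->{formatter}->format( $data );
        }
    }

    # a real, cheaply constructed dependency; no mock required
    package PlainFormatter {
        sub new    { bless {}, shift }
        sub format { my (undef, $data) = @_; join ', ', @$data }
    }

    my $printer = ReportPrinter->new( formatter => PlainFormatter->new );
    is $printer->print_report( [ 1, 2, 3 ] ), '1, 2, 3',
        'report should use the injected formatter';

    done_testing;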