I like to build applications in layers. You've probably heard of MVC, which is popularly misunderstood to mean that you have business objects, a UI layer (even if it's web templates), and some logic to connect them.
Unit testing purists suggest that you need to test each of these layers in isolation. I find that silly; I prefer to test my applications in terms of the behavior they provide to various users. If an asynchronous process without a user interface manipulates business objects, I want to know that the necessary operations work as intended. Similarly, a web application that never tests whether the HTML forms provided to real users match the parameters processed by the controllers is a web application that's going to break sometime.
One drawback of this integration-style testing (as unit testing purists might call it) is that debugging a problem in one layer isn't always as easy as it would be if you had comprehensive tests for that layer in isolation. The tradeoff is almost always easy for me to make; I know that the behavior I want my applications to support is tested in terms of every layer. Furthermore, I don't have to pay the cost of writing and maintaining unit tests to enforce this isolation.
Instead, I occasionally pay the cost of debugging.
For example, one codebase has a database-backed persistence mechanism provided by some Moose metaprogramming. It's a little bit clever, but it has a very nice interface and it's simplified most of the rest of the code around it dramatically. The persistence mechanism goes through SQL::Abstract::Complete and eventually the DBI.
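Stripped of the Moose layer, the shape of that path looks roughly like this. The table, columns, and connection details are invented for illustration, and I'm assuming the SQL::Abstract-style select interface:

use SQL::Abstract::Complete;
use DBI;

my $sqla = SQL::Abstract::Complete->new;

# build the statement and its bind values
my ($sql, @bind) = $sqla->select( 'users', [qw( id name )], { active => 1 } );

# hand them to the DBI (any driver works; SQLite shown for illustration)
my $dbh = DBI->connect( 'dbi:SQLite:dbname=app.db', '', '', { RaiseError => 1 } );
my $sth = $dbh->prepare( $sql );
$sth->execute( @bind );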
My task list includes the ominous task "Write a better error handling mechanism", but I haven't made it there yet.
When a query fails, we don't yet get good information about what fails. (Fortunately, the part of the code which tends to exhibit test failures when we change something has been the NoSQL component, for various definitions of the word "fortunately".)
In cases like this, I do what most programmers probably do. I resort to print-style debugging.
Most of the persistence calls end up going through a method called _make_sth, which does exactly what it sounds like it does:
sub _make_sth {
    my ($self, $sql, @vars) = @_;

    # prepare the statement and execute it with the given bind values
    my $sth = $self->prepare( $sql );
    $sth->execute( @vars );

    return $sth;
}
I usually end up adding a line or two of debugging code:
sub _make_sth {
    my ($self, $sql, @vars) = @_;
    my $sth = $self->prepare( $sql );
    $sth->execute( @vars );

    # dump the statement and its bind values to the test output on demand
    ::diag( "<$sql> [@vars]" ) if $ENV{DD_DB};

    return $sth;
}
... which, as you can see, gets toggled by the truthiness of the environment variable DD_DB. (The ::diag is Test::More's diag reached through the main namespace; it resolves because the test script exercising this code has already loaded Test::More.)
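With a hypothetical query, that line shows the statement and its bind values together in the test output:

# <SELECT id, name FROM users WHERE active = ?> [1]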
In my tests, I simply local $ENV{DD_DB} = 1; just before the failing test case and examine the output.
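Here's a self-contained sketch of that dynamic scoping; the query sub is a hypothetical stand-in for the persistence layer:

use Test::More;

sub query {
    my ($sql, @vars) = @_;
    ::diag( "<$sql> [@vars]" ) if $ENV{DD_DB};
    # ... the real code would prepare and execute here ...
}

query( 'SELECT COUNT(*) FROM users' );                      # silent

{
    local $ENV{DD_DB} = 1;                                  # scoped to this block
    query( 'SELECT id FROM users WHERE active = ?', 1 );    # emits diagnostics
}

query( 'SELECT COUNT(*) FROM sessions' );                   # silent again

pass 'demo complete';
done_testing;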
(Yes, I should run the debugger and figure out an easy way to toggle it to break on the given line, but I'm insufficiently lazy for that. Yet.)
The nice part about using a dynamic environment variable—where the value of the variable is scoped to the enclosing block—is that I can run an entire test file and get only debugging output for the code I want. Yes, it's a global variable. Yes, it's lazy. Yes, it's bad in the sense that all print-style debugging is bad, but it does the trick far too often for me to give it up without an amazing replacement.
"Yes, I should run the debugger and figure out an easy way to toggle it to break on the given line, but I'm insufficiently lazy for that. Yet."

I find that Enbugger works rather well for that type of debugging.
Couldn't you just change your diag line to drop into the debugger instead, then still do your local $ENV{DD_DB} trick? I usually make a variable in the $DB namespace for this. I'm not sure local will be very different, but it's an interesting idea.
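A guess at the shape of that suggestion (my reconstruction, not the commenter's actual code): the diag line becomes a gated breakpoint, via Enbugger's stop method or the conventional $DB::single flag:

# my guess at the suggestion, not the commenter's actual code
if ( $ENV{DD_DB} ) {
    require Enbugger;   # loads the debugger at runtime if necessary
    Enbugger->stop;     # break right here
}

# or, when already running under perl -d, the conventional breakpoint flag
$DB::single = 1 if $ENV{DD_DB};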
Looks like you might have cut and pasted your code example incorrectly -- the tipoff was that the $sql parameter wasn't used in the _make_sth method. The $sql is what should be passed to DBI's prepare method, not $sth.