One of the pleasures of white-collar work is that you can earn money even while you're not on the job. Unlike a factory worker paid for each widget she assembles, a programmer or publisher or writer or business owner can sell widgets even when sleeping, or eating, or on vacation, or walking the dog.
Thank goodness for automation: the cotton gin, the industrial revolution, the semiconductor, and the information economy.
The downside, of course, is that you have to trust these automations—and they're built by humans and programmed by programmers. (Worse yet, sometimes we are those programmers.) Things go wrong.
Our job, then, is to find and fix these bugs before the cost of the lost automation outweighs the value of the automation itself.
Too bad we're lazy.
I built my own stock screener. It fetches financial data from public APIs and public data sources so that friends and family can make better investment decisions. I don't want them to have to fill out complex spreadsheets, and I don't want to have to type the right incantations to generate those spreadsheets for them. That means they get a little box on a web page, type in a company's ticker symbol, and get an analysis later.
... unless they mistype the symbol, or the company has been acquired, or it's moved between exchanges, or one of a dozen other things has gone wrong. For example, one API happily understands that Berkshire Hathaway has two classes of stock, trading under BRK.A and BRK.B, while another API goes into fits when it encounters a period in a stock symbol, so you have to call BRK.A "BRK-A" instead.
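That kind of quirk tends to end up in a small per-API normalization step. Here's a minimal sketch of the idea in Python; the provider name and the rule that it alone rejects periods are assumptions for illustration, not a description of any real API:

    # Sketch of per-API symbol normalization. "dash_api" is a hypothetical
    # provider name, standing in for whichever API rejects periods.
    def normalize_symbol(symbol: str, api: str) -> str:
        """Translate a canonical symbol (e.g. 'BRK.A') into the form a given API expects."""
        if api == "dash_api":                  # hypothetical provider that chokes on periods
            return symbol.replace(".", "-")    # BRK.A -> BRK-A
        return symbol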
These are the cases where mock objects are no substitute for domain expertise.
The interesting cases are where you wake up on a Monday morning to find that something went wrong with several points of data over a long weekend and you're not sure what.
Sure, you could fire up your debugger, add in one of the offending symbols, and trace the process of API calls and analysis from start to finish, restarting the process as you try hypotheses until you reach an understanding, or at least have to get back to other work. I've done that. You can go a long way with that as your debugging technique.
Lately I've begun to suspect that best-practice patterns exist for batch processing projects like this. I've already realized that a multi-stage analysis pipeline (first verify that the stock symbol exists; then get basic information like exchange, name, outstanding share count, sector, and industry; then analyze financial information like debt ratios, free cash flow, cash yield, and return on invested capital; then compare the current share price to the projected and discounted share price) is, effectively, a state machine. By tracking the state of each stock in that state machine, you get two things. First, idempotent behavior: you'll never double-process a state, but you can restart a stock at any stage of the process by changing its state. Second, the ability to identify errors and bail out at any stage of the process: if a stock symbol doesn't trade on an associated exchange, you're not going to get any good information out of it, so skip the CPU-expensive forward free cash flow projections, because they're useless; if a company's free cash flow trends negative, skip the financial projections too, because you don't want to buy a company losing money and the numbers get asymptotically weird as you cross that zero line anyhow.
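To make that concrete, here's a minimal Python sketch of the pipeline-as-state-machine idea. The post doesn't show the screener's actual code, so the stage names, stage functions, and the assumption that the stock table has a symbol column are all illustrative; only the states ANALYZED, ERROR_UPDATE_BASIC, and ERROR_DAILY_RATIOS come from the query output later in the post.

    # Sketch only: assumes a table "stock" with "symbol" and "state" columns,
    # and a db object that behaves like a sqlite3.Connection. The NEW, VERIFIED,
    # BASIC, RATIOS, ERROR_VERIFY, and ERROR_PROJECTION states are invented.
    PIPELINE = [
        # (current state, stage name, state on success, state on error)
        ("NEW",      "verify_symbol", "VERIFIED", "ERROR_VERIFY"),
        ("VERIFIED", "update_basic",  "BASIC",    "ERROR_UPDATE_BASIC"),
        ("BASIC",    "daily_ratios",  "RATIOS",   "ERROR_DAILY_RATIOS"),
        ("RATIOS",   "project_price", "ANALYZED", "ERROR_PROJECTION"),
    ]

    def run_stage(db, symbol, stage_fn, success_state, error_state):
        """Run one stage for one stock, then record where it ended up."""
        try:
            stage_fn(symbol)                 # the actual fetching/analysis work
            new_state = success_state
        except Exception:
            new_state = error_state          # bail out here; later stages never run
        db.execute("update stock set state = ? where symbol = ?", (new_state, symbol))
        db.commit()

    def run_pipeline(db, stages):
        """Process each stock at most once per state, so re-running is idempotent."""
        for current, stage_name, success_state, error_state in PIPELINE:
            rows = db.execute("select symbol from stock where state = ?", (current,)).fetchall()
            for (symbol,) in rows:
                run_stage(db, symbol, stages[stage_name], success_state, error_state)

Restarting a stock at any stage is then a one-row update to its state column, and the next run of the pipeline picks it up from there.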
The real insight is that you should log the relevant information when an item in the pipeline hits an error condition. Sure, you can sometimes run into transient problems like a backhoe cutting fiber between you and your API provider such that your cron job won't get relevant data for a few runs, but you also run into the problem that what you expected to happen just didn't happen. That is, while you've been certain that the fifth element of the JSON array you get back from the request always contains the piece of information you expected, it never contains the information you want for companies in the finance sector, so you can't perform that analysis with that data source.
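A sketch of what that logging can look like: check the "fifth element of the JSON array" assumption at the exact point the code relies on it, and dump the raw payload when the check fails. The field name, directory name, and file naming below are my assumptions, not the screener's actual layout:

    import datetime
    import json
    import pathlib

    def dump_payload(dump_dir: str, symbol: str, payload) -> None:
        """Save the raw API payload so Monday-morning debugging starts from real data."""
        directory = pathlib.Path(dump_dir)
        directory.mkdir(exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
        (directory / f"{symbol}-{stamp}.json").write_text(json.dumps(payload, indent=2))

    def fifth_element_cash_flow(symbol: str, payload: list) -> float:
        """Read the value we *expect* in the fifth element; dump and re-raise when that fails."""
        try:
            return float(payload[4]["freeCashFlow"])      # "freeCashFlow" is a made-up field name
        except (IndexError, KeyError, TypeError, ValueError):
            dump_payload("daily_ratio_errors", symbol, payload)   # directory name is an assumption
            raise   # let the pipeline park this stock in an ERROR_* state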
The real questions of automation are "What do you do when things go wrong?" and "How much information do you need to recover and to prevent these errors from occurring again?"
Maybe that means dumping out the entire API response. Maybe it means emitting the raw SQL query that caused a transaction failure. In my case it certainly means separating the process of fetching data from the process of processing data, so that I can load example data into my system without having to go through the fetching stage over and over again. (This experience of decoupling makes me prefer fixtures to mock objects.)
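Decoupling can be as simple as a fetch step that writes each raw response to disk untouched and an analysis step that reads only from disk. A sketch under those assumptions; the directory name, the function names, and the injected get_response callable are mine, not the screener's:

    import json
    import pathlib

    RAW_DIR = pathlib.Path("raw_responses")   # assumption: one cached response per symbol

    def fetch(symbol: str, get_response) -> pathlib.Path:
        """Fetch stage: call the API (injected as get_response) and save the raw JSON untouched."""
        RAW_DIR.mkdir(exist_ok=True)
        path = RAW_DIR / f"{symbol}.json"
        path.write_text(json.dumps(get_response(symbol)))
        return path

    def analyze(symbol: str) -> dict:
        """Analysis stage: read only the saved file, so a debugging run never re-fetches."""
        payload = json.loads((RAW_DIR / f"{symbol}.json").read_text())
        # ... run the actual ratio and projection analysis against payload here ...
        return payload

A hand-edited copy dropped into that directory behaves exactly like a fixture; the analysis stage can't tell the difference.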
In my experience, it means that I can run a simple query:
> select count(*), state from stock group by state;
2413|ANALYZED
10|ERROR_DAILY_RATIOS
100|ERROR_UPDATE_BASIC
... to see that I have a few things to debug today, with fifteen dumps of API data in my basic_update_errors/ directory to review, when I get a chance, to figure out where my expectations (or the API documentation) have gone wrong.
In a sense, I've automated away half of the debugging process already: I have example data and I know where things go wrong. I know exactly where to look for the errors of assumptions. I don't know what those assumptions are, nor why they're wrong, but a computer probably couldn't tell me that anyway. Even getting halfway through this process means I'm twice as productive—I can focus my energy and concentration on the hard parts, not the tedious ones.
Well, that's what you get for knowing what you're doing. I'm still more comfortable with Bash and Vim than with Perl, so my "solution" would be a shell script calling a jumble of wget's, tr's, sed one-liners, and Vim :bufdo's having their various ways with a few hundred download files, intermediate files, and output files with suffixes like '.html', '.folded', and '.rstripped'. Like that time I scraped the librarything.com site for the ISBNs of my own books so I could let my computer find out what they were selling for used at Amazon....
Anyway, the one thing I can say for my way (besides that I know how to do it) is that it exposes the guts of the process in a way that makes it easy to know where to look if something goes wrong. And, as you touch on, I can fire it back up again at any point in the process if it gets interrupted. Also, nobody can accuse me of premature optimization, and the jerry-rigged look of the thing highlights rather than obscures the fact that I'm using "APIs" that are really no such thing; the hack looks like a hack.
I guess that's actually more than one thing I can say for it.
There's remembering where to look to find out what happened, and there's designing it so you don't have to remember. My experience in programming has been learning the hard way to do the latter.