(or Always a Release Candidate, Never a Release)
In "How to Ruin Your Ability to Release Software" I described the release process a colleague's workplace uses. That discussion started with a complaint about a subject near and dear to my heart: release candidate cycles.
The Semantic Dance
A release candidate is a build of the software that everyone hopes has no bugs users will care about. In truth, everyone knows it has bugs, so it's actually a release made to prove that you intend to release software sometime. If no one reports any bugs in it, you can tell them it's their fault that, when you finally get around to releasing software, the really-released-this-time-no-foolin' release has bugs, even if you knew about them before you released the release candidate.
My colleague's project has one very simple metric for when it can produce a release candidate: the project has no bugs labeled "unhandled" in the bug tracking system. In other words, the difference between a build of the software suitable for end users and one that isn't is whether someone has changed the label on every new bug in the bug tracking system. (While you can use the number of open bugs as a rough indicator of the quality of the software, and the rate of change in that number as a derivative of its quality, keeping popularity and availability in mind, I'm not sure the text of particular UI widgets in the bug tracking system has even the whiff of a usable metric. Talk about data-driven programming!)
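To make the gate concrete, here's a minimal sketch of it in Perl. The bug data, the "unhandled" label, and the "triaged" relabeling are stand-ins of my own invention, not the project's actual tracker or workflow.

    use strict;
    use warnings;

    # Hypothetical bug data, standing in for whatever the real tracker returns.
    my @bugs = (
        { id => 101, label => 'unhandled' },
        { id => 102, label => 'wontfix'   },
    );

    # The entire release-candidate gate: it asks nothing about whether the
    # software works, only about what the labels happen to say.
    sub ready_for_rc { return !grep { $_->{label} eq 'unhandled' } @_ }

    print ready_for_rc(@bugs) ? "cut the release candidate\n" : "not yet\n";

    # Relabel every new bug and the gate opens, without changing a single
    # line of the software itself.
    $_->{label} = 'triaged' for grep { $_->{label} eq 'unhandled' } @bugs;

    print ready_for_rc(@bugs) ? "cut the release candidate\n" : "not yet\n";

Nothing in that check touches the code, the tests, or the users; it measures only the diligence of whoever relabels bugs.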
Of course, that doesn't work. If anyone actually tries the release candidate (they're universally buggy, so why would you?), people report bugs. All the project wanted to do was finally release a new version of the software, and now it's time to stop doing everything else and fix all of the bugs without making any other changes to the system.
If you're very unlucky, or if your project manager is actively hostile or didn't understand the sentence in Dr. Royce's waterfall paper that says "This does not work", this is the first time a separate QA group has seen the software.
Preventing Change at All Costs
The goal of a release candidate is to be completely boring. Nothing should happen. Users and testers should discover no bugs. Nothing should go wrong. If anything, you should lose weight, look ten years younger, and drive a nicer car.
The point of having release candidates at all is so that the final, eventual, we-really-did-it release will be that boring. Rumor has it that China will hand-deliver a lovely fruit basket to the Dalai Lama if this ever happens.
Projects with release candidates often create a branch in their source code repositories to represent a stable point for development. For "stable" read "nothing can ever change, unless it's to fix a bug, and then it's only the most minor change possible." In other words, stability (does the software build, does it meet customer needs, does it work?) is less a goal for day-to-day development than it is for the once-in-a-blue-moon crunch time when someone realizes that software that you never release to customers is worthless.
The reason for creating a stability branch is so that developers can continue to develop software without worrying about those pesky concerns... yet. There's always time later to fix things. You can see this attitude in bug triaging and bug fixing: "This bug isn't very important; we can downgrade its severity." "This bug is too hard to fix the right way; let's just hack around it."
Of course, developers have to wait to hear back from QA -- and in organizations with a strong barrier between dedicated QA and developers, you won't see developers looking for their own bugs. Instead, they'll go off and build the next big wad of code to cram into a pending release right before the next release candidate branch gets cut.
Imagine the tangle of merging from branch to branch. Imagine the work involved in unraveling minimally intrusive hacks and fixing bugs the right way. Imagine the arguments between a developer who wants to run off and write new code and hates to hear that code he wrote six to eighteen months ago could never have worked, and a QA person who knows he'll lose that argument and then get chewed out for the shoddy quality of the release.
At just the time when a project's quality needs to increase, the management structure of the project acts to depress its quality by hiding bugs, splitting development efforts, and actively preventing feedback on efficacy and suitability.
My favorite bad example comes from Perl 5, where all development takes place on a branch called bleadperl. Another branch holds the code that will become Perl 5.10.1, and likewise Perl 5.8.10 or Perl 5.6.3 or whatever (Perl 5.8.10 is unlikely, but this development antipattern held for the 5.8.x series). The person in charge of releasing a new stable version of Perl spends a day or so every week merging changes from bleadperl to the maintenance branch, maintperl. A significant percentage of the work invested in a new stable release of Perl is manually merging, from one branch to another, patches someone has already committed once.
If this person falls behind or goes on vacation, the differences between bleadperl and maintperl increase, and the time to produce a new stable version of Perl increases.
(Now imagine if all development took place on topic branches merged to the trunk only when they were stable; suddenly there's an extra day a week available for Perl 5 development.)
Backwards Day
If you'd never seen software development before, you might think that the normal rules of life do not apply here. Unfortunately, they do -- it's the results of this software process that go wrong.
If you don't know if your software works, why are you releasing it?
If you're not sure if your software meets customer needs, why are you releasing it?
If your developers can't keep the software stable, why should you believe that they'll stabilize it later?
If you play games in your bug tracker, why should anyone trust it?
If you can only guess if a release is worth using, why would anyone use it?
Why are so few people asking these questions about their projects?