Jonathan Swartz makes a polemical statement:
cpanm and perlbrew should not run tests by default.
His points are reasonable, but his complaints are mostly about side effects and not the real problem. (I should clarify: the real problem I encounter.)
If running tests slows down installs, speed up the tests. (Do you want to get the wrong answer faster? Easy: it's 42. No need for a quantum computer to do the calculation in constant time. This algorithm is O(0).)
If running tests exposes the fragility of the dependency chain, improve the dependency chain.
If dependency test failures prevent the installation of downstream clients... this is a weakness of the CPAN toolchain. A well-written test suite for a downstream client should reveal whether bugs or other sources of test failures in a dependency affect the correctness of the client.
Note the assumptions in that sentence.
Anyone who's experienced the flash of enlightenment that comes from working with well-tested code and who's shared that new zeal with co-workers has undoubtedly heard the hoary old truism that testing cannot prove the complete absence of bugs. It's no less true for its age, though it's also true that good testing only improves our confidence in the correctness and efficacy of our code.
For me, a 95% certainty that my code works and continues to work for the things I've tested is more than sufficient. I focus on testing the things I'm most likely to get wrong and the things which need to keep working correctly. (I don't care much about pixel-perfect placement, but I do care that a book's index uses the right escapes for its data and markup.)
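To make that concrete, here's a minimal sketch of the kind of test I mean; entry_markup() here is a hypothetical stand-in for the real index-building code:

    # index-escapes.t: the behavior I care about keeping correct
    use strict;
    use warnings;
    use Test::More tests => 2;

    # hypothetical stand-in for the real index-building code
    sub entry_markup {
        my $text = shift;
        $text =~ s/&/&amp;/g;
        $text =~ s/</&lt;/g;
        $text =~ s/>/&gt;/g;
        return $text;
    }

    # the data and markup escaping must keep working; layout can drift
    is( entry_markup('Fish & Chips'), 'Fish &amp; Chips',
        'index entries escape ampersands' );
    is( entry_markup('<$self>'), '&lt;$self&gt;',
        'index entries escape angle brackets' );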
Without tests running on the actual machines and in the actual environments where I expect my code to run, I don't have that confidence.
Put another way, I'm either not smart enough or far too lazy to want to attempt to debug code without good tests. That's why I write tests, and that's why I run them obsessively. That's good for me as a developer, and you're getting the unvarnished developer perspective.
I also care about the perspective of mere users. (Without users, we're amusing ourselves, and I can think of better ways to amuse myself than by writing software no one uses.)
Yes, an excellent test suite can help a user help a developer debug a problem. Many (most?) CPAN authors have had the wonderful experience of receiving a bug report with a failing test case. Sometimes this even includes a code patch.
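Such a report might look something like this sketch, where the ticket number and format_date() are hypothetical stand-ins for whatever the real report covers:

    # rt-12345.t: a minimal failing test case attached to a bug report
    use strict;
    use warnings;
    use Test::More tests => 1;

    # hypothetical stand-in for the buggy code under report; a real report
    # would load and exercise the installed module instead
    sub format_date {
        my ($y, $m, $d) = split /-/, shift;
        return "$d-$m-$y";    # the bug: not the documented long form
    }

    # the reporter's expectation, written so the maintainer can run it,
    # fix the bug, and keep the test as a regression check
    is( format_date('2011-01-31'), '31 January 2011',
        'formats ISO dates into long form' );

A failing test like this tells the maintainer exactly what the reporter expected and exactly what happened instead.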
Not all users are developers of that sort, nor should they be.
The CPAN ecosystem has improved greatly at automated testing and dependency tracking, but we can improve further. What if we could identify the severity of test failures? (We have TODO and SKIP, but they don't convey semantic meaning.) What if we could identify buggy or fragile tests? (My current favorite is XML::Feed tests versus DateTime::Format::Atom because it catches me far too often, it doesn't affect the operation of the code, and it's a stupid fix that's lingered for a few months.) What if the failures are transient (Mechanize relying on your ISP not ruining DNS lookups for you) or specific to your environment (a test suite written without parallelism in mind)?
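For concreteness, here's roughly what TODO and SKIP look like in a Test::More-based suite; parse_entry_date() and fetch_feed() are stubs standing in for real code. Note that the only place to record why a test doesn't count is a free-form string, which nothing downstream can interpret:

    use strict;
    use warnings;
    use Test::More;

    # stubs standing in for real code under test
    sub parse_entry_date { return shift }    # pretend the bug is still present
    sub fetch_feed       { return 1 }

    our $TODO;    # Test::Builder looks for this package variable

    TODO: {
        # a known bug, marked so it doesn't fail the suite; the reason
        # is just prose, with no severity or category attached
        local $TODO = 'waiting on an upstream date-parsing fix';
        is( parse_entry_date('2011-02-30'), undef, 'rejects impossible dates' );
    }

    SKIP: {
        # a transient or environment-specific concern; again, just prose
        skip 'network tests disabled', 1 unless $ENV{NETWORK_TESTS};
        ok( fetch_feed('http://example.com/atom.xml'), 'fetches a remote feed' );
    }

    done_testing();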
As Jonathan rightly implies, how do you expect an end-user to understand or care about or debug those things?
I'm still reluctant to agree that disabling tests for end-user installations is the right solution. I want to know about failures in the wild, wider world. I want that confidence, and I can't bring myself to trade it away for the sake of a little more installation speed.
Yet his point about lingering fragility in the ecosystem is true and important, even if the proposed solution of skipping tests isn't right. Fortunately, improving dependency management, tracking, use, and testing can help solve both issues: perhaps to the point where we can run only the tests users most care about and identify and report material failures in dependencies.