Stop Preventing the Future!


One of my goals with Modern Perl is to improve the entire Perl ecosystem for both Perl 5 and Perl 6 such that everyone can take advantage of all of the wonderful improvements already provided and yet to come. First, we have to convince people that that's possible.

In Sacrificing the Future on the Past's Golden Altar I mentioned that Perl 5's deprecation policy has harmed Perl 5 over the past decade, if not longer. Several people asked me for a better alternative.

It's no coincidence that I've worked on Parrot for the past several years. At the most recent Parrot Developer Summit last December, we discussed our support policy for Parrot as we near the Parrot 1.0 release. I've just finished writing the initial version of Parrot's release, support, and deprecation policies. (I apologize that it's in raw POD form; we'll add it to the website soon.)

I don't want to get into too many details about deprecation and support, nor how aggressive the Parrot schedule is for the foreseeable future, but I do want to explain some of the reasoning. It's important for all projects, not just large and, we hope, successful community-developed projects.

I believe strongly that the best way to invent the future is to iterate on a theme. That's part of the reason I write these posts -- I'm trying out new ideas on a growing audience of smart, dedicated, and committed readers who rarely hesitate to challenge my underthought assumptions or ask for clarity when I've been obtuse. The same principle goes for software.

If you know exactly how to solve a problem before you've written any code, there's little value in solving it yourself. Reuse existing code, then spend your time and resources on something that matters more.

If you don't know exactly how to solve a problem, you're unlikely to find the best solution on your first attempt. That may be fine. Your first attempt may be good enough. If so, great!

In many cases, the trouble starts when the first attempt isn't perfect and needs further work. We call this debugging. Usually it's also a design problem.

Two complementary schools of thought address this problem from different approaches. The agile movement suggests that working in very small steps and solving small pieces of larger problems in isolation helps you avoid thrashing and rework and all of the organizational problems you have when you're trying to solve very large and very complex problems. The refactoring school suggests that very focused and reversible changes to the organization of code and entities within the code make it easier to write good code in the future.

It's possible to have one without the other, but they build on each other.

The allure of both approaches is that they promise to free you from the golden chains of "I Must Get This Completely Right The First Time." You don't have to. You do have a minimum standard of quality and efficacy, and it's important to meet those goals, but these practices make change less risky and even cheap. I didn't say that practicing either one is easy or simple, just that I know of no better way to reduce the risk of mistakes. If mistakes are small and easy to detect and easy to fix, you don't have to worry about making them.

Of course, this only matters if you're going to change your software in the future. If you write a program and run it and you don't need it in ten minutes, none of this matters. If you write a program and install it on a machine and it can run for the next year or ten years untouched, none of this matters. The cost of change is irrelevant for software that never changes.

Most of us rarely have the luxury of writing software that never changes.

Perhaps there's a common illusion that people who write software for other coders to reuse in their projects -- whether languages, libraries, platforms, or tools -- should meet a standard higher than most other projects. To some degree it's true. Many projects which get widely used attract better developers and development strategies. Many don't.

Yet I don't believe there is a general solution to the problem that we don't get code and design right on our first try. We make mistakes designing languages and libraries. We make mistakes implementing platforms and tools. Sometimes the best we can do to make things more right is to make an incompatible change. As long as our code gets easier to use and maintain over time, I can live with that.

The question isn't "Should a project make backwards-incompatible changes?" (The question very much isn't "Should a project do so gratuitously?", so if you want to argue that point, please do so elsewhere.) The real question is "How do you make incompatible changes, when necessary, without hurting your users?"

I'll discuss some ideas next time.


Just one more idea for the list: sometimes it's possible to avoid the problem of incompatible changes by insulating the changing parts behind an interface. That is, with some up-front design you can enlarge the field of "compatible" changes and let the software evolve. I don't claim this solves the problem, but I think it could be applied more widely in the Perl world. One reason it isn't is that Perl's dynamic typing is already quite good at this insulation game: duck typing means that many changes can remain compatible which would not be in a language with static types.

Sometimes a little pain is good for the body.
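The insulation idea in the comment above can be sketched with a small example. This is a minimal illustration, not code from the post; the class and method names (EmailAlert, LogAlert, notify) are invented. The client code depends only on the notify() interface, so either class can change its internals, or be replaced entirely, without breaking callers:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Two unrelated classes that happen to provide the same notify()
# method. Neither inherits from a shared base class.
package EmailAlert;
sub new    { bless {}, shift }
sub notify { my ($self, $msg) = @_; return "email: $msg" }

package LogAlert;
sub new    { bless {}, shift }
sub notify { my ($self, $msg) = @_; return "log: $msg" }

package main;

# Client code only requires that its argument can notify();
# duck typing makes any such object acceptable.
sub send_alert {
    my ($handler, $msg) = @_;
    return $handler->notify($msg);
}

print send_alert( EmailAlert->new, 'disk full' ), "\n";
print send_alert( LogAlert->new,   'disk full' ), "\n";
```

Swapping EmailAlert for LogAlert, or rewriting either class's internals, is a "compatible" change from the caller's point of view, which is the kind of evolution the comment describes.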


About this Entry

This page contains a single entry by chromatic published on February 6, 2009 3:58 PM.
