Perhaps the most useful distinction between programmers who end up as architecture astronauts and working programmers who continue to build useful software is a streak of pragmatism. Maintainability, for example, is a primary concern when maintaining software, but it's not the only reason someone might invest in developing software.
This lesson is more subtle than it seems. You can see it in the decade-plus argument over extreme programming and agile software development, where an idealist asks "Why would we do something like fill in the blank?", a pragmatist responds "You don't have to do it, but we found that it worked pretty well for us," and the idealist hears that there's no grand unifying structure by which you can follow a checklist and get great software.
I like that paradox. I like contradictions, at least until I have to explain to a client that, while adding a feature is important, adding it correctly enough is also important. I've been burned a few times by trying to beat deadlines by skimping on quality of one form or another, and I've been burned a few times by spending too much time on things that just don't really matter for the sake of some arbitrary ideal.
I try to analyze technical decisions in terms of desired value compared to perceived cost. For example, it would be technically correct and useful to have a program that's mathematically provable to terminate and to give the correct answer for every class of input, but it's practically infeasible to do so for all but the most trivial programs (and proving the trivial ones offers little practical value).
Similarly, I've removed entire test cases from test suites which verified trivial, uninteresting details about the test files (usually metadata), because they were expensive to run and to maintain and actually hindered us from fixing the real problem in a better place.
Tests can be expensive to write and expensive to maintain. Poorly written tests can be fragile and misleading. Tests with hidden dependencies or assumptions or intermittent failures can cost you a lot of debugging time and anguish.
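As a hypothetical illustration of that fragility, here's a sketch in Python (the function and values are invented for the example) of a test with a hidden dependency on the wall clock, alongside a version that makes the dependency explicit:

```python
import unittest
from datetime import date


def report_header(today=None):
    """Build a report header; 'today' defaults to the current date."""
    today = today or date.today()
    return f"Report for {today.isoformat()}"


class ReportHeaderTest(unittest.TestCase):
    # Hidden dependency: this assertion bakes in the date the test was written,
    # so it starts failing intermittently (in fact, permanently) later on.
    def test_header_fragile(self):
        self.assertEqual(report_header(), "Report for 2024-01-15")

    # Better: inject the date so the test states its assumption explicitly.
    def test_header_explicit(self):
        self.assertEqual(report_header(date(2024, 1, 15)), "Report for 2024-01-15")


if __name__ == "__main__":
    unittest.main()
```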
That's the potential risk. The potential reward is that you get more confidence that your software behaves as you intend and that it will continue to do so, as long as you pay the testing tax.
Good tests hate ambiguity. (Good code hates ambiguity.)
For example, a bad test which exercises passing invalid data to a web action might use a regex to parse HTML for the absence of a success message. A better test might use CSS or DOM selectors to verify the presence or absence of a single indicator that the request succeeded or failed.
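To make that contrast concrete, here's a minimal sketch, assuming a hypothetical response body and using BeautifulSoup for the DOM-selector version (any selector-capable parser would do):

```python
import re

from bs4 import BeautifulSoup

# Hypothetical response body from submitting invalid data to a web action.
html = """
<div class="flash flash-error">Email address is required.</div>
<form action="/signup" method="post">...</form>
"""

# Fragile: scrape the HTML with a regex for the *absence* of a success message.
# This passes if the page is blank, if the action errors out entirely, or if
# the success wording ever changes -- it proves very little.
assert not re.search(r"Thanks for signing up", html)

# Better: use a CSS/DOM selector to assert the presence of a single,
# specific failure indicator.
soup = BeautifulSoup(html, "html.parser")
error = soup.select_one(".flash-error")
assert error is not None
assert "required" in error.get_text()
```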
To me, that specificity is the most important thing. It's not "How few tests can I write to get 100% coverage?", because writing as few tests as possible isn't my goal. Nor is my goal "all tests should look the same" or "how quickly can I get through these tests?". My goal is to write the minimum test code necessary both to prove that the feature I'm testing works the way I intend and to allow me to debug any failures now or in the future.
There's that mix of pragmatism and perfection again. I want to avoid false positives (a test that passes without actually exercising the behavior I care about), so I'm confident that the test tests the specific behavior I want it to test, but I also want to avoid false negatives (a test that fails for reasons that don't matter), so that meaningless changes, such as the order of the CSS classes applied to a <div> element in the HTML changing, don't cause test failures that I don't care about.
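Here's one way that robustness might look in practice, again as a sketch with invented markup and using BeautifulSoup; the point is to assert the question the test actually cares about rather than the exact shape of the markup:

```python
from bs4 import BeautifulSoup

# Two renderings of the same element; only the class order differs.
before = '<div class="alert alert-success">Saved.</div>'
after = '<div class="alert-success alert">Saved.</div>'

for html in (before, after):
    soup = BeautifulSoup(html, "html.parser")

    # Brittle: comparing the raw class attribute breaks when the order changes,
    # even though nothing meaningful about the page has changed.
    # assert soup.div["class"] == ["alert", "alert-success"]

    # Robust: ask only whether the success indicator is present.
    assert soup.select_one("div.alert-success") is not None
```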
Good tests avoid ambiguity as far as possible and embrace specificity where sustainable. It's a design principle I try to keep in mind wherever I write tests. (I find that TDD helps, because it encourages that kind of testability, but I've also found that every month and year of experience I get helps even more.)