I often say, “Avoid tool salad!” If you hope to achieve greater productivity by reducing delivery lead time while improving code quality, tools are necessary. Knowing which tool you need, and when, is the critical part of the equation. There are two principal decisions to make. The first is to get clarity about what you need the tool to do. The second is whether to make or buy what you need, and it's more than just a shopping exercise: can your team build what you need, or would it be cheaper, faster, or more interoperable to buy it?

Perhaps nowhere in our industry is there more bias than in the process of tool evaluation. We can probably presume that you won't choose tools that have failed in past efforts. Although on the surface such a presumption may seem prudent, it can never be proven so: if you don't evaluate the tool for the present context, you'll never know whether, despite past failures, the tool was the right one for this job.

The converse scenario ends up being the worse transgression. It's the old case of “dance with the one that brung ya!” In other words: Tool X worked before; therefore, I'll choose it again, despite a different context. It's natural to cling to things that led to past successes. This is especially true when the decision-maker is new to the team. How many times have you been in a situation where something is working perfectly well, or well enough, only to be displaced by the new boss? Well, maybe the tool should be replaced. That brings up a third decision: how that tool should be replaced.

Tool bias is born of fear: fear of change, fear of the unknown, fear of unnecessary spending, fear of creating unnecessary work. There are many brands and types of tools available. A premium tool costs more, sometimes much more, than the generic brand. For a given context, is the premium tool better suited to the task? Or is the generic brand good enough? The premium tool, based on its construction, quality, and longevity, will certainly be good enough. In the final analysis, however, “good enough” in one context may very well not translate into “good enough” in another. Your organization and its use cases are another, distinct context.

In business, everything reduces to the quantitative measure of return on investment (ROI). Whether we're talking hammers or software tools, the concept of scale applies. A related question to ask is: How well does the tool operate at scale? When running at the upper bounds of its capabilities for an extended period of time, how well does the tool perform? If I use it every day, eight hours a day, under all kinds of conditions and load, how long will the tool remain fit for purpose?

Another well-known saying is “You get what you pay for.” If I must replace the cheaper tool because it doesn't really do the job or didn't scale with my product, is the cheaper tool really the better value proposition? When you consider what is “fit for purpose,” you must be careful to fully consider what the purpose is and what the criteria are for determining success or failure. An important first step in that effort is to leave bias behind and start with as clean a slate as possible. Otherwise, the exercise devolves into trial and error, throwing tools at problems, or what I refer to as “tool salad.”
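
To make that arithmetic concrete, here's a minimal sketch comparing two tools over a fixed horizon. The figures and the simple ROI formula are hypothetical, chosen only to illustrate the calculation, not drawn from any real product:

    # All figures are hypothetical, for illustration only.
    horizon_years = 8            # evaluation horizon
    annual_benefit = 5_000       # value delivered per year by either tool

    premium_cost = 12_000        # one purchase lasts the whole horizon
    generic_cost = 3_000         # assumed to need replacement every 2 years
    generic_life_years = 2
    migration_cost = 2_000       # cost of each forced replacement/migration

    # The generic tool is re-purchased 4 times over 8 years,
    # with a migration between each purchase.
    replacements = horizon_years // generic_life_years
    generic_total = generic_cost * replacements + migration_cost * (replacements - 1)

    def roi(total_benefit, total_cost):
        # ROI = (benefit - cost) / cost
        return (total_benefit - total_cost) / total_cost

    benefit = annual_benefit * horizon_years
    print(f"Premium ROI: {roi(benefit, premium_cost):.2f}")   # ~2.33
    print(f"Generic ROI: {roi(benefit, generic_total):.2f}")  # ~1.22

The specific numbers don't matter; what matters is that replacement and migration costs belong in the value-proposition calculation before you conclude that the cheaper tool is the better deal.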

When you review any tool, you'll see that it has a design, whether good or bad, and that design contemplates a range of use cases, whether broad or narrow. Despite what the marketing literature may imply, no good tool is so infinitely open-ended that it works with any use case. A phrase I say often is “marketing is not evidence.” In the pages of CODE Magazine, I've written numerous times: “people, process, and tools; in that order.” To date, I've focused mostly on the first two parts of that troika, with tools getting short shrift. The implication is that if you understand your people and process, and they rank high on the maturity scale, tool selection will take care of itself.

That implication, in my opinion, is erroneous for the very reason that the landscape of people, processes, and tools is always in a state of flux. Even if your landscape of people and processes is stable and mature, a purchased tool of any appreciable scope will rarely, if ever, match up perfectly with your current landscape. This is where something must give way to achieve compatibility.

Do you conform your processes to the tool? Or do you redefine your requirements? Conforming the tool to your organization by way of customization is typically not a complete option. Most tools today are SaaS offerings with a one-size-fits-all approach. Often, there's an extension option by way of custom modules that the customer must build. But even in those scenarios, the range of customizations and allowable operations is itself not an open-ended universe, especially if the host is a multi-tenant environment.

It's worth emphasizing why a third-party tool is being considered in the first place: to avoid custom software development and the associated maintenance. In other words: buy over build. But what if you can't buy what's required? That, in my opinion, leads to one of the biggest mistakes committed in technology management today: a per se presumption in favor of buy over build. The fallacy is that buying into an existing service is necessarily a better value proposition than building one yourself, even if it doesn't really do all the things you need it to do. Buying may very well look like the better value proposition if all you consider is the accounting. But that conclusion can only be supported by how well the use case matches up with what the tool actually delivers.

All too often, tool evaluation is divorced from the use case. The root cause is often too much importance and credibility placed on testimonials and other anecdotal evidence, rather than bona fide empirical evidence of how well the tool under consideration performs in an environment comparable to your firm's. The result can often upend your people and process landscape.

Tools often get blamed for failures when the remedy is actually conforming the team to the new tool, or when the organization is trying to extend the useful life of tools that are several years past their expected and reasonable life span.

To avoid such pitfalls, I recommend the following:

  • Don't discount building at least some of your tooling. Part of your firm's competitive advantage resides within your processes. Upending them based on the illusory notion that buying is necessarily better than building can have a deleterious effect on your business. That said, buying may, in fact, be better. The question is how much “better” the winning candidate is than the next, “less better” candidate.
  • Review your current architecture. It may very well be that you don't have a tooling problem. For example, your current CMS may be failing you because you've decided to make it an integration hub, a task it wasn't designed for. Employing the wrong tool for the job is double-dipping in the lake of problem generation and technical debt accrual! Instead of throwing tools at a problem, it may well be worth evaluating what you have today and taking a hard look at how those things are implemented. The same goes for throwing bodies at a problem. For anybody who thinks that throwing more developers into a project will reduce delivery lead time, I respectfully direct those folks to the book “The Mythical Man-Month” (1975) by Frederick Brooks. In that regard, tools are no different, and what often results is treating the symptoms of our software problems rather than their root causes.
  • Consider drafting a Request for Proposal (RFP) or a Request for Information (RFI). Successful projects require planning, the basis of which is requirements. The same can be said for tool acquisition. Adopting a tool, big or small, narrow or broad in scope, requires careful consideration. Even something as small as an open-source (OSS) library may have a large knock-on effect over time. This can be especially true if yours is a regulated, compliance-driven environment. RFPs and RFIs also help you avoid hindsight and confirmation bias. By setting your acceptance criteria up front, where such criteria are based on a complete understanding of your use case(s), you're likely to make better, more informed decisions (see the scoring sketch after this list). And if your use case and requirements are unclear, perhaps you should consider taking a step or two back and reevaluating your approach. If you're wondering what technical debt is, it's been hiding in plain sight. You've found it, and it's staring you in the face! There's no magic wand to make it disappear!
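
As a minimal sketch of what up-front acceptance criteria can look like in practice, consider a simple weighted scoring matrix. The criteria, weights, scores, and candidate names below are hypothetical placeholders; substitute those that reflect your own use cases:

    # Hypothetical criteria and weights, committed to *before* evaluation begins.
    criteria = {
        "fit to use case":         0.40,
        "operates at scale":       0.25,
        "total cost of ownership": 0.20,
        "compliance support":      0.15,
    }

    # Scores (0-10) recorded against each candidate during evaluation.
    candidates = {
        "Tool A": {"fit to use case": 8, "operates at scale": 6,
                   "total cost of ownership": 7, "compliance support": 9},
        "Tool B": {"fit to use case": 6, "operates at scale": 9,
                   "total cost of ownership": 5, "compliance support": 7},
    }

    # Weighted total per candidate; higher is a better fit to the criteria.
    for name, scores in candidates.items():
        weighted = sum(weight * scores[c] for c, weight in criteria.items())
        print(f"{name}: {weighted:.2f} / 10")

Because the weights are written down before any demo or testimonial is seen, the matrix makes it harder for hindsight or confirmation bias to quietly rewrite the criteria after the fact.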

Above all, understand why a tool is under consideration and a candidate for selection. Even if you can't do anything about that situation, you can, at the very least, prepare for the possible consequences that follow. If any of that causes you angst, I suggest that you ask for the serenity to accept the things you can't change, the courage to change the things you can, and, through experience, the wisdom to know the difference. Wisdom is achieved through the experience of making what are ultimately bad judgments! Decisions are never perfect. Despite that, we can still endeavor to make and offer the best-informed judgments we can, based on the circumstances. That starts with a healthy dose of intellectual honesty, which requires looking at things as they are, not as we wish them to be. This isn't always an easy message to deliver. To aid that effort, I suggest concentrating on the benefits such objectivity achieves, for the simple reason that anything else is akin to doubling down on previous mistakes, with the likely result of sub-optimal outcomes. Again.