If we compare a software project to a tunnel construction project: a tunnel has two ends, construction begins from both, and the ultimate objective is that the crews meet at a pre-defined location. Software has the same shape. There's the business side and there's the delivery side, and each must have a clear and common understanding of what is to be delivered. That's the project. But what governs the project? The rules we codify, adopt, and enforce.
Rules are a means by which an activity is constrained through one or more requirements. Procedures are the means by which the benefits conferred by rules are realized. Procedures are themselves a form of rule and are subject to rules.
In the legal realm, there exists the Best Evidence Rule for documents sought to be admitted at trial. The rule is simple: the “Original” is deemed to be the best evidence of what the document purports to contain. The rule is codified in the Federal Rules of Evidence (FRE), Rule 1002, Requirement of the Original, which states: “An original writing, recording, or photograph is required in order to prove its content unless these rules or a federal statute provides otherwise.” In plain English, the rule requires the original, not a copy. If the original isn't available, the burden falls to the party seeking to admit the copy to argue successfully why they should be permitted to do so. There are other rules that govern what's required to make such an argument, and the rule quoted above acknowledges that it's subject to other rules.
What if the original isn't available and only a copy is? Is it a true and correct copy of the original? Or is it something else? Rules 1003-1007 address the various scenarios when the original isn't available. There are other bodies of rules, such as the Federal Rules of Criminal Procedure and the Federal Rules of Civil Procedure, within which the FRE applies. From a systems perspective, the Federal Rules of Criminal and Civil Procedure each have a dependency on the Federal Rules of Evidence. This is in the context of a criminal or civil trial where evidence is sought to be admitted. What's required to seek admittance? How do we decide what's admissible? If admissible, how do we admit it? This is what a judge is tasked with deciding. To make those decisions, very simplified, a judge applies the rules to the facts and makes a ruling. For the party on the losing end of a ruling, the rules and procedures specify how to preserve issues for appeal and keep the trial moving forward. If the trial can't move forward, it just stops.
A useful, overarching metaphor is that rules should be the sharp edges that keep our work on the correct path. When a rule is violated, it should cause some level of discomfort. For if there are no consequences, what good are rules in the first place? Any software project is subject to constraints, whether imposed by the business, by engineering, or through governmental regulation. In that regard, software projects are no different in concept than any construction project, legal case, business plan, or marketing program. At a certain level, the things humans produce, which includes processes and procedures, are alike to the extent that all are subject to constraints, all are undertaken to produce value, and when things don't work, the consequences are noticeable, perhaps quite uncomfortable.
Despite the utility that rules provide, people often see rules and procedures as cumbersome things that just get in the way. On the other hand, people want to know what rules, if any, exist to inform how an action should be undertaken. Whether it's a trial, organizational administration, or a software project, there are many details at play, initiated by different people, all of which must be compatible with one another. Mature teams employ the discipline and rigor to develop operating rules and principles based on the best evidence, evidence that provides direct empirical data and allows teams to objectively eliminate or rank alternatives. Mature and competent leadership recognizes that teams must be afforded the time to develop such rules, because it requires a lot of evidence gathering.
Rules are your compass.
The gap between the desired state, as defined by rule, and the actual state is a crucial data point, and it doesn't come for free. Measuring it takes time and thus incurs a cost, which only makes sense if there's an associated increase in quality. Whether it's a legal team or a development team, the thing being acted upon is an undertaking. That's what a software project essentially is: an undertaking. Although the subject matter differs, at an abstract level all undertakings and projects are the same in that they start, have some period of activity, and finish. For there to be any objective quantitative or qualitative post-delivery analysis, there must be some measurable basis for it. Rules specify that basis and how it's to be enforced.
Rules are important, but you may not have anything formalized. When you recognize that, and you're afforded the time to start codifying your rules, start that task immediately. An excellent place to start is with your team's definition of done.
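A definition of done carries the most weight when it's machine-checkable rather than aspirational. Here's a minimal sketch of what that might look like as a CI gate; the criteria, file names, and the 80% coverage floor are assumptions for illustration, not a prescription.

```python
# A hypothetical "definition of done" gate, run in CI before a merge is allowed.
# Every name and threshold here is illustrative; substitute whatever your
# team has actually codified.
import pathlib
import sys

COVERAGE_FLOOR = 0.80  # assumed team rule: at least 80% line coverage


def coverage_ok(report_path: str = "coverage.txt") -> bool:
    """Read a one-line coverage ratio (e.g. '0.83') written by the test run."""
    path = pathlib.Path(report_path)
    return path.exists() and float(path.read_text().strip()) >= COVERAGE_FLOOR


def changelog_updated(changelog: str = "CHANGELOG.md") -> bool:
    """Assumed rule: every change ships with a changelog entry."""
    return pathlib.Path(changelog).exists()


def main() -> int:
    failures = [name for name, ok in [
        ("coverage below floor", coverage_ok()),
        ("changelog missing", changelog_updated()),
    ] if not ok]
    for failure in failures:
        print(f"definition-of-done violation: {failure}", file=sys.stderr)
    return 1 if failures else 0  # non-zero exit fails the build: the rule has teeth


if __name__ == "__main__":
    sys.exit(main())
```

The design point is the exit code: a violated rule stops the pipeline, which is exactly the "discomfort" a violated rule should produce.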
Before delving into a specific example, I want to address the importance of shared context. For the rules discussed thus far, the shared context is the U.S. Constitution. Without a shared context, we're each left to make our own assessment. In a group that needs to collaborate and cooperate in order to achieve some end, our respective assessments need to reflect our individual points of view, but also give way to some common assessment that's based on some operating rule or procedure. The shared context in this case isn't something as broad as “Technology.” That's too broad an expanse and can only be addressed in an abstract way.
The broad context isn't your specific project either. Your project is made up of several, perhaps many, sub-components that are all subject to their own rules, and often there may be conflict among those rules. Perhaps you need to write a gateway for an external API (as sketched below). That's a manageable context for this examination. What are the rules for interfacing with an external API? There are your own development standards, perhaps. There are also legal and regulatory requirements (think HIPAA, PCI, SOC, etc.).

From project to project and within a project, the goal should be to standardize your decision-making. You might find that the relevant facts are entirely dissimilar; even so, the basis for deciding how to proceed should be consistent. Otherwise, how can you measure a team's performance, especially if they're required to re-invent the wheel each time decisions must be made? Going back to the trial metaphor, from trial to trial the facts will be different, but the conceptual framework that's independent of any one case is what's used to determine whether a specific document is admissible evidence. In other words, rules become the distillation of past, specific instances. That's true in the law and it's certainly true in software development.
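To make that concrete, here's a minimal gateway sketch. The external payments API, its endpoint, and the field names are all hypothetical; the point is architectural: every rule for crossing the boundary (auth, timeouts, redaction of regulated fields) is codified in one place.

```python
# A minimal gateway sketch, assuming a hypothetical external payments API.
import json
import urllib.request


class PaymentsGateway:
    """Single boundary through which the team consumes the external API."""

    def __init__(self, base_url: str, api_token: str, timeout_s: float = 5.0):
        self.base_url = base_url
        self.api_token = api_token  # rule: credentials are injected, never hard-coded
        self.timeout_s = timeout_s  # rule: every outbound call has a timeout

    def get_payment(self, payment_id: str) -> dict:
        req = urllib.request.Request(
            f"{self.base_url}/payments/{payment_id}",
            headers={"Authorization": f"Bearer {self.api_token}"},
        )
        with urllib.request.urlopen(req, timeout=self.timeout_s) as resp:
            payload = json.load(resp)
        payload.pop("card_number", None)  # rule: regulated fields never leave the gateway
        return payload
```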
Assuming that you have no codified and accepted rules and are starting from scratch, one question that needs answering is: “for a given set of required response data, what's the required request message data?” That question begets several other questions, not the least of which is authentication. Does the function operate in an acceptable manner, and what's the best evidence to answer that question?
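That question is at its most useful when it's written down as an executable check. Here's a sketch in pytest style against the hypothetical gateway above; the required fields and the expected auth failure are assumptions standing in for your actual contract.

```python
# Contract-style tests: given the response data we require, what request
# (including credentials) must we send? All names here are hypothetical.
import urllib.error

import pytest

from gateway import PaymentsGateway  # the hypothetical module sketched above


def test_minimal_request_yields_required_response_fields():
    gw = PaymentsGateway("https://api.example.test", api_token="test-token")
    payment = gw.get_payment("pay_123")
    # The rule under test: these are the fields our feature requires.
    assert {"id", "amount", "status"} <= payment.keys()


def test_unauthenticated_request_is_rejected():
    gw = PaymentsGateway("https://api.example.test", api_token="")
    with pytest.raises(urllib.error.HTTPError):
        gw.get_payment("pay_123")
```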
The best evidence for answering the acceptability question isn't the documentation. Documentation may be the best evidence for how the function is specified to work. But how the function actually works under the circumstances relative to your requirements, that's the gap that must be closed.
It turns out that documentation, in this context, is irrelevant. The best acceptability evidence is a set of test fixtures that exercise the function within some range of load. The fixture not only demonstrates performance; the code also explains how to consume the function, whether as an embedded resource or as a call to an external endpoint (API). If you're looking for a better way to incorporate technical documentation into your development, well-written tests, whether unit, performance, load, or integration, are your best source. They illustrate not only what's to be done but how it's done, and the end result is always success or failure.
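As a sketch of that idea, the fixture below exercises the hypothetical gateway under a modest load; the 50-call sample and the 250 ms p95 budget are invented numbers standing in for whatever your team's codified performance rule actually says.

```python
# A fixture doubles as consumption documentation: it shows exactly how the
# function is constructed and called. Names and thresholds are assumptions.
import statistics
import time

import pytest

from gateway import PaymentsGateway  # hypothetical, as above


@pytest.fixture
def gateway():
    # This is how a consumer builds the gateway; the test documents it.
    return PaymentsGateway("https://api.example.test", api_token="test-token")


def test_lookup_holds_up_under_modest_load(gateway):
    latencies = []
    for _ in range(50):  # assumed load range; a real rule would codify this
        start = time.perf_counter()
        gateway.get_payment("pay_123")
        latencies.append(time.perf_counter() - start)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    assert p95 < 0.250  # assumed rule: p95 under 250 ms
```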
In my experience, the best evidence of whether a function works is its track record of operation under production conditions. In other words, is the API reliable under a given load? These API metrics are important because they're how the team consuming the API can isolate its own performance data from the API's. They also matter to the cloud engineers when they're assessing how to configure the platform in which the component under development will be hosted. And that gets to why projects benefit from legislation (rule-making) and administration.
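Building that track record means timing the upstream call separately from your own work. A minimal sketch, again against the hypothetical gateway, using only the standard library:

```python
# Isolate upstream latency so our component's numbers and the API's don't blur.
import logging
import time

from gateway import PaymentsGateway  # hypothetical, as above

log = logging.getLogger("payments.metrics")  # illustrative logger name


def fetch_with_isolated_timing(gw: PaymentsGateway, payment_id: str) -> dict:
    """Record the upstream call's latency as its own data point."""
    start = time.perf_counter()
    payment = gw.get_payment(payment_id)
    upstream_ms = (time.perf_counter() - start) * 1000
    # Emitted per call; aggregated over time, this becomes the API's track record.
    log.info("upstream_call endpoint=%s upstream_ms=%.1f", "get_payment", upstream_ms)
    return payment
```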
Today, development is more complicated than ever. Whether it's distributed teams, the ever-changing open-source dependencies in use, the cloud platforms themselves, or the legal and regulatory environment, there's an ever-increasing need to explain how things work, how bad things happened, and why a proposed remediation is likely to have a benefit. Those questions can only be answered if the requisite data exists, generated as a direct by-product of a process that was, in turn, subject to established and enforced rules.
Let's assume that for test purposes, the function works. What happens when you put it into production? Production is the most important test of all. That's where instrumentation by way of logging and telemetry comes into play. But the only way those things see the light of day is if instrumentation facilities are in scope as a first-class feature, and the only way to mandate that such features exist is by rule. Rules can't enforce themselves, though. The people on a development team must hold a common sentiment around why these things are important, and leadership must share that exact same sentiment.
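What “first-class” might look like in practice: instrumentation wired in at startup and at every request boundary, not bolted on after an incident. The logger names and correlation scheme below are illustrative assumptions.

```python
# Instrumentation as an in-scope feature: configured at startup, present at
# every boundary. All names here are illustrative.
import logging
import sys
import uuid

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("payments.service")


def handle_request(payment_id: str) -> None:
    correlation_id = str(uuid.uuid4())  # ties logs, metrics, and traces together
    log.info("request received id=%s correlation=%s", payment_id, correlation_id)
    try:
        # ... business logic would run here ...
        log.info("request completed correlation=%s", correlation_id)
    except Exception:
        log.exception("request failed correlation=%s", correlation_id)
        raise
```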
The best evidence of a mature and well-functioning team is one that can directly demonstrate (a) that it enforces codified and adopted rules and procedures, and (b) that it delivers objectively verifiable, high-quality software to production in a timely manner, subject to those rules and procedures. To the extent that development teams and the business collaborate and cooperate, each independently reading from the same playbook and guided by belief in the same over-arching principles, the software under development is afforded a meaningful opportunity at high quality and success.
That's the question the business ultimately must answer: whether it wants high-quality software. If the business does, it will afford teams the time necessary to cultivate the best evidence to answer its questions and it won't turn a blind eye to rules and procedures. The result? Better, high-quality software deliveries enabled by rules that kept the teams on the right path.