When you read about history, the stories are mostly about the people involved, and not as often about the tools and technologies people used. In software engineering, the stories about the people involved are known as “human factors” - and it is often the human factors in the life of a software project that make life interesting for software developers.

My story is about the things our team did well and the things we could have done better during the course of a project that has spanned the past three years and, assuming all goes well, will go into production before this article goes to press. At the end of the project, what stand out are the human factors, far more than any technical issue.

This project was a complete student registration system for a major university’s extension program. This ASP.NET application included integration with a public-facing website, a front-end for users to register a student, an interface with a point-of-sale system to accept credit cards from a card swiper, and an accounting back-end. With a project this size you do expect to learn a lot, and believe me, we did!

What Went Right

The goal of every software project is to deliver working software that has value for the customer. From this standpoint, the project was a success, and the customer is using the software to operate their business every day. The key elements that led to a successful delivery of this system were close customer involvement, a solid application lifecycle model, good artifacts, a solid estimate and iterative delivery.

High Degree of Customer Involvement

Software engineering research often pinpoints lack of customer involvement as a significant factor in project failure. In our case, there was no lack of customer involvement. The customer and our project team were closely involved in all phases of the project lifecycle and were extremely committed to the success of the project, from inception to delivery. Each of the elements that we associate with project success was present: clear vision, commitment by executive sponsors and involvement of key business users. The customer’s key point of contact was available nearly every day throughout the life of the project to provide analysis and feedback on issues and concerns that came up during development. Without this level of commitment and focus, the project would not have been as successful and may have been at risk.

Process, Process, Process

Our team has a clear software project delivery model, and this project was an excellent validation of our model. In the inception phase, the customer engaged us to assess the scope and high-level requirements, and to determine project feasibility. Based on the information gathered during inception, the customer hired us for the design phase to elaborate the technical requirements for the system to be developed. Based on the requirements and design produced, we won the contract to develop the system and then executed iterative detailed design, development and delivery cycles within the development phase. Once we completed principal development, we worked closely with the customer to complete integration testing and prepare for delivery. We followed our lifecycle model and the model works.

Solid Design Phase Deliverables

We worked closely with the customer team and key stakeholders to elicit requirements and develop fully-dressed use cases along with an extensive user interface prototype. The use cases documented what the system would do and how it would behave, without user interface details, and continued to serve as a critical resource throughout the project. In parallel with the use cases, we developed a user interface prototype and then a detailed technical specification documenting the screens and controls shown in the prototype. We received extensive feedback from the customer regarding the use cases, prototype and specifications, which we incorporated into the deliverable artifacts to ensure that we captured the customer’s intent. The prototype was available online throughout the project and served as the baseline for all features and requirements. Remarkably, looking at the prototype today, one can see that the completed system has only minor cosmetic differences from the features demonstrated by the prototype.

Good Estimates

Following our process, we used several approaches to estimating, and we built the estimates on our experience and existing models. Everyone at every level of the project was involved: developers, project managers and senior management. Our estimation model takes into account the complexity of features and requirements, integration, testing, resources and project management, to name just a few of the factors considered. During development, the estimate proved its validity: principal development for each feature was completed on schedule and within the time estimated for each requirement.

A Solid Foundation

Every project at PDSA starts with a common architecture based on the PDSA .NET Productivity Framework. Beginning with a standard architecture, with common elements and project layers already defined, was a tremendous productivity boost, since no additional time was spent developing and validating the high-level solution architecture prior to the start of development.

All the key elements were defined at the outset and remained unchanged over the life of the project, across two versions of Visual Studio: business layer, data layer, configuration layer, utilities, user interface, reporting, providers and unit test layer. In addition, at the start of development, we assessed project-specific requirements for common features and produced detailed designs and architecture for navigation, validation, user interface base class extensions and an approach for integrating reports. The generalized features and architecture built at the outset proved to be very beneficial in the long run, and provided a common foundation that had very few changes over the life of the project.
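
To make the layering concrete, here is a minimal, purely illustrative sketch of the kind of separation described above. The interface and class names are hypothetical; they are not taken from the PDSA .NET Productivity Framework or from the actual project code.

using System;

// Hypothetical sketch of a layered call chain (UI -> business layer -> data layer).
// The names are illustrative only, not the actual framework types.

// Data layer contract: isolates all persistence behind an interface.
public interface IStudentRepository
{
    Student GetById(int studentId);
    void Save(Student student);
}

// Business layer: enforces rules and never talks to the database directly.
public class StudentManager
{
    private readonly IStudentRepository _repository;

    public StudentManager(IStudentRepository repository)
    {
        _repository = repository;
    }

    public void Register(Student student)
    {
        if (string.IsNullOrEmpty(student.Email))
            throw new ArgumentException("A student must have an email address.");

        _repository.Save(student);
    }
}

// Simple entity shared across the layers.
public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

Because the user interface and the unit tests depend only on interfaces like the one above, each layer can change independently, which is one reason a common foundation like this can survive with very few changes over the life of a project.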

Incremental and Iterative Development

We delivered a working subset of the features across thirteen milestones. Our team wrote solid code that followed the project design principles and incorporated all of our best practices. Many features were developed, tested, and worked as designed right away. Early releases were somewhat informal, but we adjusted the process, and the delivery process became a model for future projects. We met with the customer on a regular basis and provided frequent status reports to key stakeholders. All communication funneled through designated focal points, and all issues and bug reports were tracked in issue-tracking software, which ensured that nothing got lost in the “noise” of day-to-day project operations.

Opportunities for Improvement

Every project, software or otherwise, has a few things that could have gone more smoothly or that the team could have been better prepared for. Our team here at PDSA, Inc. is no different. In fact, we find that when we perform an in-depth post-mortem on our projects, we get better at ensuring projects are delivered on time and on budget. We are proud of our track record in this regard, but we always strive to be better. So, let me now discuss some of the things that could have gone better on this project and the things we learned.

Culture Clashes

Many of the organizations we work with are entrepreneurial, for-profit enterprises, and as a result, our approach to developing and delivering software has evolved to fit the needs of those organizations. On this student registration project, the customer - although fully funded by their own revenue - has a relatively bureaucratic culture and an analytical, academic approach to nearly every issue. These cultural differences surfaced time and time again. Often, when we requested a decision about an issue, the customer would want to hold a meeting involving several members of their team to reach consensus on the best approach. Our expectation was that a key decision maker would weigh the factors and provide direction - from our perspective, sitting down for a lengthy meeting was a waste of valuable resources.

We can try to encourage or reward “good” behavior, but a customer’s corporate culture is not going to change overnight. The takeaway is that we always need to adjust to our customer’s way of life; we cannot expect them to adjust to us.

Customer Service and the Fixed Price Contract

It is often said that consulting firms should avoid fixed price contracts. However, the customer’s purchasing model required a fixed price contract for a project of this size and scope. Even with the strength of the design and technical requirements documentation we produced, we found that we needed more precise, contractual definitions of what would or would not be done for each feature and for each step in the project lifecycle. Unfortunately, this is hard to balance against being customer-oriented. We are accustomed to telling our customers “Yes, no problem!” when working on a time and materials project.

It is harder to provide good customer service if you are supposed to say “that is not in scope” or “that is not in the specification” in response to an issue reported by the customer. Realizing that we are in the customer service business and not the software business, we determined it is far better to engage the customer in a discussion, meet with them face to face and try to come to some consensus before saying “No” or asking for a change order. Saying “No” often costs more time and effort than saying “Yes” or coming to an agreement on a way forward.

Managing Customer Expectations

At every step of the project, the customer had extremely high expectations. First, being somewhat tongue-in-cheek, one might say that the customer perceived our team as a bunch of “rocket scientists”, a perception tied to our reputation as a professional software development consulting firm. Although we consider ourselves to be quite good, we are simply experienced software developers who use a good process model and tight standards in order to get things done quickly and deliver working software. However, the customer’s perception of us led to some instances where they believed that we should “just know” how some feature of the system should work at a time when we did not fully understand the requirements and they were having some difficulty articulating their own business rules. This caused some amount of frustration for both our team and the customer.

Second, we needed a clear, up-front agreement and consensus about what would or would not require a change order, coupled with the recognition by both teams that change orders are inevitable and even desirable. Third, towards the last quarter of the project, we realized that more “face time” with the customer is extremely important, and could have helped us manage expectations and process issues much earlier in the project. Lastly, the customer expected that the custom software would have few bugs when they began testing the system and that the bugs they reported would be resolved more quickly.

Implicit Requirements Are the Root of All Evil

Several requirements were stated in vague or imprecise language in some of the use cases and in the technical requirements artifacts. For example, a requirement such as “create invoices and generate journal records when an invoice is approved” documents what the system will do, but it does not provide sufficient detail about how it will work. Requirements should never be written at a level that dictates how to code the solution, but when a requirement is this high-level there is too much room for interpretation. High-level statements and broad assertions should have been broken down into more detailed components that work toward how the requirement would be implemented, without crossing the line into prescribing the code.

Developers Like Writing Code, Not Requirements

During the design phase, we delegated parts of the requirements documentation to development team members, both as contributors to the specification process and as future “owners” of features of the system. Since every member of the team is an experienced software engineer, he or she is qualified to perform multiple roles: analyze requirements, write specifications and produce code. However, some of the requirements our developers contributed to the specifications were somewhat vague and could have been analyzed more thoroughly. We reviewed and rewrote several requirements, but in the large volume of documentation (over 900 pages), we could not catch everything. Later on, we found that even a single vague or misleading requirement could lead to hours of rework on a feature that was not thoroughly understood.

Improving Our Estimation Model

Our estimation model is very good, and as stated above, we developed good estimates for the time it would take to develop the features of the system. However, we need a better model for estimation based on overall level of complexity and time for system integration. In part, the estimation model needs to be tied more closely to use cases, since the use cases provide a good representation of both the quantity of process flows and the overall complexity of each process. In addition, we often rely too much on too few viewpoints, and we could benefit from the practice of other estimation models that help build estimates from multiple points of view.

Our developers need more training about building estimates, and they cannot provide good estimates when they do not understand the problem or requirements thoroughly. If you hear your developers saying “I have no idea” or “I’m not sure because I don’t know how it will work” then the requirements are not detailed enough and they need to spend more time coming up with the questions that will lead them to be able to estimate the work.
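
As one illustration of building an estimate from multiple points of view, the classic three-point (PERT-style) technique combines optimistic, most likely and pessimistic figures into a weighted average. This is a generic sketch, not our actual estimation model, and the numbers are hypothetical.

using System;

// Generic three-point estimate: a weighted average of three viewpoints.
// Illustrative only; this is not the PDSA estimation model.
public static class ThreePointEstimate
{
    public static double Expected(double optimistic, double mostLikely, double pessimistic)
    {
        // The most likely case is weighted four times as heavily as either extreme.
        return (optimistic + 4 * mostLikely + pessimistic) / 6.0;
    }

    public static void Main()
    {
        // Hypothetical feature: 3 days best case, 5 days most likely, 10 days worst case.
        double days = Expected(3, 5, 10);
        Console.WriteLine($"Expected effort: {days:F1} days"); // prints 5.5
    }
}

Even a simple model like this forces the estimator to state a worst case out loud, which is exactly the kind of question an inexperienced developer tends to skip.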

Choose Your Milestones Carefully

The selection of which features would be delivered in which milestone was somewhat arbitrary and driven by contractual terms, not by what made sense to develop together or at the right time. For example, reports were developed too early in the project lifecycle, simply because the contract milestone listed several reports as due by a specific date, but at that point no one really understood how the data for those reports moved through the system. Later on there was a lot of rework on reports, because once the system was producing data correctly, the reports turned out to be wrong. Another approach would have been to build features to satisfy use cases and make use cases the focus of milestones; for example, “All features that support the use case ‘Produce Invoices’ will be ready by this date”. In the future, we would also add a milestone for integration - when all the component or feature milestones were complete, the system was not complete, since those features had not been fully integrated or tested as a whole.

How to Write Better Unit Tests

Unit tests provided a high level of validation that features were functioning correctly, detected regressions when changes occurred and established a quality baseline across the system. When we started, a few of our developers were inexperienced with unit testing, but they learned quickly and the quality of tests improved over time. On this project, we elected to create many data-driven unit tests. In retrospect, we should have used mocks instead: setting up data for tests takes too much time, and the tests break when the test suite cannot find appropriate test data for the conditions being tested. During reviews, we found several kinds of tests that needed improvement.

  • Tests that relied on SQL data were written with conditions that skipped the test when no test data was present, so the run reported success without verifying anything - a false positive.
  • Tests that called multiple test methods from within a single test - if one method failed, the remaining tests never executed.
  • Tests with complex setup or initialization, where a failure in the setup itself caused the test to fail.

All of these stemmed from developers who were new to unit testing and did not yet understand how to write good unit tests. We were able to train and correct course as we progressed, and the unit test suite remains an important element of the project.
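
To show the direction we now prefer, here is a minimal MSTest-style sketch that replaces SQL test data with a hand-rolled fake, so the test cannot be skipped for lack of data, verifies one behavior, and has trivial setup. The interface and class names are hypothetical, not the project’s actual code.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical abstraction over the data layer so the test needs no database.
public interface IInvoiceRepository
{
    void Save(Invoice invoice);
}

public class Invoice
{
    public decimal Amount { get; set; }
    public bool Approved { get; set; }
}

// Hand-rolled fake: records what was saved instead of hitting SQL Server.
public class FakeInvoiceRepository : IInvoiceRepository
{
    public List<Invoice> Saved = new List<Invoice>();
    public void Save(Invoice invoice) { Saved.Add(invoice); }
}

// Hypothetical business class under test.
public class InvoiceService
{
    private readonly IInvoiceRepository _repository;
    public InvoiceService(IInvoiceRepository repository) { _repository = repository; }

    public void Approve(Invoice invoice)
    {
        invoice.Approved = true;
        _repository.Save(invoice);
    }
}

[TestClass]
public class InvoiceServiceTests
{
    [TestMethod]
    public void Approve_SavesApprovedInvoice()
    {
        // Arrange: trivial setup and no SQL data, so the test can never be
        // silently skipped because a record is missing.
        var repository = new FakeInvoiceRepository();
        var service = new InvoiceService(repository);
        var invoice = new Invoice { Amount = 100m };

        // Act
        service.Approve(invoice);

        // Assert: one behavior per test.
        Assert.AreEqual(1, repository.Saved.Count);
        Assert.IsTrue(repository.Saved[0].Approved);
    }
}

The same idea works with a mocking library; the point is that each test owns its own inputs instead of depending on whatever happens to be in a shared database.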

Developer Best Practices

On every project, we always seek out ways to improve our internal process for developing software. Many of our best practices today came from projects we implemented in the past. Here are a few of the recommendations gathered from this project:

  • Always create and use realistic data - data that is as close as possible to what the customer will use in the real world. Do not create a customer with Customer ID “1” and then use that record for every test to make your life easier; use a variety of test records when testing (a brief sketch of this idea follows the list).
  • Create detailed samples and templates at the start of the project that demonstrate the approach each developer will use for common features.
  • Ensure team members follow company standards and project-specific standards at all times by performing regular code reviews.
  • Avoid monolithic methods - a structured coding best practice is to write small, meaningful methods that each do a specific task; when someone produces a method that goes on for pages, the code is hard to change and hard to maintain. And things do change, because no one ever gets it exactly right on the first pass, even if everything else is aligned.
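
As a small sketch of the first recommendation, a helper like the following can generate varied, realistic-looking customers instead of reusing a single record everywhere. The types, names and values are illustrative only, not the project’s actual code.

using System;
using System.Collections.Generic;

// Illustrative test-data builder: varied, realistic-looking customers
// instead of one shared record with Customer ID "1".
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

public static class TestCustomers
{
    private static readonly string[] Names = { "Maria Alvarez", "John Chen", "Priya Patel", "Sam O'Neill" };
    private static readonly string[] Cities = { "Irvine", "Tustin", "Anaheim", "Fullerton" };

    // Seeded so the data varies across records but every test run is repeatable.
    private static readonly Random Rng = new Random(42);

    public static Customer Next(int id)
    {
        return new Customer
        {
            Id = id,
            Name = Names[Rng.Next(Names.Length)],
            City = Cities[Rng.Next(Cities.Length)]
        };
    }

    public static List<Customer> Batch(int count)
    {
        var customers = new List<Customer>();
        for (int i = 1; i <= count; i++)
        {
            customers.Add(Next(i));
        }
        return customers;
    }
}

Passing different records through the same tests surfaces problems - longer names, apostrophes, different cities - that a single “Customer 1” row never would.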

Summary

Overall, the project went well; the system does the job it was intended to do and provides a good foundation for future expansion and new requirements. The customer has an existing portfolio of older .NET Windows Forms applications, whose features could be added to the new ASP.NET system we developed for them. We learned a lot from this large project, as we have from every project we do. I hope that what you take away from this article is that it is worth the time to do a post-mortem on each project. Think about what you can do better and incorporate those findings back into your Software Development Lifecycle process so you do not make the same mistakes next time.