If you’re going to go to all the trouble of building a software system, you’d like to be confident that you’re working from the right requirements. You’d like to know that the product you build has a high chance of satisfying customer needs and will let customers get their jobs done in a way they find acceptable and maybe even enjoyable. But without taking the time to validate the requirements, the risk of missing the mark goes up.
Most software developers have experienced the frustration of being asked to implement requirements that were ambiguous or incomplete. If they can’t get the information they need, the developers have to make their own interpretations, which aren’t always correct. Substantial effort is needed to fix requirement errors discovered after those requirements have already been implemented. Any measures you can take to detect errors in the requirements specifications will save you considerable time and money. This article, adapted from my book Software Requirements, 2nd Edition (Microsoft Press, 2003), describes the importance of requirements validation and some valuable techniques to try.
Validation Defined
Requirements validation is the fourth component—with elicitation, analysis, and specification—of requirements development. Validation assesses whether a product actually satisfies the customer needs (doing the right thing). In contrast, verification determines whether the product of a development activity meets the requirements established for it (doing the thing right). Both activities are vital to successful product development, but we will focus on validation here. Requirements validation attempts to ensure that:
- The software requirements specification (SRS) correctly describes the intended system capabilities and characteristics that will satisfy the various stakeholders’ needs.
- The software requirements were correctly derived from the system requirements, business rules, or other sources.
- The requirements are complete and of high quality.
- All requirements representations are consistent with each other.
- The requirements provide an adequate basis to proceed with design and construction.
Validation ensures that the requirements exhibit the desirable characteristics of excellent requirement statements (complete, correct, feasible, necessary, prioritized, unambiguous, and verifiable) and of excellent requirements specifications (complete, consistent, modifiable, and traceable). Of course, you can validate only requirements that have been documented, not implicit requirements that exist only in someone’s mind. This is why I endorse actually writing down requirements details instead of relying on imperfect human memories.
Validation isn’t a single discrete phase that you perform after gathering and documenting all the requirements. Some validation activities, such as incremental reviews of the growing SRS, are threaded throughout the iterative elicitation, analysis, and specification processes. Other activities, such as formal SRS inspection, provide a final quality gate prior to baselining the SRS. Include requirements validation activities as discrete tasks in your project plan.
Project participants sometimes are reluctant to invest time in reviewing and testing an SRS. Intuitively, it seems that inserting time into the schedule to improve requirements quality would delay the planned ship date by that same duration. However, this expectation assumes a zero return on your investment in requirements validation. In reality, that investment can actually shorten the delivery schedule by reducing the rework required and by accelerating system integration and testing. Better requirements lead to higher product quality and customer satisfaction, which reduce the product’s lifetime costs for maintenance, enhancement, and customer support. Investing in requirements quality always saves you more money than you spend.
Test Thinking
On many projects, testing is a late-stage activity. Requirements-related problems linger in the product until they’re finally revealed through time-consuming system testing or by the customer. If you start your test planning and test-case development early, you’ll detect many errors shortly after they’re introduced. This prevents them from doing further damage and reduces your testing and maintenance costs.
Figure 1 illustrates the V model of software development, which shows test activities beginning in parallel with the corresponding development activities. This model indicates that acceptance testing is based on the user requirements, system testing is based on the functional requirements, and integration testing is based on the system’s architecture.
Plan your testing activities and begin developing preliminary test cases during the corresponding development phase. You can’t actually run any tests during requirements development because you don’t have any software to execute yet. However, conceptual (that is, implementation-independent) test cases based on the requirements will reveal errors, ambiguities, and omissions in your SRS and analysis models long before the team writes any code.
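One way to make conceptual test cases concrete is to record them as structured data keyed to requirement IDs, which also lets you probe for requirements that no test case exercises yet. The sketch below is a hypothetical illustration, not from any actual SRS: the requirement IDs, scenarios, and expected outcomes are invented, and the coverage check is a simple set difference.

```python
# Hypothetical conceptual test cases drafted during requirements development,
# before any implementation exists. Requirement IDs and scenarios are invented
# for illustration only.

conceptual_tests = [
    {"req_id": "REQ-7", "scenario": "order total within credit limit",
     "expect": "order accepted"},
    {"req_id": "REQ-7", "scenario": "order total exceeds credit limit",
     "expect": "order rejected with a stated reason"},
    {"req_id": "REQ-9", "scenario": "requested item not on approved list",
     "expect": "request refused"},
]

def uncovered_requirements(all_req_ids, tests):
    """Return requirement IDs with no conceptual test yet: a completeness probe."""
    covered = {t["req_id"] for t in tests}
    return sorted(set(all_req_ids) - covered)

# REQ-8 has no test case, so reviewers know a requirement is untested
# (or perhaps untestable as written).
print(uncovered_requirements(["REQ-7", "REQ-8", "REQ-9"], conceptual_tests))
```

Even this small amount of structure forces each expected outcome to be stated explicitly, which is exactly where ambiguous requirements tend to surface.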
Reviewing Requirements
I’m a big fan of performing both formal and informal reviews of growing requirements documents. A requirement might make perfectly good sense to the person who wrote it, but if others don’t understand it, there’s a problem. User representatives are particularly well suited to validating the correctness of each requirement. Developers and testers can examine requirements to assess whether they understand each one well enough to do their part of the project work based on that requirement. The structured and disciplined technique of inspection provides a way for various reviewers to compare their interpretations and make sure they understand each requirement in the same way. Simply passing requirements out to multiple reviewers and asking what they think does not accomplish this.
I’m so enthusiastic about reviews of requirements and other software project deliverables that I wrote a book about them, Peer Reviews in Software: A Practical Guide (Addison-Wesley, 2002). I’ve provided some recommendations about how to get the most out of your requirements reviews in two earlier blog posts titled “Two Eyes Aren’t Enough,” parts one and two.
Evaluating Prototypes
Most people have difficulty envisioning how a new system will look and behave from reading a textual requirements specification. Prototypes are a way to bring requirements to life, to put something more tangible in front of the user and solicit feedback. A prototype represents a partial, preliminary, or possible way you might address a particular set of requirements. Prototypes can be static or dynamic, electronic or paper, evolutionary or throwaway. When you create a prototype, you’re taking a tentative step into the solution space. While not committing you to building the software in a particular way, a prototype is an excellent tool for requirements validation. Users can interact with prototypes, thereby simulating their interactions with the ultimate system, to see if a system based on those requirements would really meet their needs.
In recent years, vendors have developed numerous tools to help automate and streamline this process. You can now buy tools that simulate proposed systems, help you quickly build prototypes or wireframe representations of the system, and model the behavior of the system based on a set of use cases or functional requirements. All of these tools are intended to facilitate a user’s understanding of requirements and to help validate that they are in fact the correct requirements for the system.
Defining Acceptance Criteria
Software developers might believe that they’ve built the perfect product, but the customer is the final arbiter. Customers perform acceptance testing to determine whether a system satisfies its acceptance criteria. If it does, the customer can pay for a product developed under contract or the user can cut over to begin using a new corporate information system. Acceptance criteria—and hence acceptance testing—should evaluate whether the product satisfies its documented requirements and whether it is fit for use in the intended operating environment. Having users devise acceptance tests is an effective requirements development strategy. The earlier in the development process that users write acceptance tests, the sooner they can begin to filter out defects. In fact, some agile software development methodologies employ user acceptance tests in lieu of writing detailed functional requirements.
Acceptance testing should focus on anticipated usage scenarios. Key users should consider the most commonly used and most important use cases when deciding how to evaluate the software’s acceptability. Acceptance tests focus on the normal courses of the use cases, not on the less common alternative courses or whether the system handles every exception condition properly. Automate acceptance tests whenever possible. This makes it easier to repeat the tests when changes are made and additional functionality is added in future releases. Acceptance tests also ought to address nonfunctional requirements. They should ensure that performance goals are achieved on all platforms, that the system complies with usability standards, and that all committed user requirements are implemented.
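Automated acceptance tests for a use case’s normal course can be written as ordinary test functions. The sketch below is a minimal, hypothetical illustration: `place_order` is a stand-in for the real system interface, and the credit-limit rule it encodes is invented for the example, not drawn from any real requirements document.

```python
# Minimal sketch of automated acceptance tests for the normal course of a
# hypothetical "place order" use case. place_order() is a toy stand-in for
# the real system interface; its rules are invented for illustration.

def place_order(item, quantity, credit_remaining, unit_price):
    """Toy stand-in: accept the order if it fits within remaining credit."""
    total = quantity * unit_price
    if total <= credit_remaining:
        return {"status": "accepted", "total": total}
    return {"status": "rejected", "reason": "credit limit exceeded"}

def test_normal_course_order_accepted():
    # Normal course: a routine order within the credit limit is accepted.
    result = place_order("widget", 3, credit_remaining=100.0, unit_price=20.0)
    assert result["status"] == "accepted"
    assert result["total"] == 60.0

def test_order_over_credit_limit_rejected():
    # Acceptance criterion: an over-limit order is rejected with a reason.
    result = place_order("widget", 10, credit_remaining=100.0, unit_price=20.0)
    assert result["status"] == "rejected"
    assert "reason" in result

test_normal_course_order_accepted()
test_order_over_credit_limit_rejected()
```

Because the tests are plain functions, they can be rerun unchanged against each new release, which is what makes automated acceptance tests cheap to repeat as functionality is added.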
Having customers develop acceptance criteria thus provides another opportunity to validate the most important requirements. It’s a shift in perspective from the requirements-elicitation question of “What do you need to do with the system?” to “How would you judge whether the system satisfies your needs?” If the customer can’t express how she would evaluate the system’s satisfaction of a particular requirement, that requirement is not stated sufficiently clearly. However, keep in mind that user acceptance testing does not replace comprehensive requirements-based system testing, which covers both normal and exception paths and a wide variety of data combinations that users might not think of.
Simply writing requirements isn’t enough. You also need to make sure that they’re the right requirements and that they’re good enough to serve as a foundation for design, construction, testing, and project management. Acceptance test planning, peer reviews, prototype evaluation, and requirements testing techniques will help you build higher-quality systems faster and at lower cost than you ever have before.
Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars. Karl Wiegers is an independent consultant and not an employee of Jama. He can be reached at http://www.processimpact.com. Enjoy these free requirements management resources.