Disciplined software organizations collect a focused set of metrics about each of their projects. These metrics provide insight into the size of the product; the effort, time, and money that the project or individual tasks consumed; the project status; and the product’s quality. Because requirements are an essential project component, you should measure several aspects of your requirements engineering activities. This two-part series, adapted from my book *More about Software Requirements*, describes several meaningful metrics related to requirements activities on your projects.
Product Size
The most fundamental metric is the number of requirements in a body of work. Your project might represent requirements by using a mix of use cases, functional requirements, user stories, feature descriptions, event-response tables, and analysis models. Ultimately, though, the team implements functional requirements: descriptions of how the system should behave under specific conditions.
Begin your requirements measurement by simply counting the individual functional requirements that are allocated to the baseline for a given product release or development iteration. If different team members can’t count the requirements and get the same answer, you have to wonder what other sorts of ambiguities and misunderstandings they’ll experience. Knowing how many requirements are going into a release will help you judge how the team is progressing toward completion because you can monitor the backlog of work remaining to be done. If you don’t know how many requirements you need to work on, how will you know when you’re done?
Of course, not all functional requirements will consume the same implementation and testing effort. If you’re going to count functional requirements as an indicator of system size, your analysts will need to write them at a consistent level of granularity. One guideline is to decompose high-level requirements until the child requirements are all individually testable. That is, a tester can design a few logically related tests to verify whether a requirement was correctly implemented. Count the total number of child requirements, because those are what developers will implement and testers will test. Alternative requirements sizing techniques include use case points and story points. All of these methods involve estimating the relative effort to implement a defined chunk of functionality.
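To make that counting rule concrete, here is a minimal Python sketch (the requirement IDs and decomposition are hypothetical, purely for illustration). It counts only the leaf-level child requirements toward product size, because those are the units developers implement and testers verify:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A node in a hierarchy of functional requirements."""
    req_id: str
    text: str
    children: list["Requirement"] = field(default_factory=list)

def count_testable_leaves(req: Requirement) -> int:
    """Count leaf-level requirements, since those are what
    developers implement and testers test."""
    if not req.children:
        return 1
    return sum(count_testable_leaves(child) for child in req.children)

# A parent requirement decomposed into three individually testable
# children contributes 3 to the product size count, not 1 or 4.
account_mgmt = Requirement("FR-10", "The system shall manage user accounts", [
    Requirement("FR-10.1", "Create an account with a unique email address"),
    Requirement("FR-10.2", "Deactivate an account at the user's request"),
    Requirement("FR-10.3", "Reset a forgotten password via an emailed link"),
])
print(count_testable_leaves(account_mgmt))  # -> 3
```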
Functional requirements aren’t the whole story, of course. Stringent nonfunctional requirements can consume considerable design and implementation effort. Some functionality is derived from specified nonfunctional requirements, such as security requirements; that derived functionality belongs in the functional requirement size estimate. But not all nonfunctional requirements will be reflected in this size estimate, so be sure to consider their impact on your effort estimate. Consider the following situations:
- If the user must have multiple ways to access specific functions to improve usability, implementing them will take more development effort than providing a single access mechanism.
- Imposed design and implementation constraints, such as multiple external interfaces to achieve compatibility with an existing operating environment, can lead to a lot of interface work even though you aren’t providing additional new product functionality.
- Strict performance requirements might demand extensive algorithm and database design work to optimize response times.
- Rigorous availability and reliability requirements can imply significant work to build in failover and data recovery mechanisms, and they can also influence the system architecture you select.
You’ll also find it informative to track the growth in requirements as a function of time, no matter what requirements size metric you use. One of my clients found that their projects typically grew in size by about twenty-five percent before delivery. Amazingly, they also ran about twenty-five percent over the planned schedule on most of their projects. Coincidence? I think not.
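As a simple illustration of the growth metric (the counts below are hypothetical), growth is just the change relative to the baselined size:

```python
def requirements_growth_percent(baselined_count: int, current_count: int) -> float:
    """Percentage growth in requirements since the baseline was set."""
    return (current_count - baselined_count) / baselined_count * 100

# A release baselined at 200 requirements that grows to 250 before
# delivery has grown by 25 percent.
print(requirements_growth_percent(200, 250))  # -> 25.0
```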
Requirements Quality
Consider collecting some data regarding the quality of your requirements. Inspections of requirements specifications are a good source of this information. Count the requirements defects you discover and classify them into various categories: missing requirements, erroneous requirements, unnecessary requirements, incompleteness, ambiguities, and so forth. Use defect type frequencies and root-cause analysis to tune up your requirements processes so the team makes fewer of these types of errors in the future. For instance, if you find that missing requirements are a common problem, your elicitation approaches need some adjustments. Perhaps your business analysts aren’t asking enough questions or the right questions, or maybe you need to engage more appropriate user representatives in the requirements development process.
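One lightweight way to produce those defect type frequencies is a simple tally. This sketch uses the categories named above; the logged findings are hypothetical:

```python
from collections import Counter

# Each inspection finding, classified into a defect category.
findings = [
    "missing", "ambiguous", "missing", "erroneous",
    "missing", "unnecessary", "ambiguous", "incomplete",
]
for defect_type, count in Counter(findings).most_common():
    print(f"{defect_type}: {count}")
# With 'missing' topping the list, this team would revisit its
# elicitation approach, as suggested above.
```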
If the team members don’t think they have time to inspect all their requirements documentation, try inspecting a sample of just a few pages. Then calculate the average defect density—the number of defects found per specification page—for the sample. Assuming that the sample was representative of the entire document (a big assumption), you can multiply the number of uninspected pages by this defect density to estimate the number of undiscovered defects that could still lurk in the specification. Less experienced inspectors might discover only, say, half the defects that actually are present, so use this estimated number of undiscovered defects as a lower bound. Inspection sampling can let you assess the document’s quality so that you can determine whether it’s cost effective to inspect the rest of the requirements specification. The answer will almost certainly be yes.
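Here is a minimal sketch of that extrapolation (the page counts and defect counts are hypothetical):

```python
def estimate_undiscovered_defects(defects_found: int, pages_sampled: int,
                                  pages_uninspected: int) -> float:
    """Extrapolate the sample's defect density to the uninspected pages.

    Assumes the sample is representative of the whole document. Because
    inspectors find only a fraction of the defects actually present,
    treat the result as a lower bound.
    """
    density = defects_found / pages_sampled  # defects per page
    return density * pages_uninspected

# Finding 9 defects in a 3-page sample of a 50-page specification
# suggests at least 3 defects/page * 47 pages = 141 defects remain.
print(estimate_undiscovered_defects(9, 3, 47))  # -> 141.0
```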
Also, keep records of requirements defects that are identified after the requirements are baselined, such as requirements-related problems discovered during design, coding, and testing. These represent errors that leaked through your quality control filters during requirements development. Calculate the percentage of the total number of requirements errors that the team caught at the requirements stage. Removing requirements defects early is far cheaper than correcting them after the team has already designed, coded, and tested the wrong requirements.
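A small sketch of that percentage calculation, with hypothetical counts:

```python
def requirements_phase_containment(caught_at_requirements: int,
                                   leaked_downstream: int) -> float:
    """Percentage of all requirements defects caught before baselining."""
    total = caught_at_requirements + leaked_downstream
    return caught_at_requirements / total * 100

# If inspections caught 60 requirements defects and 20 more surfaced
# during design, coding, and testing, containment is 75 percent.
print(requirements_phase_containment(60, 20))  # -> 75.0
```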
Two informative metrics to calculate from inspection data are efficiency and effectiveness. Efficiency refers to the average number of defects discovered per labor hour of inspection effort. Effectiveness refers to the percentage of the defects originally present in a work product that were discovered by inspection. Effectiveness tells you how well your inspections (or other requirements quality techniques) are working. Efficiency tells you what it costs you, on average, to discover a defect through inspection. You can compare that cost with the cost of dealing with requirements defects found later in the project or after delivery to judge whether improving the quality of your requirements is cost effective.
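Expressed as simple formulas (the counts in this example are hypothetical; note that the number of defects originally present is only knowable retroactively, after later phases reveal what escaped):

```python
def inspection_efficiency(defects_found: int, labor_hours: float) -> float:
    """Average number of defects discovered per labor hour of inspection."""
    return defects_found / labor_hours

def inspection_effectiveness(defects_found: int,
                             defects_originally_present: int) -> float:
    """Percentage of the defects originally present that inspection found."""
    return defects_found / defects_originally_present * 100

# 24 defects found in 16 labor hours of inspection = 1.5 defects/hour.
# If 32 defects were originally present, effectiveness is 75 percent.
print(inspection_efficiency(24, 16.0))   # -> 1.5
print(inspection_effectiveness(24, 32))  # -> 75.0
```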
The second article in this series will address metrics related to requirements status, change requests, and the effort expended on requirements development and management activities.
Also read “Measuring Requirements: Status and Requests for Changes.”
Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers, and webinars. Karl Wiegers is an independent consultant and not an employee of Jama. He can be reached at http://www.processimpact.com.