A product development team’s success or failure hinges on the many decisions it makes throughout a development cycle. Those choices are influenced by a myriad of factors, including balancing timing, regulations, production costs, customer feedback and benefits to the end user.
Given the complexity of products today, it takes multiple team members to weigh in on key decisions. And the number of decision points is only growing as products get more complex, making it even tougher to adequately weigh all the options and trace their impacts.
Decisions under pressure: Making an already complex process even tougher
Those who have been through crunch time know the volatile element hanging over all decisions throughout development is pressure — whether it’s related to deadlines, complexity or the organization. Here are some examples.
Decision Pressure = Not Enough Time
Depending on the number of stakeholders, their schedules, and level of involvement in the development process, receiving input on key changes or milestones can be an extremely tedious and time-consuming endeavor.
Getting multiple parties to sign off on a plan traditionally takes time, and it becomes nearly impossible if circumstances change while working through an especially sluggish sign-off process.
Decision Pressure = Not Enough Data
When debating decisions with team members — whether they’re executives, engineers or interns — good data strengthens the argument.
Running on instincts works for certain things, but if you’re constantly making tough calls without solid data, there’s a high probability of hitting problems — such as rework, delays and failures — later on.
Having visibility into the data used to define original requirements, as well as any new information causing requirements to change, is essential.
Decision Pressure = No Visibility Into Impact
Any new decisions must also take into account how they will impact the original requirements. One tweak to a requirement may cause ripple effects that impact the product in unintended ways.
Issues may be uncovered in the testing stage, but if one tiny change means a complete redesign, you’re going to miss market opportunities and blow past your budget.
Modern Traceability Relieves Decision Pressure
The practice of traceability was created to demonstrate that good decisions were made throughout development. While it’s a concept that has been around a while, it has had to evolve to keep up with a transforming, increasingly complex and time-sensitive product development process.
Building off the gains of the past, modern traceability is a new way of handling the process that’s built to support how people think and work.
Critically, it’s focused not just on the actions of an individual person, but entire teams over time. And this provides many benefits.
Less Pressure from Connected Data = Saved Time
Having a platform to power modern traceability is crucial. At a minimum, it needs to record, share and display data.
Crucially though, it also must be easy to use — so anyone on your team, from any experience level, can take advantage of it immediately with only a small learning curve.
Using modern traceability tools allows you to quickly show how the work being done is related to the company’s overall goals, which speeds up review cycles.
Less Pressure from Connected Data = On-Demand Context
Instead of data only being available at the end of a project or during major milestones (when people have the time), modern traceability’s data is continuously updated by the entire team and it’s always live and accessible.
For example, if all requirements have direct links to the original intent of the organization for a project, it’s easy to see whether a change is in alignment or deviating from that goal.
Simply look upstream from a requirement (or test, or design, etc.) to see why that piece of work existed. This provides valuable context ahead of any decisions to make a change. Have questions about why? Ask the person who added that upstream content directly.
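The "look upstream to see why this work exists" idea can be sketched as a small traceability graph. This is a minimal illustration, not Jama's actual data model or API — the item kinds, fields, and link structure here are assumptions made up for the example.

```python
# Hypothetical sketch of an upstream "why" lookup in a traceability graph.
# Item kinds, fields, and links are illustrative, not a real tool's schema.

from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    kind: str                 # e.g. "goal", "requirement", "test"
    author: str
    upstream: list = field(default_factory=list)  # items this one traces to

def upstream_context(item, depth=0):
    """Walk upstream links to show why a piece of work exists, and who to ask."""
    lines = [f"{'  ' * depth}{item.kind}: {item.name} (added by {item.author})"]
    for parent in item.upstream:
        lines.extend(upstream_context(parent, depth + 1))
    return lines

goal = Item("Handheld use for a full performance", "goal", "robin")
req = Item("Battery life >= 8 hours", "requirement", "alex", upstream=[goal])
test = Item("Battery endurance test", "test", "sam", upstream=[req])

print("\n".join(upstream_context(test)))
```

Starting from the test, the walk surfaces the requirement it verifies and the organizational goal behind that requirement — along with the person who added each, so "have questions about why?" has an obvious recipient.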
Less Pressure from Connected Data = Clarity on Impacts to Requirements and People
While traceability practices have always connected test data to requirements, modern traceability delivers visibility into how people are connected to, and impacted by, changes to ongoing development work.
When a test fails, not only is it possible to see what requirements are impacted, modern traceability also offers the ability for affected parties to be notified immediately. No waiting for a major milestone review when pivots are costly. It amplifies the effectiveness of collaborative work already being done, and works best when you’ve got the right tool to utilize it.
The benefits of modern traceability are increasingly becoming essential for teams serious about creating better products with less waste and timelier cycles.
With modern traceability, you can connect data to make informed decisions faster and do your job better.
Robin Calhoun | November 13, 2017 | How Adopting Modern Traceability Leads to Better Products
Too often products fail due to poorly managed requirements. A requirement is a statement that defines what you are looking to achieve or create – it identifies what a product needs to do, what it should look like, and explains its functionality and value. Without clearly defined requirements you could produce an incomplete or defective product. It’s imperative that the team be able to access, collaborate on, update, and test each requirement through to completion, as requirements naturally change and evolve during the development process.
There are four fundamentals that every team member and stakeholder can benefit from understanding:
Planning good requirements: “What the heck are we building?”
A good requirement should be valuable and actionable; it should define a need as well as provide a pathway to a solution. Everyone on the team should understand what it means. Good requirements need to be concise and specific, and should answer the question “What do we need?” rather than “How do we fulfill a need?” Good requirements ensure that all stakeholders understand their part of the plan; if parts are unclear or misinterpreted, the final product could be defective or fail.
Collaboration and buy-in: “Is everyone in the loop? Do we have approval on the requirements to move forward?”
Trying to get everyone in agreement can cause decisions to be delayed, or worse, not made at all. Team collaboration can help in receiving support on decisions and in planning good requirements. Collaborative teams continuously share ideas, typically have better communication and tend to support decisions made because there is a shared sense of commitment and understanding of the goals of the project. It’s when developers, testers or other stakeholders feel “out of the loop” that communication issues arise, people get frustrated and projects get delayed.
Traceability & change management: “Wait, do the developers know that changed?”
Traceability is a way to organize, document and keep track of the life of all your requirements, from initial idea through to testing. By tracing requirements, you can identify the ripple effects changes have, see whether a requirement has been completed and is being tested properly, gain the visibility needed to anticipate issues and ensure continuous quality, and keep your entire team connected both upstream and downstream. Managing change is important because it prevents “scope creep” — unplanned changes in development that occur when requirements are not clearly captured, understood and communicated. The benefit of good requirements is a clear understanding of the end product and the scope involved.
Quality assurance: “Hello, did anyone test this thing?”
Concise, specific requirements can help you detect and fix problems early, rather than later when they’re much more expensive to fix. In fact, correcting a defect late in the development process, after it’s been coded, can cost up to 100 times more than correcting it early on, while it’s still a requirement. By integrating requirements management into your quality assurance process, you can help your team increase efficiency and eliminate rework.
Requirements management can sound like a complex discipline, but when you boil it down to a simple concept, it’s really about helping teams answer the question, “Does everyone understand what we’re building and why?” When everyone is collaborating and has full context and visibility into the discussions, decisions and changes involved with the requirements throughout the product development lifecycle, that’s when success happens consistently and you maintain continuous quality. Not to mention the process is smoother, with less friction and frustration along the way for everyone involved. And isn’t that something we’d all benefit from?
Melissa Tatham | January 19, 2017 | Requirements Management 101 & Why Successful Teams Do It
Recently I decided it was time I improved my cooking skills. Being an analytical person, I spent a considerable amount of time deciding on an approach. One must have a strategy, measurements for success, and a repeatable pattern of course! (Right?) Given that I like to run repeated experiments, I decided to take a set of dishes I wanted to master, find a few variants (similar recipes), and repeat them until I understood what specific ingredients, tools and techniques were essential.
The act of repeating recipes itself turned out to be the valuable lesson. Following the steps, rather than isolating the science behind each decision, allowed skills to be internalized in concert. There is no single essential technique, or secret ingredient. Having a full toolbox of interrelated skills and past decisions to call upon is what works. While it’s hard to measure the exact causes of success, my larger goal is being met as my cooking improves!
Modern traceability practices in product development — those that allow you to connect data and people across an organization — follow a similar pattern. Some complex situations call for traceability recipes, others just common sense. Traceability is a collection of related tools and behaviors used for a purpose: successful product delivery. It’s flexible, adaptable, and evolving to keep up with the demands of building high-quality products fast. While I might have tried to limit or isolate traceability as if it were a single secret ingredient, I’m finding it’s more valuable to consider its many forms together, as I did learning to cook.
Below are some of the challenges our customers have found traceability can solve. Recipes from master chefs, if you will.
Finding the Source of a Decision – Before you get to work making a change, use traceability to understand the why behind decisions.
Use Modern Traceability to keep conversations connected as context, and do so continuously. This reduces the time required to find the source of past decisions, and doesn’t rely on flawed human memory to answer the question “why did we decide that again?”
What’s connected: Track decisions associated with requirements changes as closely to the requirement itself as possible, such as in the comments. Use tools like Jama’s Review Center to keep all comments related to the same set of data saved in one spot and referenceable later.
Adapting to Challenges and Change – When a major change does need to happen, easily see the ripple effect up and downstream at any point in a project, not just milestones.
Use Modern Traceability to see potentially risky changes coming. When you track and relate requirements as you work, it’s much easier to see the impacted data when a change is proposed. Teams can adapt more quickly because the map of how your product is built exists throughout the project, not just at major milestones.
What’s connected: Associate people to the requirements themselves. Use this to quickly see who’s related to data, tests, requirements, etc. connected 1-2 levels in either direction. Notify connected people automatically when major things change.
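The "see who's connected 1-2 levels out and notify them" behavior described above amounts to a short breadth-first walk over traceability links. The sketch below is a hedged illustration — the link table, owner mapping, and depth limit are assumptions for the example, not any tool's real API.

```python
# Hedged sketch: finding the people affected by a change, 1-2 link levels out.
# The graph shape and owner mapping are illustrative assumptions.

from collections import deque

links = {                     # item -> items directly connected downstream
    "REQ-1": ["TEST-1", "DESIGN-1"],
    "DESIGN-1": ["TEST-2"],
    "TEST-1": [],
    "TEST-2": [],
}
owners = {"REQ-1": "alex", "TEST-1": "sam", "DESIGN-1": "kim", "TEST-2": "sam"}

def affected_people(changed_item, max_depth=2):
    """Breadth-first walk over traceability links, up to max_depth levels out."""
    seen, queue, people = {changed_item}, deque([(changed_item, 0)]), set()
    while queue:
        item, depth = queue.popleft()
        if depth == max_depth:
            continue
        for nxt in links.get(item, []):
            if nxt not in seen:
                seen.add(nxt)
                people.add(owners[nxt])
                queue.append((nxt, depth + 1))
    return people

# Everyone within two levels of REQ-1 hears about the change right away.
print(sorted(affected_people("REQ-1")))  # ['kim', 'sam']
```

In a real tool the `notify` step would be a message or in-app alert to each person in the returned set, instead of waiting for a milestone review to surface the change.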
Managing Risk – Keep track of risks and mitigations as you work, in a shared tool so re-use of similar data is easy and visibility is high.
Use Modern Traceability to reduce the heavy lift of managing risk data. Update your tracking of risk dynamically, tied to requirements, and visible to the entire team working on your product. Generate a view of how you’re doing along the way, and share it long before an audit.
What’s connected: Configure your teams’ traceability map to include links from requirements to risks, mitigations, environmental context, and test data.
“Are we there yet?!” Status Updates – Everyone needs to know how the team is doing, at different times and at different data granularities.
Use Modern Traceability to share dynamic views of progress, at the level of data that makes sense for the audience. Skip generating manual static reports, and instead share live, accurate ones.
What’s connected: For this to work you need a common language, and that is derived by connecting all the levels of product data so everyone has a familiar anchor point. Create relationships from the highest-level market requirements, to draft designs, to requirements, to passed tests in Jama. This gives every user the ability to pick a data type they are familiar with and see progress at that level, whether that means seeing the status of the requirements a marketing goal decomposes to, or looking at all the downstream test statuses for a particular hardware component.
Referencing Similar Past Projects – Maintain data and relationships throughout a project, and by the end that project will be full of rich insights that can be used in the future.
Use Modern Traceability to look at past projects as a whole, across all the data types from requirements to comments. Find projects that were successful, and use that as a starting point for new projects.
What’s connected: Everything! Data should be explorable, like a map, so anyone can self-serve when they want to know the answer to questions like “what did we do last time?”
The product development world is getting more complex and time-pressured, all in a changing environment of rules and regulations. To keep up, your traceability practices need to adapt to take into account how humans and teams actually think. As your team adopts new traceability practices, though, I humbly encourage you to approach it like learning a complex skill such as cooking. It’s not any one practice, ingredient or tradition that leads to success. Think of how many moving parts there are in a successful team release! Integrate traceability skills and tools into daily work in a way that continues to value traditional traceability (we still need reports for regulatory bodies, for example!) but also leaves room for new complex skills to emerge that mirror your specific flavor of product delivery.
Read the Forrester report about the use of Modern Traceability and how it improves developers’ ideas, processes, and software.
Robin Calhoun | November 17, 2016 | Traceability, Product Delivery Data, and Learning to Cook
Using stakeholder, system, hardware and software requirements to build a professional wireless microphone.
In the post below—the last of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them. (See Part II here.)
==Start Part III==
Let’s look at an example product using my audio background. I’m going to take a circuit that goes into a professional wireless microphone—the kind of high-performance microphone you’d see someone on a stage, like a MC or a musician, use.
It’s got to be able to handle a wide dynamic range, meaning it has to be able to record very loud signals and very quiet signals, all with very high quality, and it’s got to be powered off a battery so that it can be handheld, meaning the connection to the system will be wireless.
So we’re going to talk about some of the requirements that go into the chip; one of the main chips that goes into a solution like this.
First we’ll start at the market or stakeholder requirements level. Often, they’re called stakeholder requirements because stakeholders can be more than just customers.
In most product development organizations customers have requirements, but internal teams also have requirements.
So if I’m building a chip, for example, I have quality requirements that my quality department is going to dictate, but will also be influenced by the customer’s requirements.
And I probably have a production test organization that has to test every one of these devices as they go out the assembly line.
These devices are going to have requirements concerning what kind of access they need to internal circuits, and what kinds of circuits they need to enable them to test in a timely manner—things like that.
The development team might also have requirements; for example, they need to be able to reuse certain amounts of existing circuitry to stay on schedule, or requirements around data costs.
The point is that what we call stakeholder requirements is really a broad category. It could be anybody who has an influence on the product development.
Let’s look at some examples—these would most likely come from customers—which would be focused on the functionality and performance of the device. I’ve got three examples here: One is good and recommended; two are not.
We’ll start with the first one. Say we need a product that can input a microphone signal, convert it into two digital audio signals using two different gain stages, and consume less than 20mA while operating.
This is the sort of thing you’d likely hear directly from the customer but not necessarily the sort of thing I’d want to write down as requirements.
This brings up a couple of issues.
First, I’ve got a bunch of requirements mixed together, so it would be easy to miss something, and also it presupposes a certain solution.
It could be this is the right solution, but it assumes certain solutions, so it’s talking about internal details that are over‑constraining the design team.
The team can come up with a different gain structure that works and achieves the results, but doesn’t use 20dB and 0dB of gain. What’s wrong with that? Why do I need to over‑constrain them?
So those are some of the problems with the first one.
Second, customer X needs a 140dB microphone amplifier with a digital output for less than 50 cents. The microphone amplifier shall be low power.
This is the sort of thing marketing might write, because it’s focused on the customer’s request: They need it at a certain cost and everything should always be low power.
It’s very difficult to actually meet these kinds of requirements.
140dB—well, what is that? That’s just a ratio number; I don’t know what that actually is a measurement of. I need some more specificity around that.
As for 50 cents, you have no idea what the solution is yet so 50 cents may or may not be achievable, but it’s good to know.
And then the last one, low power; that can mean almost anything. Low power in one industry could be high power in another, so specificity around what low power means would be beneficial.
So in that case, the first example is more specific and has more detail—although both of the first two are not very atomic so it would be easy to miss things.
The last example talks about two things.
The first one has a problem statement. I love problem statements because they really tie back to the value the solution can offer. It’s giving me some context around what’s in the market today and what the problem is.
It’s saying in the market there are high dynamic range microphones which transmit digitally, and it requires circuitry that’s expensive and large or high power to obtain the necessary performance.
And from that I know that a solution is out there, but that solution is difficult, hard to use or hard to implement, and it can be expensive and it may or may not provide the necessary performance.
You can see how this helps outline the idea of what kinds of problems I need to solve and where the most value is in design.
So based on this, I would know that hitting the audio performance is important, and getting a small solution size that’s low power is also important; those are the key constraints.
To make that specific, the power-consumption requirement might say the solution shall consume less than 75mW while in operation.
Now, the other benefit here is 75mW: it’s an actual power number, whereas in the first example I had a current, but without knowing the voltage I don’t know what the power consumption is, so that’s also not a great example.
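The ambiguity of a bare current number is easy to see with the basic relationship power = supply voltage × current. The supply voltages below are illustrative, not from the workshop:

```python
# Why "less than 20mA" is incomplete as a power requirement: the same
# current implies very different power depending on the supply voltage,
# while "less than 75mW" stands on its own. Voltages here are examples.

def power_mw(supply_v, current_ma):
    """Power in milliwatts from supply voltage (V) and current (mA)."""
    return supply_v * current_ma

for supply_v in (1.8, 3.3, 5.0):
    print(f"20mA at {supply_v}V -> {power_mw(supply_v, 20):.0f}mW")
# 20mA at 1.8V -> 36mW
# 20mA at 3.3V -> 66mW
# 20mA at 5.0V -> 100mW
```

So a current-only requirement becomes testable only once a supply voltage range accompanies it, which is exactly the fix suggested later for the system-requirements example.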
So in this case, the last one is the one I would recommend; it has more constraints and a good set of stakeholder requirements. With that, the design team has a good idea of what their goals are, but they’re not over‑constrained.
Now for the next level of detail: Once we have a set of stakeholder requirements, or at least a draft, we can start looking at system requirements. The system requirements are what we’re actually going to build a product against.
We’re not going to build a product directly against the stakeholder requirements because we could have multiple stakeholders and we need to consolidate their requirements into one set.
Or, certain stakeholders may ask for things that we actually end up not satisfying, but we still know that we can build a successful product.
So that translation from stakeholder requirements to system requirements provides the clarity and explicit decisions around what we’re going to do, what we’re not going to do, and what the actual requirements are for this project.
Now, one of my favorite examples of system requirements is the first one—absolutely nothing—and I see this time and time again.
I can’t count the number of times where I see people skipping the system requirements when they’re building a system.
If you’re an engineer responsible for low-level details, how do you know if those low-level details are the right details? Well, you need system requirements first, so we definitely don’t want to skip this level.
Now, the next one: The solution shall have two differential inputs using instrumentation amplifiers. The instrumentation amplifiers shall be followed by sigma‑delta ADCs. These are really low-level component requirements.
We’ve already jumped to the conclusion that we’re going to have a specific architecture in the hardware. What if part of the solution needs to be software? We haven’t said anything about that and we could already be over‑constraining the design team when some other architecture would be more appropriate.
It could be an instrumentation amplifier is not the best choice. We don’t need to constrain that at this level. So the last example here is really a better example of system requirements.
What about power consumption? The example limits it to 20mA while in operation. As I said before, current on its own is not necessarily the best measure. With this you would typically provide a supply voltage range, so then it would become clear.
What about the signal levels? Stating what the signal range needs to be provides a lot of detail around what the architecture of the design needs to be, without over‑constraining it.
And then, the overall end‑to‑end, signal‑to‑noise ratio: 140dB A‑weighted gives me a very clear statement of the overall performance, again without over‑constraining.
So for system requirements I like that last one.
These system requirements are all focused on the performance of the signal path. There should also be some system requirements here that talk about constraints on size, constraints on packaging and things like that.
Now we know we need to build something that consumes relatively low power, takes in a very wide dynamic range signal and maintains the quality of that. So we can start talking about architectures in response to these system requirements.
Let’s say we use a hardware device that has analog-to-digital conversion with two signal paths, both of which have medium performance but which we can combine to obtain high performance — and that’s actually the common solution in this application.
And then we use a software algorithm to combine those signals, so we’re going to need a DSP to run the software algorithm and process the signal to output this resulting signal of 140dB A‑weighted signal noise ratio.
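The dual-path idea can be sketched roughly as follows. This is a heavily simplified illustration of one way such a combination might work — the gains, threshold, and per-sample switching scheme are assumptions for the example, not the algorithm from the workshop (a production algorithm would blend paths smoothly and align their levels carefully):

```python
# Hedged sketch of a dual-path combination: a high-gain path captures quiet
# signals cleanly, a low-gain path survives loud ones, and software picks
# between them per sample. Gains and threshold are illustrative assumptions.

LOW_GAIN = 1.0      # 0dB path: handles loud signals without clipping
HIGH_GAIN = 10.0    # 20dB path: lifts quiet signals above the noise floor
FULL_SCALE = 1.0    # ADC clips beyond this level
THRESHOLD = 0.8     # switch point, leaving headroom before clipping

def combine(low_path_sample, high_path_sample):
    """Prefer the high-gain path unless it is near clipping."""
    if abs(high_path_sample) < THRESHOLD * FULL_SCALE:
        return high_path_sample / HIGH_GAIN   # undo the gain to restore level
    return low_path_sample / LOW_GAIN

# A quiet input (0.01) comes back via the high-gain path...
print(f"{combine(0.01, 0.1):.2f}")   # 0.01
# ...while a loud input (0.9) would clip the high-gain path,
# so the low-gain path is used instead.
print(f"{combine(0.9, 1.0):.2f}")    # 0.90
```

The point of running this on a DSP is that the selection (or, more realistically, a smooth crossfade) happens in software, which is why the system needs software requirements as well as hardware ones.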
Based on that, we can now talk about the hardware‑specific requirements.
Here are some examples of different possibilities. The first one is a block diagram of the architecture.
I’m visual; I love block diagrams. I love schematics because they’re very intuitive. I can relate to them very well. They don’t make good requirements, unfortunately. It’s very difficult to test a diagram. It’s very difficult to make sure you didn’t miss anything in a diagram. So having a diagram on its own is not sufficient.
A really good solution is a diagram complemented by a set of requirements that attach to every important detail of that diagram.
That way, visual people have something to see, but we also have atomic requirements that we can test against and trace to make sure we didn’t miss anything, and also so we can manage changes.
If I make changes to this diagram based on changes to the architecture or customer requirements, it might be hard to actually know what those changes were, whereas if I have individual requirements I can track, I can easily know.
The second example is just a description of functionality in response to the requirements. It says what the signal path of the device is; the architecture describes a specific part number. That belongs down in the design descriptions. It’s not hardware requirements.
So the last example is one I like for hardware requirements. We’re talking power consumption; we’re getting more specific.
We know that I’m building a chip, the power consumption is going to vary and I want to know what it’s typically at and what its maximum can be, so we’re specifying that.
Again, we’re repeating the input signal level because that input signal level was a requirement on the system that’s also a requirement on the hardware.
There is some duplication, but it’s there to explicitly say that this is a requirement on the hardware. I won’t see a requirement for 17uV RMS to 1V RMS on the software, because the software is never going to know about volts; it’s going to know about digital signals.
So even though there is duplication it’s done to make the decisions and the traceability explicit. So then I have requirements on the specific architecture.
Now that we’re down at the low-level and component requirements, the hardware requirements, we can start talking about specific solutions. We’ve got to get into the details of what the solution is actually going to be.
So in this case, in the hardware requirements, you’ll likely see requirements that dictate a certain solution, but that’s okay because it’s quite likely that the design team is the one writing these requirements, so they’re the right ones to make that decision.
As you probably have guessed by now, the last one is my recommendation for well‑written hardware requirements.
The last example is software requirements.
I see a lot of teams that just skip software requirements entirely and go straight to writing code. It’s really fun to write code, really satisfying, but if you don’t have any requirements, you’re starting without clear directions. We need some requirements.
The second example, some descriptions of functionality, is written as a shall statement. It sounds like a requirement but I’ve got a bunch of stuff mixed together.
I’m talking about two signals. I’m talking about what their performance is. I’m talking about the output. There is too much stuff mixed together here, so the third one is the recommendation: talking about specifics.
I am going to develop this software for a specific DSP, the Tensilica HiFi 3. It’s going to perform a specific function: it’s going to take two audio signals and combine them into one.
(Note: I’d probably need more detail around this. This is probably not enough by itself but I didn’t want to fill the screen with the requirements.)
And then, what’s the sample rate going to be? This algorithm is going to be designed for a specific sample rate or multiple sample rates. It’s an important characteristic of the algorithm. Let’s make sure that’s captured in requirements.
So that is exactly what I would recommend, and in each of these there are a lot more requirements that go along with this. These are just a couple of examples in each category.
Many teams mix up requirements and specifications. It’s very common.
You need to make sure you have a clear understanding of each of them and when to use them. It’s not always easy to decide which one is which, so it’s absolutely critical to have that discussion with your team.
What I see a lot of teams doing is skipping levels of hierarchy, jumping straight from high-level customer requirements down to detailed requirements or detailed specifications. Do that and you’ll have a very difficult time proving that you built the right thing.
It could be you’re operating fast and loose and you’re okay with that. Maybe that’s okay for a very small team in a very small organization. But in every other situation, it’s unlikely that you’d be building something so uncomplicated that you could get away with it. It’s high-risk.
So make sure that you have at least stakeholder requirements, system requirements and some kind of detailed specifications.
That’s the bare, bare minimum for any kind of product. More likely you need more.
What I recommend: Make sure you have a clearly defined process with clear levels of your requirements. If you don’t think you have that, discuss it with your team. What levels do you need? Which one of those diagrams [more in Part I and Part II] would be appropriate for your project?
And then there’s the scary question: Do you even use requirements?
Some teams plow ahead without requirements. Think about what kind of problems that can cause:
Maybe you’ve built products that have not been successful, that maybe needed a late change or maybe even failed, and you had to develop a new product in order to be successful.
Perhaps you’ve been in a situation where you later learned you missed some important details along the way and realized that you barely got away with it. That’s high-risk too.
When you start with an understanding of the roles different levels of requirements perform, you’re less likely to invite risk and add complications during development, and are much more likely to build the right product.
==End Part III==
Marc Oppy | September 22, 2016 | How to Write Good Requirements, Pt. III
Key differences between requirements and specifications, why different levels of requirements are important, and how to establish a clear requirements hierarchy you can use and change to suit any product, version or variant you build.
In the post below—the second of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them. (See Part I here.)
==Start Part II==
These days, products are so complicated they can only be used in specific scenarios and for specific applications, which means that if you don’t build a product right, chances are there’s no home for it.
Potentially millions of dollars of development efforts, not to mention sales, are lost if you’re unable to thoroughly keep track of the requirements all the way through.
So we’re going to talk today about some ways to avoid those problems and really set yourself up for success.
What’s most important is having a systematic process to follow; you want a logical progression that takes you from the high-level to the low-level details in a structured way, because that leads to the best results.
It’s actually more important than how you write the requirements.
So the first key point I want to make concerns differentiating between requirements and specifications, and here the word “specifications” is a nebulous term. It’s used differently in different industries.
In many, “specification” simply means a document that contains something: a requirements specification, a verification specification or a list of verification test cases.
By the way, I’m using the term “specification” here as the semiconductor industry does. The specification is a list of the performance, the functionality and the features of the solution; it’s the end result. It documents what you actually produce.
In many cases, there is a document called a datasheet that’s the customer-facing version of this. So if you’re familiar with datasheets, think of the specification as the datasheet.
For the purposes of this discussion, here are the differences between requirements and specifications:
Requirements
Requirements reveal what the product needs to do
The tool we use to identify the right product to build and to ensure we’re building it right
The tool we use to communicate internally about what the product needs to do and how it needs to work
Specifications
Specifications detail what the product actually does
Specifications are not useful to identify the right product to build
The tool we use to communicate externally about what the product is and how it works
Typically, requirements are a little higher-level and less explicit than specifications.
But when you combine the two, what you get is a clear statement of a need and a clear statement of what you’re going to do to satisfy that need.
In doing so, you document exactly what you’re doing and why, and this helps capture the decisions that are made along the way and why they’re made.
However, what I typically see is a document that has intermixed requirements and specifications.
It’s an easy and logical way to write, but it’s very difficult to refer back to afterward for facts and analysis.
So what you end up finding out is, although what you did made sense at the time, you missed some things along the way. There were some high-level requirements that you’d forgotten about.
For example, I was recently working on a product that had only 30 requirements, but discovered that when I wrote the documentation and the specification, I missed one, even though I had written the requirements and solution myself on the same day.
It’s very easy to miss things without a systematic approach in place.
I found what I’d missed only because I’d built the traceability from my specification back to the requirements. I had to prove that I’d met every single requirement and that every one of my specifications was there because of a requirement.
By doing that, it reminded me of something that I missed, so it really saved me some trouble.
This oversight may have come up at some point during reviews, but maybe not, because it’s impossible for anybody to remember every single detail.
Having the requirements separate from the specifications, with traceability links between them, is critical for making sure you don’t miss anything or end up with features you don’t need, which add cost or schedule delays to the product.
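To make the idea concrete, here is a minimal sketch in Python of the two-way check that separate requirements, specifications and trace links make possible. The item IDs and data structures are invented for illustration, not taken from any particular tool.

```python
# Requirements the product must satisfy (IDs are illustrative).
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each specification item records which requirement(s) it satisfies.
spec_traces = {
    "SPEC-1": ["REQ-1"],
    "SPEC-2": ["REQ-2", "REQ-3"],
    "SPEC-3": [],  # a spec with no upstream requirement: extra scope
}

def check_traceability(requirements, spec_traces):
    """Return requirements with no spec, and specs with no requirement."""
    covered = {req for reqs in spec_traces.values() for req in reqs}
    missed = requirements - covered
    untraced = [spec for spec, reqs in spec_traces.items() if not reqs]
    return missed, untraced

missed, untraced = check_traceability(requirements, spec_traces)
```

Running the check on this sample data flags SPEC-3 as a specification that exists for no stated need, which is exactly the kind of oversight the manual review in the story above nearly missed.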
Separating the two is difficult if you’re not used to writing that way. Often, I will write, or other people will write, in a traditional kind of document style, and then extract from that what the requirements and the specifications of the solution are.
In other words, you can take an iterative approach to this, and that’s totally valid.
Now, the next question is, how do we get to the right solution? The answer is, by having a clear hierarchy.
So what I’m showing here is a basic hierarchy with market requirements and product requirements. It’s probably the simplest level of requirements you can possibly have in any product development.
The market requirements capture what the customer needs and what the market as a whole needs, and the product requirements state what the product we’ve agreed to build must do.
We can trace back to those customer requirements in such a way that we can prove that the product we’re building is going to satisfy the market requirements, and that we don’t build anything extra.
This is the basic minimum.
You can think of each as a documentation task, but they also follow the phases of your project. When you’re capturing market requirements you’re also thinking about what possible solutions you could be developing to satisfy those market requirements.
You’ll likely come up with product concepts, or maybe just one product concept, depending on the situation. And so you would capture, in addition to the requirements, some architectures or concepts that go along with that; that’s the “black box” for all the market requirements.
Same for the product requirements. Once you have them—or while you’re writing them—you’re thinking about the architecture of your solution and the trade-offs you might need to make.
This informs what requirements you can satisfy and which ones you can’t.
By writing the requirements in conjunction with coming up with a design, when you’re done, you have a clear statement of requirements and a solution that can meet them.
Before I came to Jama I was an engineer, coming up with new products, and I sometimes focused only on the product concept and the product design, and skipped a lot of the requirements.
It’s easy to fall into that trap. Engineers love solving problems. We don’t love writing down the requirements for solving those problems. But without those requirements we don’t know whether our solution is the right solution.
Some teams might have, say, only market requirements and no product requirements, or vice versa.
But what they don’t have is a clear distinction between what the customer asked for, or what the market needs and what the team is doing to address both.
As a result, it’s difficult to know whether they’re building the right thing or not.
Now, if your product is complicated you add hierarchy to this model.
Let’s say, for example, I’m doing chip development and my chip has a whole bunch of different internal blocks that are each fairly complicated in and of themselves.
Well, then I can add another level of hierarchy, which I’ll call block-level requirements.
A block requirement would probably be something specific to a chip, or to a system where you have a hardware device made up of sub-circuits.
For example, say I have a digital chip that’s a microcontroller. One block might be a digital interface. Another might be the memory. Another block might be the analog interface.
Or, say I’m building a bigger system, an Engine Control Unit, or ECU, for a car. The ECU is my system. And that ECU is made up of a microcontroller and interfaces; they are components of the system.
Whatever you’re building, you want to break it up into logical pieces; those are your components, for which you’ll want to write component requirements.
So product requirements would describe what is needed from this whole chip overall, and that chip, for the purposes of the requirements, is really best thought of as a “black box.”
But then the block-level requirements say, now that we have a product architecture in mind, what the requirements are for the individual pieces. The designers are going to go and design against those block-level requirements.
For example, product architecture says we’re going to have an ADC, an Analog Digital Converter.
We would then need block-level requirements to say what the performance for this ADC is: What does the power consumption need to be? What does the size need to be if it has to fit into a certain space? What kind of input and output signals does it need to have?
Things like that.
And then the block design would tell me how this ADC is architected. What’s the topology? What circuit components are coming together to satisfy those requirements?
Again, having both of those pieces of information is critical.
In this example, what sometimes happens is the product requirements section gets skipped. People already know the architecture, to a certain degree, and so they jump right to the block-level requirements.
The problem with that is market requirements are very high-level and block-level requirements are very detailed, so skipping the product requirements means teams have to bridge that gap without forgetting a single thing while building.
But the most serious problem is having no traceability back to product requirements; without it, teams can’t confirm the connection between block-level requirements and market requirements.
Without traceability, it’s difficult to know for certain if this block-level requirement traces to that particular market requirement.
You end up missing things, so each of those levels is important.
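The value of keeping every level shows up when the trace links are explicit data rather than memory. As a hypothetical sketch (all requirement IDs are invented), each lower-level requirement records the requirement it traces up to, so any block-level item can be walked back to the market requirement that justifies it:

```python
# Upstream trace links: each requirement points at its parent.
# Market-level requirements have no parent (None).
trace_up = {
    # block-level -> product-level
    "BLK-ADC-POWER": "PRD-POWER",
    "BLK-ADC-SIZE": "PRD-AREA",
    # product-level -> market-level
    "PRD-POWER": "MKT-BATTERY-LIFE",
    "PRD-AREA": "MKT-FORM-FACTOR",
    # market-level (the top of the hierarchy)
    "MKT-BATTERY-LIFE": None,
    "MKT-FORM-FACTOR": None,
}

def trace_to_market(req_id, trace_up):
    """Follow upstream links until a requirement with no parent is reached."""
    seen = set()
    while trace_up.get(req_id) is not None:
        if req_id in seen:
            raise ValueError(f"cycle in trace links at {req_id}")
        seen.add(req_id)
        req_id = trace_up[req_id]
    return req_id

print(trace_to_market("BLK-ADC-POWER", trace_up))  # MKT-BATTERY-LIFE
```

If the product level were skipped, the block-level entries would have to point straight at market requirements, and the intermediate architectural decisions would live nowhere at all.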
Now, it could be you are building something even more complicated, so you need to add levels of hierarchy.
Basically, the more complex your build, the more hierarchical structure you want in place.
Here’s a system example; in this case we have both hardware and software, so we have system requirements that describe the overall needs of the system, and then we have an architecture that says, what’s going to happen in hardware, and what’s going to happen in software.
And based on that architecture, we can then write requirements for the hardware and for the software.
We can architect the hardware and the software, and then we can again write low-level requirements for the individual pieces of hardware and the individual pieces of software, and then write the design details that go along with each of those blocks.
So again, you take a systematic approach going from high-level customer needs all the way down to design, and you just adjust this based on the levels of complexity of your product.
And as your products get more complicated, it’s entirely possible that you start off with something simple and you add complexity to the next generation, and maybe you add even more complexity in the next generation, so you have to adjust the model based on your product complexity.
But it’s very unlikely you’ll use the same model forever.
When I was an engineer, we were really more focused on market requirements, product requirements and the block requirements model. Recently I’ve seen a lot more of the system requirements, especially in the chip industry.
Many of the products coming out of chip companies these days are more like systems than ever, so this model ends up being a good place to start.
You can cut out pieces out that you don’t need, but make sure you have accounted for all the pieces that you do need. Having that discussion with your team is really critical to setting up the right model.
==End Part II==
Marc Oppy, 2016-09-15: How to Write Good Requirements, Pt. II of III
Finding the requirements management sweet spot means being concise, specific and parametric, and answering the question, “What do we need?” rather than, “How do we fulfill a need?”
In the post below—the first of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them.
==Start Part I==
Today I’m going to be speaking about product definition and how to ensure that you are using requirements correctly and to maximum benefit.
A bit about myself: For the first 10 years of my career I worked in the analog and mixed-signal semiconductor industry, first as an applications engineer and later as a product definer.
As a product definer I became a customer of Jama Software. Adopting Jama to manage requirements completely revolutionized how I built products, so much so that a couple of years ago I joined Jama Software to help as many teams as possible benefit from using Jama.
Product development teams face many challenges, but the ones we are going to focus on today are how to systematically navigate the path from a high-level market need to the specification of an actual product.
Specifically, we’ll be looking at this challenge in the context of development teams that work from a specification, but the concepts apply to all development.
So, the ultimate question is, really, why do we write requirements at all?
Requirements are a tool that guides the journey from the vast number of possibilities for products we could build, to determining whether the product we want to build is going to be successful or not, down to picking exactly the right product we’re going to build.
Particularly in systems and hardware development, designers are typically not able to start developing a product until there are sufficiently detailed requirements, or possibly even a specification.
So for the purposes of the discussion today, I’m going to use the term product specification to mean the detailed document that describes what the actual product is, the final result of the development process.
One way of looking at this challenge is shown below. Imagine that the orange circle is every product that a given team knows how to build, and the blue dot is the exact specification for a particular product. The specification defines exactly which product the team will produce.
Let’s say we have a product we’ve developed, and we have the specifications that go along with it that tell us exactly how it works, how well it performs, its size and cost; requirements are the guide we used to get there.
In many industries, you have to have a fair amount of detail fleshed out before teams and resources get assigned and dedicated, so there are milestones to meet as you go through this. So, you’ll get some level of detail and you have a milestone to review it. You’ll get more detail; you’ll have more milestones.
Usually in the process, companies follow step-by-step guides for getting into the details, but there is a lot of room for interpretation, so we’ll talk about some of the structures you can apply along the way.
In more iterative environments, perhaps more software-type environments, you might actually go through this loop much, much more quickly, so you might do all of this but still focus on a very specific function and do it in a couple of weeks.
The same concepts are still applicable; it all depends on the scope of the product, time frames, interactions, and things like that. What we’re going to talk about here, in terms of systematically defining what to build, applies to almost any kind of product.
So, one of the first steps is defining the overall kind of space that the product team is going to operate in, and that’s typically done with a market definition or market requirement document, or something along those lines.
There are different approaches to doing that. The approach I recommend is defining that solution space with problem statements and constraints, because problem statements are clear descriptions that answer the question, “What is this product supposed to do such that it adds value to the market?”
So, if the product solves a problem that a lot of customers have, it should sell well, and if it does so while meeting the constraints, that should also result in it selling well. This is kind of the starting point for our product development process.
This might include, in addition to the problems that it’s solving, certain amounts of functionality that are required for the industry. The right amount of performance, functional and non-functional types of requirements, schedules, and things like that. This is really a kind of visual way of thinking of a market requirement set.
Important things here are not over constraining the development team or the design team. If these requirements are so specific that we can only build one product, then there is not a lot of room for the teams to innovate. If the requirements are really vague, the teams don’t know what to build, so we’ll talk through that next.
So the first example—and this is a common scenario that a lot of teams face— is that the solution space or the market requirements are so vague that the design team doesn’t know where to start.
It could be they’re too high-level, say, a problem statement with no constraints, in which case the design team doesn’t know whether the solutions that they can think of are valid solutions to those problems or not.
From the perspective of a designer, the detail that a marketer can provide is usually insufficient. So many questions remain unanswered that either they go and build something that doesn’t end up resulting in a successful product, or they ask a million questions that the marketer doesn’t have answers to.
The problem is that while the blue space is completely enclosed here, it typically isn’t. Many more detailed requirements are needed to fully enclose the box. Specifying those details completely will also likely dramatically shrink the blue area.
Even though this approach is clearly problematic, many teams fall victim to it. Designers may feel they can’t trust marketing because they know they aren’t getting enough detail, and often have experienced failed products as a result. Marketing doesn’t understand why designers can’t just get on with building the product and why the products are missing the target, late, or both.
It could be there are other factors not considered, and so a lot of times this results in design teams starting to ask lots and lots of questions, which is good. It’s better they ask those questions than don’t, but it does mean that you possibly spend a lot of time iterating in this phase of the project because there’s not enough definition around, well, what problem are we trying to solve and what are some valid constraints around that?
The other possibility is the design team might say, “Hey, we can build whatever we want,” and they proceed without asking questions. It might be brilliant or it might be a complete failure, but because there are no controls for predictability, it’s definitely a high-risk situation.
Another common scenario is that the market requirements are so specific that the design team doesn’t have room to innovate. The market requirements could be a copy of a competitor’s specification with a couple of lines changed to say, “Build me one of these.” Or it could be a previous specification produced by the company with a few modifications that say, “Improve everything by five percent.”
Those kinds of requirements documents tend not to lead to a lot of innovation. It’s okay to have them, because sometimes you need to make a derivative part that’s just a quick improvement to an existing device; this can be very successful in the market.
But if you have too many of those types of products it becomes more difficult as time goes on to react quickly to new requests because you don’t have new technology. By focusing on modifying your existing technology, you’re likely falling behind in the competitive landscape with your customers.
You want to have a good mix of products that are defined in such a way that innovative technology can be developed. Design teams can go solve creative problems using their engineering skills, and that’s really the biggest benefit to the overall organization; it also makes the engineers happier because engineers love solving problems. If you just say, “Build me one of these,” they’re usually far less satisfied and far less enthusiastic about working on a project.
All right; so the third common scenario where this can go wrong is you define what your problem statement is, you have constraints, you think you’ve got a really well-defined solution space, and the team goes off and builds something. And along the way the team finds there are challenges in the design, makes some changes, and builds a product that simply doesn’t meet the original requirements.
Marketing may have provided high-level problems to solve with constraints initially, but the focus moved to agreeing on a specification. As design challenges come up and trade-offs are made, the specification slowly drifts outside of the blue “acceptable products” area.
The result is that the team is so focused on building that they end up not building the right thing.
While this can be addressed by periodically reviewing the specification against the high-level requirements, there are likely many details in the specification that do not clearly trace to any high-level requirements.
As a result, the team doesn’t actually know they are building the wrong product.
When teams say, “Okay, we can make that change,” but don’t have a “live” source of traceability back to the requirements, problems are guaranteed.
This is a very common scenario when managing the process in documents and spreadsheets because it’s very difficult to actually have traceability in those kinds of tools.
And what happens as teams go through the discussions and make the compromises, is that they stray from solving the original problem, meeting the original constraints and focusing on the original solution space.
Now, sometimes you get lucky and you can still sell what you’ve built, but what I’ve found in the industry overall is that as time has gone on, things have gotten sufficiently more complicated that the chances of one of these products being successful are decreasing.
It used to be that if I got things wrong, using what I built for another application was possible. That’s rarely the case with today’s complex systems.
==End Part I==
Marc Oppy, 2016-09-08: How to Write Good Requirements, Pt. I of III
“Delivering a release is a little like wrapping up a present and giving it to our customers” – Maarika Krumhansl, Release Manager at Jama Software
When I mention to folks outside of Jama that I’m a Release Manager at Jama, the most common reaction is “Interesting!” and then shortly thereafter “…What does a Release Manager do?”
Release Management means slightly different things at different companies. Some companies employ DevOps Release Engineers instead of Release Managers. Some companies roll the Release Management function into the Product Team. Other companies have their build, test, deploy, documentation, and customer communication so streamlined that they have no need for a Release Manager. I personally come at Release Management from a DevOps background. In a previous job as a Deployment Developer I had the opportunity to build that company’s first Continuous Integration pipeline. I was also responsible for releasing and packaging a Java application for production deployments. I am a huge advocate for Agile methodologies, and my Release Management philosophy is heavily based on personal experience as well as learning from the industry leaders.
Regardless of who or what process performs the role of Release Management, it is based on three primary principles: Traceability, Reproducibility, and Measurability.
Traceability: The ability to see how one piece of information – e.g. a requirement, a story, a git commit, an automated test run – connects to any and all other relevant pieces of information in a release, either upstream or downstream in the item hierarchy (or forwards/backwards in the workflow). For example, a release is traceable if any member of the organization is able to see which epics are shipping with a release, the specific stories in those epics, and any bugs or defects slated to be fixed. For each ticket (story or defect) in a release, it is also possible to determine exactly which git commit(s) represents the work done to satisfy the requirements, who performed the code review and the desk review, and whether the automated unit-, integration and functional tests passed against that commit.
Reproducibility: At its core, this is about the ability to generate an exact copy of (i.e. reproduce) a release of Jama. A release is made up of multiple components, including the actual binary artifacts, the deployment method/scripts, the documentation, and the environment configurations. Binary repositories (e.g. Nexus, Artifactory, etc) provide reproducibility of artifacts, and by keeping build and deployment scripts – as well as standard environment configurations – in source control (“Infrastructure as Code“) we can guarantee reproducibility of installs / instances of a release.
Measurability: The ability to determine the “state” of a release at any moment, either in development or in production. While a release is in development, it is important for all stakeholders to have a clear view of the progress being made and the “health” of a release, including things like: How many tickets are still open/in development/in testing? What is the test coverage? What are the results of the automated regression and performance testing and how do the results compare to previous runs? Once a release is live, it is our responsibility to monitor and measure its performance compared to previous releases and to remediate any unexpected behavior (if needed). Numerous tools exist to help with application performance monitoring, server-side resource monitoring, logging and parsing of errors, etc, but these tools are only helpful if 1.) they are measuring the right things, 2.) they have visibility (e.g. alerts/triggers set up, well-designed dashboards, people actually looking at them, etc.) and 3.) they are reliable (e.g. provisioned with enough resources, few false positives).
It is the Release Manager’s job to ensure the Traceability, Reproducibility and Measurability of software releases. Ideally this is done by implementing tooling and automation but in the worst case some of it must be done manually until the pain of NOT automating the task is far greater than the up-front cost of scripting it. Case in point: Currently at Jama the process of producing a Manifest Check (i.e. the document that proves that each ticket slated for the release has at least one git commit implementing it, as well as verifying that each git commit is implementing a ticket planned for the release) is manual and tedious, involving:
running a bash script to diff the commits in the current release from the last release,
parsing the commit messages for ticket numbers and loading those ticket numbers into a spreadsheet,
cross-checking the tickets in the spreadsheet with the tickets intended for release, as reported by our internal install of Jama,
working with Engineering and Product to resolve any discrepancies by either adding tickets to the release that were overlooked originally, or identifying which commits may have implemented code for multiple tickets.
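A sketch of what an automated version of those steps might look like, in Python rather than bash. The ticket-ID pattern and the commit messages here are assumptions for illustration; in practice the messages would come from a git log diff between the two release tags:

```python
import re

# Tickets the Product team planned for this release (IDs invented).
planned_tickets = {"PROJ-101", "PROJ-102", "PROJ-103"}

# In practice these would come from something like
# `git log lastrelease..thisrelease --pretty=%s`.
commit_messages = [
    "PROJ-101 fix login timeout",
    "PROJ-102 add audit trail",
    "refactor build scripts",  # no ticket reference at all
]

# Assumed ticket-numbering convention, e.g. "PROJ-123".
TICKET_RE = re.compile(r"\b[A-Z]+-\d+\b")

def manifest_check(planned, messages):
    """Cross-check planned tickets against tickets referenced in commits."""
    referenced = {t for msg in messages for t in TICKET_RE.findall(msg)}
    missing_commits = planned - referenced   # planned, but no commit found
    unplanned = referenced - planned         # committed, but not in the plan
    untagged = [msg for msg in messages if not TICKET_RE.search(msg)]
    return missing_commits, unplanned, untagged

missing, unplanned, untagged = manifest_check(planned_tickets, commit_messages)
```

On this sample data the check surfaces PROJ-103 as planned work with no commit, and one commit with no ticket reference, which are exactly the discrepancies the manual process resolves with Engineering and Product.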
As you can imagine, this process is time-intensive and non-scalable, since Jama already has multiple code repositories. As we plan to move towards a Service Oriented Architecture (e.g. “microservices“) the number of repos is expected to explode. Clearly the current manual process is no longer tenable. At a recent Jama Hackathon a team of developers and QA engineers developed a proof of concept service that will automate all of the tasks in the above list, and Product has added this work to the overall product backlog (to be prioritized against other strategic initiatives) as an add-on service for Jama.
What I love about Release Management at Jama is the diversity of responsibilities and technical challenges. It is fascinating to witness and assist Jama transform from a monolithic architecture to a service-oriented architecture, and ultimately support a more containerized, continuous deployment paradigm for our Hosted releases. Additionally, I am learning an enormous amount about the state of the art in on-premises deployment technology – i.e. Replicated and Docker. As a Release Manager concerned with Traceability, I am fortunate to be able to use Jama to build Jama, since this is exactly what Jama was built to do! I work with people from across the organization daily as I perform general project management for releases, and I get to be a spokesperson for process improvements and CI optimization, helping to drive initiatives such as modernizing our binary repo and establishing and enforcing our git release branching strategy. We are also starting to implement slow rollouts of some of our features to small subsets of our customers, also known as “Canary Releases“, and we are pleased with the feedback and data we have been receiving about this effort.
Maarika Krumhansl, 2016-07-20: How to Ensure the Traceability, Reproducibility and Measurability of Software Releases
Connecting your requirements to downstream test plans and test cases is crucial to end-to-end traceability. Jama makes it easy to trace the relationships between your requirements and their stakeholders, test cases and test results to ensure that you have full and automated coverage.
As part of our developer community support we’ve just released a script that utilizes our REST API to relate test cases to their test runs in Jama, and makes these relationships visible in coverage explorer and the resulting data available in reporting.
Connect test cases to test runs.
We’ve made this script available on the Jama Software GitHub account. This simple script is a place to help you get started and we hope you’ll do great things with it! You can also join our open support community to ask questions (or offer ideas!) about anything Jama or product development. You can join in the conversation about our REST API specifically here, where there are some great, active topics right now. We invite you to comment in the REST API group if you use this script and improve upon it.
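For a sense of the shape of such a script, here is a minimal, hedged sketch in Python. The base URL, endpoint path and payload field names are assumptions made for illustration only; consult the published script and the REST API documentation for the real interface.

```python
import json
import urllib.request

# Hypothetical instance URL; replace with your own.
BASE_URL = "https://example.jamacloud.com/rest/latest"

def build_relationship_payload(from_id, to_id, relationship_type):
    # Assumed payload shape: relate one item (e.g. a test case)
    # to another (e.g. a test run).
    return {
        "fromItem": from_id,
        "toItem": to_id,
        "relationshipType": relationship_type,
    }

def relate_items(from_id, to_id, relationship_type):
    # POST the relationship; this performs a live network call
    # and assumes authentication is handled elsewhere.
    data = json.dumps(
        build_relationship_payload(from_id, to_id, relationship_type)
    ).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/relationships",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The real script on GitHub is the authoritative reference; this sketch only illustrates the general pattern of building a relationship payload and posting it.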
Jama Software, 2015-12-03: Just released: Free script to ensure full test coverage
As a System Engineer managing requirements, do you ever feel like you’re playing a game of Topple? First, you start with a board that is relatively balanced, but depending on where you put the pieces it can quickly get off kilter. As the game evolves you are adding more and more pieces to the board. Now let’s make the game harder. Some of those pieces weigh more than others, so putting one green piece on the board means adding two red pieces to balance the load. Just for fun, let’s now tie a few of those pieces together with some string, meaning you can’t add or move one piece without moving another. And are you really playing this game all by yourself?
Managing requirements can feel like a game of Topple.
Finding balance between competing requirements can seem just this precarious. If you’re building a medical device, you are likely weighing human safety over product aesthetics. When you add cost to one area of the product you have to adjust another area to keep cost in balance. And likely you’re working with a team of engineers who are building this product and must stay in close communication with them in order to deliver a complete, quality system. And as your product evolves you’re receiving requirements from many sources: business, product, hardware, and software.
How do you manage all of these competing priorities, conduct effective impact analysis and keep all stakeholders and developers in alignment? You are likely using some sort of complex matrix to keep track of the individual requirements and their relationships. It could be in Excel or even in a legacy RM tool. And this may work if all requirements were created equal, or if you’re the only person who needs to know about the impacts to the complete system.
But likely, that spreadsheet is not working.
Here’s what that spreadsheet on your desktop cannot do:
manage the complex web of traceability to truly understand the relationships between requirements and the people who are responsible for them
quickly find who and what are impacted by changes to the system
ensure that each requirement is validated and verified, proving that when the product is complete, you are delivering what was asked for and that the system has been thoroughly tested
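To make the second bullet concrete: finding who and what are impacted by a change is a graph traversal over trace links, which is exactly what a flat spreadsheet cannot express. A minimal sketch of downstream impact analysis, using hypothetical item IDs and a simple breadth-first search:

```python
from collections import defaultdict, deque

def impacted_items(trace_links, changed_item):
    """Walk downstream trace links from a changed item and return
    every item reached, i.e. everything the change may impact."""
    graph = defaultdict(list)
    for upstream, downstream in trace_links:
        graph[upstream].append(downstream)

    seen, queue = set(), deque([changed_item])
    while queue:
        item = queue.popleft()
        for nxt in graph[item]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

# Hypothetical trace chain: business req -> system req -> software reqs -> test
trace_links = [
    ("BUS-1", "SYS-1"),
    ("SYS-1", "SW-1"),
    ("SYS-1", "SW-2"),
    ("SW-1", "TC-1"),
]
print(impacted_items(trace_links, "SYS-1"))  # ['SW-1', 'SW-2', 'TC-1']
```

A requirements tool maintains this graph for you as links are created, so the "what breaks if I change this?" question becomes a query instead of a manual hunt through rows and tabs.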
In my work as a Jama consultant, I’ve seen our customers solve these very problems with Jama. Sirius XM, for example, picked Jama for traceability and alignment from requirements to testing. They wanted visibility into change so that they knew what was impacted, and they needed to eliminate the chaos of spreadsheets and emails.
Our partner, Deloitte, first implemented traceability with Jama to get visible coverage from requirements through test. Then, they connected their many stakeholders to the requirements those people owned, and, as questions came up throughout development, the right people could be pulled into conversation, within the Jama application, to get to a decision quickly. These changes were captured along with the discussion right in Jama so there was a history of decisions that linked back to the original requirements requests.
One of the things I often hear in my work is a belief that implementing a new system will only increase the complexity of an already difficult-to-manage process. I understand the concern, and I’ve written before about how to ensure adoption of a new enterprise application. Jama’s ease of use makes it simple for teams to adopt, especially when you compare it to the chaos of documents, email and file-sharing applications. In our next post, Matt Mickle, another Jama consultant, will discuss the characteristics of the Jama application that make it easy to transition from document-based traceability to visible coverage in a collaborative system.
Jama Software | 2015-09-09 | Managing Competing Requirements in Jama
Open just about any business management book or blog and the topic of accountability—and the eternal quest for it—will turn up. But when your world revolves around managing the creation, iteration and release of new products, traceability much more accurately defines what you seek.
The frustrating fact is that, for most product managers, trying to implement traceability concurrently with build processes is like trying to wish a unicorn into existence. It’s something you aspire to see, but repeated trial and error suggests it might remain a figment of your imagination.
As Jama product manager Derwyn Harris likes to say, traceability is the process of connecting data, people and work. It sounds simple but the challenge is that traceability is too often treated as a kind of checklist.
To add real, measurable value to your team’s product delivery process, traceability needs to show you how every item, action and actionable item connects to each person working on it; it needs to illustrate how your people are connected to each step of the process.
Of course, decisions drive the actions in each product delivery cycle. Taken from our webinar, Evolve Your Definition of Traceability, below is a partial, simplified outline of the decision questions traceability needs to answer for each stakeholder and team member, from the point of original concept through the stages of define, build, test and launch:
Decision needed:
Whom do I need to ask?
What’s the best way to communicate with decision makers?
Do I have all the necessary context to understand the reason for this decision, or the problems associated with it?
Decision in progress:
How will this tie in with and affect what we’ve already agreed to?
Can we make this discussion transparent so we can react in real time?
How can we determine what impact making this decision will have?
Decision made:
When the next iteration requires more decisions, how can we track them?
What’s the best way to notify stakeholders and key team members about changes that are relevant to them?
Our product’s history is in millions of critical details; how do we provide context and rationale for each choice made?
When your teams work in different time zones, on different product-related projects or in siloed areas of expertise, static methods of tracking data for impact analysis and coverage fall short. Product managers need a live environment for real-time collaboration that tracks relationships between people and data—whether you’re building software, hardware or a combination of the two. To see a demo of how traceability works with Jama, grab a coffee, tea or a snack and check out our webinar.
Today, every product launch involves many people and thousands of decisions. Apply real-time traceability to them, achieve product delivery accountability and stop chasing after the unicorn.
Marc Oppy | 2015-01-29 | The Product Manager’s Quest: A Unicorn Named Traceability