Tag Archive for: traceability

 

traceable agile development

Traceable Agile – Speed AND Quality Are Possible for Software Factories in Safety-critical Industries

Automotive, aerospace and defense, and industrial companies have largely adopted Agile within rapidly growing software factories to speed time to market and stay competitive. These software factories have largely succeeded in accelerating software development, but maintaining quality remains a key concern. The inability to coordinate development across engineering disciplines has led to product recalls and quality complaints, and has created significant internal challenges in satisfying regulators' functional safety requirements and confidently delivering high-quality software. These challenges, and their resulting outcomes, are often so severe that software factory leaders have been let go.

Fundamental Questions We Hear

When we ask software factory leaders what keeps them up at night, we consistently hear the following five questions:

  • How do I know which product requirements have been missed?
  • How do I know which product requirements are not fully covered by test cases?
  • How do I know which product requirements have failed to pass tests?
  • How do I identify rogue development activity?
  • How do I know if changes have been made at the system and / or hardware level that impact the software team?

These are fundamental questions that should be answerable from leading Agile tooling, but they are not. The reason is that Agile tools focus on tasks (define, assign, status, complete, delete) and have no notion of the current and historical state of the project. Because tasks are not tied to any state of the project, work often drifts from the actual needs and requirements of your customer or end user. As a result, these questions are not answerable with Agile tools like Jira and Azure DevOps. Project management tools like Jira Align answer important questions around staffing, sprint planning, and cost allocation, but do not address the critical questions above, which concern the real-time state of the software development effort against the approved requirements.


RELATED: What is a Scaled Agile Framework (SAFe) and How Does it Help with Complex Software Development?


The Answer? Traceable Agile.

How do you best speed software and overall product development while still achieving the quality expectations of customers and company leadership? The answer is Traceable Agile. Traceable Agile speeds the FLOW of software development but also maintains the current and historical STATE of the development effort and auto-detects issues early in the software development process. Traceable Agile recognizes that developer activity is best managed as a FLOW using tasks in a tool such as Jira. What is needed to achieve Traceable Agile is to pair Jira with a system that manages the STATE of the development effort at all times. By keeping STATE and FLOW tools separate but integrated, no change is required to software developers' processes and tools. This is significant. Software leadership can now answer their critical questions without undergoing a major process and tool change with resistant developers, which would slow down development and/or increase staff attrition.


RELATED: How to Achieve Live Traceability™ with Jira® for Software Development Teams


So how does Traceable Agile work in practice?

Here is an overview and diagram of Jama Connect® maintaining the STATE of development activity and Jira providing the FLOW.

  1. Task activity continues as normal in Jira and risk is auto-detected in Jama Connect by comparing all user stories and bugs in Jira to the expected development and test activity for each requirement in Jama Connect.
  2. All exceptions are identified — the ones that answer the questions that keep software factory leadership up at night — such as requirements with no user stories, user stories with no requirements, requirements with no test cases or test results, etc.
  3. After the exceptions are inspected in Jama Connect, management can take action and assign corrective tasks in Jira as just another task in the queue for a developer.
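The exception checks in step 2 can be sketched in a few lines of Python. This is an illustrative sketch using simple in-memory records; the item IDs and field names are hypothetical and do not reflect Jama Connect's or Jira's actual APIs.

```python
# Illustrative data: requirements with their linked Jira stories and test
# cases, and stories with the requirement they trace to (None = rogue work).
requirements = {
    "REQ-1": {"stories": ["STORY-10"], "tests": ["TC-5"]},
    "REQ-2": {"stories": [], "tests": ["TC-7"]},      # no user stories
    "REQ-3": {"stories": ["STORY-12"], "tests": []},  # no test coverage
}
stories = {"STORY-10": "REQ-1", "STORY-12": "REQ-3", "STORY-99": None}

def find_exceptions(requirements, stories):
    """Compare actual activity against expected traceability and flag gaps."""
    return {
        "requirements_without_stories": [
            r for r, v in requirements.items() if not v["stories"]],
        "requirements_without_tests": [
            r for r, v in requirements.items() if not v["tests"]],
        "stories_without_requirements": [
            s for s, req in stories.items() if req is None],
    }

exceptions = find_exceptions(requirements, stories)
```

Each non-empty list is an exception queue that management can triage into corrective Jira tasks, as described in step 3.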

 

traceable agile software development

 


RELATED: Extending Live Traceability™ to Product Lifecycle Management (PLM) with Jama Connect®


This is a fully automated process that leverages automated synchronization of metadata between Jira and Jama Connect via Jama Connect Interchange™. The only metadata that needs to be synchronized from Jira to make Traceable Agile possible is as follows: ID, Created Date, Creator (User), Modified Date, Modifier (User), Title, Status, Link (URL), and Relationships. When inspecting an issue in Jama Connect, one simply clicks the link to open it in Jira if more information is needed to diagnose.
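The nine synchronized fields can be modeled as a simple record. The following sketch is illustrative only; it is not Jama Connect Interchange's actual schema, and the sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SyncedJiraItem:
    """The nine metadata fields synchronized from Jira, as listed above."""
    id: str
    created_date: str
    creator: str
    modified_date: str
    modifier: str
    title: str
    status: str
    link: str                                   # URL back to the Jira issue
    relationships: List[str] = field(default_factory=list)

# Hypothetical example of one synchronized issue.
item = SyncedJiraItem(
    id="PROJ-42", created_date="2023-08-01", creator="adev",
    modified_date="2023-08-03", modifier="adev",
    title="Implement login flow", status="In Progress",
    link="https://example.atlassian.net/browse/PROJ-42",
    relationships=["REQ-7"],
)
```

The `link` field is what lets a reviewer jump from an exception in Jama Connect straight to the underlying Jira issue.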

Many of our leading clients have already implemented Traceable Agile and are significantly improving their Traceability Score™ which we have demonstrated leads to superior performance on quality metrics in our Traceability Benchmark Report.

Feel free to reach out to me to learn more and I will respond.


RELATED: In this video, we demonstrate and discuss Traceable Agile™ and how speed and quality are possible for software factories in safety-critical industries.



Image showing currency, meant to portray the importance of investing in a Requirements Management and Traceability Solutions as a wise financial choice.

A Wise Investment: Requirements Management and Traceability Solutions During an Economic Downturn

In the realm of business, the economy is a dynamic force that ebbs and flows, much like the tide. Economic downturns, while challenging and sometimes scary, can also present unique opportunities for businesses to reevaluate their strategies, streamline their operations, and invest wisely for future growth. One such investment — that might not be immediately obvious but holds immense potential — is in requirements management and traceability solutions. In this blog post, we’ll explore why it makes sense to invest in these solutions during an economic downturn.

1. Enhanced Efficiency and Resource Optimization:

In times of economic uncertainty, operational efficiency becomes paramount. Requirements management and traceability solutions provide a structured framework for capturing, organizing, and tracking project requirements throughout their lifecycle. By optimizing requirements management processes, businesses can ensure that resources are allocated to the most critical aspects of a project. This reduces the risk of scope creep, minimizes wasted effort, and enhances overall project efficiency. With a clear understanding of project goals and dependencies, teams can work cohesively, avoiding unnecessary and costly duplication of work while allocating resources where they are most needed.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Software Development


2. Risk Mitigation:

Economic downturns often come with increased financial constraints, so allocating resources to any new software investments might seem counterintuitive. But investing in requirements management and traceability solutions can truly act as a risk mitigation strategy. The right requirements management and traceability solutions facilitate comprehensive end-to-end impact analysis, allowing businesses to understand how changes to requirements can affect other aspects of the project or organization. By foreseeing any potential pitfalls and addressing them proactively, companies can increase process efficiency, minimize costly errors, rework, and recalls, and streamline development to accelerate time to market — ultimately safeguarding their investments in both time and resources.

3. Regulatory Compliance and Quality Assurance:

In certain industries, compliance with regulatory standards is non-negotiable. Implementing robust requirements management and traceability solutions can streamline the process of documenting and demonstrating compliance. These solutions enable clear documentation of how each requirement maps to relevant regulations, making audits smoother and reducing the risk of non-compliance penalties. Moreover, well-managed requirements also lead to improved quality assurance practices, ensuring that products or services meet the desired standards even during challenging economic periods.

4. Agility and Adaptability:

Economic downturns often require businesses to pivot their strategies quickly to address changing market dynamics. Requirements management and traceability solutions provide a foundation for agile decision-making. When requirements are well-documented and linked, it becomes easier to assess the impact of changes, make informed decisions, and adapt to shifting priorities without causing disruptions. This agility allows businesses to seize new opportunities and respond to market demands more effectively.


RELATED: Requirements Traceability Diagnostic


5. Long-Term Cost Savings:

While the initial investment in requirements management and traceability solutions might seem significant, it pales in comparison to the potential long-term cost savings. When requirements are managed efficiently, projects are less likely to overrun budgets or experience delays due to misunderstandings or miscommunications. The cost of fixing issues after they’ve occurred is far higher than preventing them in the first place. By investing in proper requirements management, businesses can avoid the financial strains that arise from project failures or inefficiencies.

Conclusion:

In the face of economic uncertainty, investing in requirements management and traceability solutions might not be the most obvious choice, but it’s certainly a strategic one. These solutions offer a structured approach to managing projects, reducing risks, enhancing efficiency, ensuring compliance, and promoting adaptability. By making this investment, businesses position themselves for not only surviving economic downturns but also thriving in the long run. As the tide of the economy inevitably turns, those who have laid a strong foundation in requirements management will be better equipped to ride the waves of change.

Download the complete eBook to access simple, interactive ROI calculators and learn the financial benefits of investing in a requirements management solution during an economic downturn >>
Why Investing in Requirements Management During an Economic Downturn Makes Good Business Sense



Image showing V Model for Validation and Verification

Best Practices for Verification and Validation in Product Development

In the competitive landscape of modern product development, ensuring the reliability and quality of the product is essential to meet customer and stakeholder expectations as well as regulatory requirements. Verification and validation (V&V) are two crucial processes that play a pivotal role in achieving these goals. V&V are systematic methods that assess a product's adherence to specifications and its ability to perform as intended. In this article, we will delve into the best practices for verification and validation in product development, exploring the key steps, methodologies, and benefits of each process.

Understanding Verification & Validation

Before delving into the best practices, it is essential to clarify the distinction between verification and validation. Verification focuses on assessing whether a product meets its design specifications, ensuring that each component and feature works as intended. On the other hand, validation is concerned with evaluating whether the product fulfills its intended use and customer needs. In essence, verification confirms if the product is designed correctly, while validation confirms if it is the right product for the intended application.


RELATED: Five Key Design Control Practices that Improve Compliance and Help Develop Better Products


Incorporating V&V Early in the Development Lifecycle

To maximize the effectiveness of verification and validation, these processes must be integrated into the product development lifecycle from its early stages. By starting V&V activities early, potential issues can be identified and resolved before they escalate, reducing costs and time-to-market. Early involvement also allows for feedback to be incorporated into the design, leading to a more robust and reliable final product.

V Model image showing Verification and Validation in the Product Development Process

Clearly Defined Requirements

Well-defined requirements are the foundation of successful verification and validation. During the requirements gathering phase, it is vital to engage stakeholders and subject matter experts to create clear, measurable, and unambiguous specifications. These requirements serve as the baseline against which the product will be verified and validated. Proper documentation and version control are critical to ensure that changes to requirements are tracked effectively. Additionally, the later in the development process a requirement changes (often because it wasn't written well the first time), the more costly the change becomes due to downstream impacts such as rework in verification and validation.


RELATED: Plutora: Verification vs Validation: Do You know the Difference?


Utilizing Various V&V Techniques

Product development teams should employ a mix of V&V techniques to comprehensively assess the product’s quality. Some commonly used methods include:

  • Testing: Conduct thorough testing, including unit testing, integration testing, system testing, and user acceptance testing, to verify that each component and the product as a whole performs as expected.
  • Simulation: Use computer simulations to evaluate the product’s behavior in various scenarios, particularly for complex systems or when physical testing is impractical or cost-prohibitive.
  • Prototyping: Build prototypes early in the development process to enable real-world testing, uncovering potential design flaws and usability issues.
  • Peer Reviews: Encourage regular peer reviews of design documents, code, and other artifacts to catch errors and improve the overall quality of the product.
  • Model-based Design: Utilize model-based design approaches, such as Model-Driven Architecture (MDA), to create detailed models that can be verified before implementation.
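The first technique above, unit testing, can be illustrated with a short Python sketch. The `clamp` function here is a hypothetical unit standing in for a component under verification; the tests check it against its specification.

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high] (the spec)."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    """Verification: does the component behave as its specification says?"""
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)
```

Run with `python -m unittest` to execute the checks. The same structure scales up from unit to integration and system tests.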

Risk-Based Approach

Incorporate a risk-based approach into V&V activities to focus resources on critical areas. Identify potential risks associated with product failure and prioritize verification and validation efforts accordingly. This approach ensures that resources are allocated efficiently, concentrating on areas with the most significant impact on product performance and safety.

Independent Verification and Validation (IV&V)

Consider engaging external experts or teams for independent verification and validation. External parties can provide an unbiased assessment of the product, uncovering issues that internal teams might overlook due to familiarity or assumptions. Independent verification and validation bring additional expertise and ensure a higher level of confidence in the product’s quality.


RELATED: How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI)


Continuous Integration and Continuous Delivery (CI/CD)

Implementing CI/CD practices allows for continuous verification and validation throughout the development process. Automated testing and deployment pipelines can quickly detect regressions and integration issues, ensuring that the product remains stable and reliable throughout its evolution.

Documenting V&V Activities

Comprehensive documentation of all verification and validation activities is essential for compliance, knowledge retention, and continuous improvement. Properly documented V&V processes help maintain a historical record of changes, failures, and resolutions, facilitating future product iterations and troubleshooting.

Verification and validation are integral to successful product development, ensuring that products meet the required specifications and perform as intended. By adopting best practices such as early integration, clear requirements, a mix of V&V techniques, risk-based approaches, and continuous verification, companies can create high-quality, reliable products that customers love and gain a competitive edge in the market. Moreover, investing in verification and validation from the outset of development can save time and resources, prevent costly delays, and lead to higher customer satisfaction and loyalty in the long run.



In this blog, we recap our webinar, “Manage by Exception: Data-driven Practices to Improve Product Quality”. Click HERE to watch the entire webinar.


Curious how data-driven practices unlock successful product delivery?

Our recent webinar explores the transformative approach of managing by exception in reducing product failure risk. In this session, we walk through why managing by data is crucial, how data “exceptions” uncover gaps, and real-life examples in product development.

During this informative session, Preston Mitchell, VP, Global Solutions at Jama Software®, offers insights on how Jama Connect® helps teams proactively prevent gaps in requirement quality and traceability to streamline their product delivery process.

Check out this webinar to learn:

  • Why data-based management is important
  • The definition of a data “exception” and how it uncovers gaps
  • Examples of “exceptions” in daily product development and requirements management
  • How Jama Connect’s unique features, such as Advanced Filters and Dashboards, can help your team manage by exception
  • How to proactively prevent exceptions using Jama Connect Advisor™ and Live Traceability
  • How Jama Connect can help your team manage by exception and navigate product development with precision

Below is an abbreviated transcript of our webinar.


Manage by Exception: Data-driven Practices to Improve Product Quality

Preston Mitchell: Hello everyone and thank you for joining today. My name is Preston. I’m the VP of our solutions department at Jama Software, and I lead our rockstar team that delivers solutions and services for all of Jama Software’s customers. I’ve been with Jama Software for over 10 years and have held several positions within the company. Over the course of my time here, through hundreds of client engagements to onboard and deploy Jama Connect, I have learned a lot from our customers, and our customers really are our inspiration. They’re building next-generation products like self-driving cars, life-saving medical devices, and futuristic robots, and the thread that ties all of these customers together is the central theme of how we can make better decisions to improve the success rate of the R&D or product development function. So I’m really excited to talk to you all today about the theme of managing through data to do just that. How can we bring measurable improvement to your process?

So for the agenda today, we’re going to talk about the power of data, how Jama Software empowers our customers to use data and exception management, and some key measurements that we prioritize such as requirements quality and the traceability score. And then finally we’ll close out with how you can plan for success in this and just some Q&A from the audience. So we’ll have my colleagues helping out with the chat. Juliet’s going to share some of the questions, so don’t hesitate to use the chat to ask questions.


RELATED: How to Develop IoT Products with Security in Mind


Mitchell: All right, so it should be obvious to most, but managing through data brings several benefits to your organization. Software is a part of our day-to-day work, and it’s enabled an exponential increase in collaboration and visibility. Increasing visibility into that critical data and the workflows allows teams to have a more shared understanding of the goals, the problems, and the action items that all go into making successful products. And rolling up this data allows R&D and product development leaders to have more real-time metrics and make better business decisions. So when you start to manage through data, this increased visibility really encourages process improvement and professional growth. But at the same time, there’s a challenge that comes along with this: the increase in the amount of available data is often overwhelming, given that the time you have in a day is a fixed resource.

We want to make this a little bit interactive. I’d be curious to hear from the audience, how do you use or maybe how do you not use data today in your decision-making with regards to developing new products? So Juliet, why don’t we pull up our first poll? What’s the primary method that your organization uses for major decisions in the development of the products and systems that you build? So we’ll give folks about 20 seconds to answer this.

Okay, and I see some interesting results coming in here so far. Well, I know it’s hard to pick just one primary as the reality is there are likely multiple of these here for really large decisions. I was wondering how many folks would pick the first and the last option. Intuition or just plain not sure. So let’s move forward here.

I have linked a very interesting Harvard Business Review article called Don’t Trust Your Gut. But if I were to summarize it, intuition is often glorified quite a bit in the business world, especially when people are wildly successful. So for example, if you make a big business or personal bet that pays off, these are often celebrated. But in business, we hear a lot about failures too, and they’re often blamed on things like poor timing, poor market fit, and maybe a lot of poor execution, but one adjacent failure symptom is the lack of an alarm to trigger a change. So we often hear the old adage that it’s better to fail fast than to fail late, so you have a chance to course correct.


RELATED: Reduce Project Risk in the Product Development Process


Mitchell: With the right data and the right alarm triggers, this is possible. The customers that Jama Software works with have smart engineers, product managers, and business analysts, but people are oftentimes biased and emotional, and they can play a real role in making bad decisions that eventually lead to some sort of R&D or product development failure. When your engineering leaders, or even you yourself, don’t have the data on execution progress, when your teams are not actually tracing requirements to the why or the need for customer validation, and when you don’t have insight into things like verification coverage, all that missing data means you’re going to encounter these problems way too late in the development cycle.

And we see this very often in the news, these failures that happen too late. Investigations happen, and recommendations are made, but how can we make data available to the right people so that we can prevent these issues from ever occurring in the first place? That’s what we’re going to talk more about today. And as the famed management guru, Peter Drucker said, “If you can’t measure it, you can’t improve it.” So being able to use data to measure allows your teams to see recurring patterns or anomalies and then individuals can then take care of these before they become a larger problem. Or better yet, how can we create preventative measures and automation to improve the process overall?

So that leads us to the key principle that we’re going to talk about today: management by exception. Management by exception is a methodology that’s meant to empower your team with data around early warning indicators so that you can make smarter and faster decisions. It also allows leadership to focus their time on the exceptions rather than micromanaging or intervening with the teams when the majority of the engineering data shows that product development is going as expected. And I really want to reemphasize that, because it’s not meant to micromanage; in fact, it should lessen that. A common hurdle teams face when introducing a change like this, transforming the organization to manage through data, is resistance.

To watch the entire webinar, visit:

Manage by Exception: Data-driven Practices to Improve Product Quality



CMMI Blog Part 2

In part two of this two-part blog series, we continue the overview of our recent whitepaper, “How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI) with Live Traceability™.” Click HERE for part one of this blog and HERE to read the entire whitepaper.


How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI): Part 2

Benefits of Live Traceability™

The main benefits of Live Traceability across best-of-breed tools are as follows:

  • Reduce the risk of delays, cost overruns, rework, defects, and recalls with early detection of issues through exception management and save 40 to 110 times the cost of issues identified late in the process.
  • Achieve CMMI Level 2 maturity for Requirements Management with no after-the-fact manual effort.
  • Eliminate disruption to engineering teams that continue working in their chosen best-of-breed tools with no need to change tools, fields, values or processes.
  • Increase productivity and satisfaction of engineers with the confidence that they are always working on the latest version, reflective of all changes and comments.

Another core goal of CMMI Level 2 is to involve stakeholders in the requirement review and approval process (see table below). Let’s examine how companies achieve this goal either through meetings or online reviews.

CMMI Level 2 (Managed) Requirements Management

CMMI Chart

There are two ways to implement this practice: meetings or online reviews. Most engineering organizations still address stakeholder approvals through large and lengthy meetings in which all relevant engineering disciplines scroll through the requirements document for feedback. This is a highly inefficient approach that negatively impacts engineering productivity and morale, and, given the format, fails to capture relevant comments, feedback, revisions, and approvals from stakeholders. More mature engineering organizations have brought the review and approval process online to improve the quality and timeliness of feedback, capture all version and approval histories, and improve engineer productivity and morale. Let’s examine how companies have brought reviews online with Jama Connect® Review Center.

Review Center allows teams to send product requirements for review, define what’s required, invite relevant stakeholders to participate, collaborate, and iterate on resolving issues and approving agreed-upon requirements. By simplifying the revision and approval process, Review Center streamlines reviews and facilitates collaboration, giving stakeholders easy access to provide feedback where required. Jama Connect enables both informal and formal online review processes to support this CMMI best practice.


RELATED: Extending Live Traceability™ to Product Lifecycle Management (PLM) with Jama Connect®


Formal Reviews

The formal review process enabled by Review Center is shown below:

Formal Review Center Chart

Review Center enables teams to define a review, invite participants, gather and incorporate feedback from relevant project stakeholders, iterate, track a review’s overall progress, and capture approval signatures if required. Reviewers can respond to a conversation that’s taking place, as well as mark items as “Approved” or “Rejected” to complete the review. Inside Review Center, reviewers can also add electronic signatures to reviews in order to comply with regulatory standards. Jama Connect captures the date and time of completed reviews for auditing, tying each signature to the document under review.

Informal Reviews

Organizations that want the quality review aspects of Jama Connect but are not bound to producing formal requirements documents may take a more iterative approach. A “rolling” review changes the scope of which requirements are included in each revision. For example, each requirement has a “state” field: Draft, Ready for Review, or Approved. In the project side of Jama Connect, requirement owners mark requirements they feel are “Ready for Review,” and using a Jama Connect Advanced Filter, a review is started by pulling in only requirements marked “Ready for Review.” Using this methodology, the review is much smaller in scope and can typically be completed faster.

On a regular cadence, the moderator will review feedback, make changes to requirements as necessary (moderators can also edit requirements directly in the review based on feedback from Approvers), or update the requirement status to “Approved” once the required stakeholders have approved it. When publishing a new revision, Jama Connect will pull new requirements into the review and cycle out requirements that are “Approved” (these requirements no longer meet the filter criteria of state = “Ready for Review”). This allows teams to review requirements on a regular cadence, or sprint, cycling requirements into the review when they are ready for feedback and out of the review when they are “Approved.” Almost any item of content you create in Jama Connect may be sent for review, including requirements, design, test cases, test plans, and test cycle results.
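The rolling-review mechanism described above boils down to a state filter. This is a conceptual sketch with hypothetical item IDs and field names, not Jama Connect's Advanced Filter syntax.

```python
# Illustrative requirements, each with a "state" field as described above.
requirements = [
    {"id": "REQ-1", "state": "Approved"},          # cycles out of the review
    {"id": "REQ-2", "state": "Ready for Review"},  # included in next revision
    {"id": "REQ-3", "state": "Draft"},             # not yet ready
    {"id": "REQ-4", "state": "Ready for Review"},
]

def next_review_revision(requirements, state="Ready for Review"):
    """Return the IDs in scope for the next review revision: only items
    matching the filter state, so approved items cycle out automatically."""
    return [r["id"] for r in requirements if r["state"] == state]

scope = next_review_revision(requirements)
```

Each time a requirement's state changes to "Approved," re-running the filter drops it from the next revision and admits any newly ready items.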


RELATED: Tracing Your Way to Success: The Crucial Role of Traceability in Modern Product and Systems Development


“Review Center is facilitating communication. It has ensured a shared view of the world and agreement from all stakeholders. There are no surprises anymore. Jama Connect enables us to review documents and make decisions easily with everyone coming to a shared conclusion. If we compare it to reviewing the spreadsheets and Word documents versus doing a review in Jama Connect Review Center, it’s about an 80% reduction in time, for sure.” – Craig Grocott, Head of Systems Engineering

Achieving CMMI Level 2 requires defining a development process and adhering to it. Below is a core goal for CMMI Level 2: evaluate adherence to the requirements management process.

CMMI Level 2 (Managed) Requirements Management

CMMI Table

Achieving this goal requires the ability to decompose requirements across engineering disciplines and maintain traceability up and downstream as the project progresses through significant changes and rework. Without an underlying system architecture and common data model, this goal becomes unattainable for most organizations. Attempts to manage through Word and Excel become unwieldy and unable to meet the requirements for Live Traceability, leading to defects, delays, cost overruns, and recalls. Below, you can see how easy it is to manage traceability and view multiple levels up and downstream in a trace view of requirements in Jama Connect. Jama Connect’s Traceability Model defines the data model across best-of-breed tools to capture actual behavior for traceability and management by exception.

Trace View

Achieving CMMI Level 3 requires defining a development process and adhering to it. Below is a core goal for CMMI Level 3: establishing a verification process and adhering to it.

CMMI Level 3 (Defined) Verification

CMMI Level 3

Companies are achieving this goal through Jama Connect by establishing a Traceability Model that requires test verification for requirements and managing by exception through dashboard reporting to ensure verification happens across all requirements. Below is a sample verification dashboard to achieve this goal, with customer-specific info redacted. Here you can see how the Verification Leader manages their function through exception management. Specific widgets on the dashboard track requirements without tests, failed tests, tests without requirements linked to verify, bugs without tests, and risks without upstream or downstream traceability. The Traceability Model established in Jama Connect defines the expected behavior against which all activity can be compared to generate exceptions that can be managed through the dashboard. Without this system architecture and data model, managing by exception becomes extremely manual and productivity-killing, if not impossible.
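The dashboard widgets listed above are, at bottom, set comparisons between actual test activity and the expected traceability model. Here is a small conceptual sketch with hypothetical records; it is not Jama Connect's real data model or dashboard API.

```python
# Illustrative test records: which requirement each test verifies (None =
# orphan test) and its latest result.
tests = [
    {"id": "TC-1", "verifies": "REQ-1", "result": "PASS"},
    {"id": "TC-2", "verifies": "REQ-2", "result": "FAIL"},
    {"id": "TC-3", "verifies": None,    "result": "PASS"},  # orphan test
]
requirement_ids = {"REQ-1", "REQ-2", "REQ-3"}  # REQ-3 has no test at all

# Requirements that at least one test claims to verify.
verified = {t["verifies"] for t in tests if t["verifies"]}

# Each entry corresponds to one dashboard widget described above.
dashboard = {
    "requirements_without_tests": sorted(requirement_ids - verified),
    "failed_tests": [t["id"] for t in tests if t["result"] == "FAIL"],
    "tests_without_requirements": [t["id"] for t in tests if t["verifies"] is None],
}
```

A non-empty widget list is an exception for the Verification Leader to act on; empty lists mean verification is proceeding as the model expects.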

CMMI Level 4 requires organizations to have developed predictive scores and benchmarks that enable management to identify product development risk early and remediate at much lower cost than if not identified until late in the development process or after product release into the market. The table below shows the definition of this core, Level 4 goal.

CMMI Level 4 (Quantitatively Managed) Process Performance


Leading companies are achieving this goal by applying Jama Software’s Traceability Score™ and benchmarking engineering projects internally and externally against peer companies. Jama Software is the first to measure traceability thanks to our clients’ participation in a benchmarking dataset of over 40,000 complex product development projects spanning aerospace, automotive, consumer electronics, industrial, medical device, semiconductor, space systems, and more. All of this is made possible by our core product, Jama Connect®, which enables the largest community of engineers using requirements management SaaS (Software as a Service) in the world.

To formally measure traceability, we have established the Traceability Score. The Traceability Score measures the level of actual process adherence to the expected traceability model and can be used to compare performance across projects, teams, divisions, and companies. This score can also determine impacts to schedule, budget, cycle times, risk, and quality.


RELATED: New Research Findings: The Impact of Live Traceability™ on the Digital Thread


Traceability Score definition

Traceability Score = # of established relationships ÷ # of expected relationships among model elements, as specified by the project's traceability model.

The following diagram provides an illustration for the buildup of the calculation:

  1. At the individual requirement level, we can identify each expected relationship defined in a project’s traceability model (i.e., user needs defined by requirements, further refined by sub requirements, and test cases that should verify the requirement, etc.). We can then identify how many of these relationships have been established to get an individual requirement’s traceability.
  2. As we go one level higher and measure traceability within a particular element type (e.g., user needs, requirements, tests, etc.) we can sum up the number of expected and established relationships across the set of items, giving us traceability at the element type level.
  3. Finally, we can sum up the number of expected and established relationships across all element types, giving us the project’s total Traceability.
Chart showing three levels of traceability
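The three-level buildup above can be sketched in code. This is a minimal sketch, assuming the score is the ratio of established to expected relationships; the rollup is computed the same way at the element-type and project levels:

```python
def item_traceability(established, expected):
    """One item's traceability: the share of its expected relationships that exist."""
    return len(established & expected) / len(expected) if expected else 1.0

def rollup(items):
    """Aggregate for an element type or a whole project:
    total established relationships over total expected relationships."""
    established = sum(len(e & x) for e, x in items)
    expected = sum(len(x) for _, x in items)
    return established / expected if expected else 1.0

# Each pair: (established relationship ids, expected relationship ids) for one item
reqs = [({"t1", "u1"}, {"t1", "u1"}),   # fully traced requirement
        ({"t2"}, {"t2", "u2"})]         # missing its upstream user-need link
```

With these two requirements, `rollup(reqs)` reports 3 of 4 expected relationships established (0.75); adding an unlinked test drops the project-level score accordingly.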

Correlations & Hypothesis Test Results

As a process management tool, the value of a Traceability Score is to quantify actual adherence to the specified approach. To determine best practices from the data, statistical tests were run to understand how differing levels of project adherence to Live Traceability impact desired outcomes. As we have shown, the Traceability Score measures actual adherence to the defined traceability model. The systems engineering discipline, the V-model, quality engineering, and more all rely on the intuition that this approach yields better results. Anecdotal evidence abounds to support this intuition, but a dataset large enough to test the hypothesis statistically has been lacking. Using our dataset, we determined that Traceability Scores exhibit statistically significant correlations with the following outcomes, and we rejected the null hypothesis that these correlations were purely random.

1. Faster time to market

The first three tests focus on how Traceability Scores impact cycle time. Do higher Traceability Scores lead to faster test case execution and defect identification? This is a fundamental value asserted by systems engineering and the V-model: that earlier detection of defects leads to fewer delays and a much lower cost to correct. We measured the times below and noted performance improvements of 2.1X to 5.3X in top versus bottom performers. Higher Traceability Scores were found to lead to faster test case execution and defect detection, passing both of our statistical tests.

  1. Median Time to Execute Test Cases (2.6X faster)
  2. Median Time from Test Start to Defect Detection (5.3X faster)
  3. Median Time to Identify the Set of Defects (2.1X faster)

2. Higher quality

The remaining tests focus on how Traceability Scores impact quality. Do higher Traceability Scores lead to a higher-quality product? This is a fundamental value asserted by systems engineering and the V-model: that a commitment to test case creation and execution leads to a higher degree of requirement verification and product quality. We measured the aspects of testing and verification below and noted performance improvements of 1.9X to 2.9X in top versus bottom performers. Higher Traceability Scores, having passed both of our statistical tests, led to more tests being completed and a higher percentage of passed tests.

  1. Percent of Requirements with Verification Coverage (1.9X higher)
  2. Percent of Requirements Verified (2.1X higher)
  3. Initial Test Case Failure Rate (2.4X lower)
  4. Final Test Case Failure Rate (2.9X lower)

Conclusion

The CMMI defines its best practices in terms of goals, practices, and artifacts. It does not address the underlying systems and data architecture required to enable these practices, deliver these artifacts, and achieve these goals. For most engineering organizations, the systems-architecture reality is highly fragmented: the data needed to manage the engineering product and process (user needs, system-level requirements, approvals, component-level requirements, model designs, component requirement decompositions, interface definitions, test cases, test results, risk analysis, validations, traceability analysis, etc.) is spread across hundreds of siloed tools, spreadsheets, emails, and chat threads, with little certainty that any given piece of information reflects the latest version or is kept current with all of its interdependencies.

As we have shown, it is extremely challenging if not impossible to move up the CMMI maturity model without addressing the underlying systems architecture and data model. Carnegie Mellon has chosen to use our software to train their students and leading companies have deployed Jama Connect in the ways noted above to achieve their CMMI objectives.

For those interested in exploring this topic further, we encourage you to reach out and have a conversation with one of our experts.

Sources:
https://www.cmmi.co.uk/cmmi/cmmi.html
https://resources.jamasoftware.com/whitepaper/requirements-traceability-benchmark
This has been part two of a two-part blog series overviewing our recent whitepaper, “How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI) with Live Traceability™.” Click HERE to read the entire thing.


In part one of this two-part blog series, we provide an overview of our recent whitepaper, “How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI) with Live Traceability™.” Click HERE to read the entire thing.


How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI): Part 1

The Capability Maturity Model Integration (CMMI), developed at Carnegie Mellon University’s Software Engineering Institute, is a recognized standard for engineering best practices that reduce the risk of defects, delays, cost overruns, and recalls. Organizations that choose to adopt CMMI strive to progress up the five levels in the maturity model by implementing sequentially more advanced best practices spanning the engineering development process.

Jama Software® is honored to be chosen by Carnegie Mellon as the primary tool used in its Master of Science in Software Engineering program to train the next generation of software engineering leaders in best practices for requirements management, reviews, verification, validation, and process performance management.

The CMMI defines its best practices in terms of goals, practices, and artifacts. It does not address the underlying systems and data architecture required to enable these practices, deliver these artifacts, and achieve these goals. For most engineering organizations, the systems-architecture reality is highly fragmented: the data needed to manage the engineering product and process (user needs, system-level requirements, approvals, component-level requirements, model designs, component requirement decompositions, interface definitions, test cases, test results, risk analysis, validations, traceability analysis, etc.) is spread across hundreds of siloed tools, spreadsheets, emails, and chat threads, with little certainty that any given piece of information reflects the latest version or is kept current with all of its interdependencies.

The main reason for this landscape of siloed tools is that each engineering discipline is empowered to choose a best-of-breed tool to optimize engineer productivity within its team. The breadth of functionality covered by all of these tools, spanning every engineering discipline, precludes any single vendor from providing one tool that could replace them all to the satisfaction of every engineer. As a result, the data needed to achieve CMMI goals, practices, and artifacts is unstructured, unrelated, unconnected, and unmeasurable, which poses a serious challenge for goals, practices, and artifacts that must span multiple disciplines to control, manage, and improve the engineering process. To advance along the maturity model, each engineering organization (regardless of size) needs a unified data model architecture and automated synchronization spanning best-of-breed tools. Without these, most engineering organizations struggle to achieve even Level 2 (Managed), and can do so only in a highly manual, after-the-fact manner that generally fails to deliver the desired outcome benefits.

Let’s take a look at a few specific examples from CMMI to demonstrate the need for a unifying data model and an overview of how to achieve it. The first one we will examine is a core practice from the Requirements Management section for Level 2 (Managed) that specifies bidirectional traceability from high level requirements through decomposed requirements and work products across engineering disciplines to generate and maintain a traceability matrix.

CMMI Level 2 (Managed) Requirements Management



RELATED: Tracing Your Way to Success: The Crucial Role of Traceability in Modern Product and Systems Development


There are two ways companies can approach achieving this traceability practice: after-the-fact traceability or Live Traceability™.

  • After-the-fact traceability occurs after the product has been developed and is typically a highly manual effort to try and re-create artifacts to demonstrate traceability that should have occurred during the development process but did not. This effort is undertaken solely to comply with industry standards and satisfy auditor requests for demonstration of process maturity.
  • Live Traceability occurs in real time as the product development process progresses to improve overall productivity (by ensuring engineers across disciplines are always working off the most recent and correct versions) and to reduce the risk of negative product outcomes (delays, defects, rework, cost overruns, recalls, etc.) through early detection of issues. The benefits of early detection of issues are significant. Research by INCOSE found that issues not found until verification and validation are 40 to 110 times more costly than if found during design. For this reason, most companies want Live Traceability but are stuck with legacy tools and spreadsheets that do not support it. Since each engineering discipline is allowed to choose its own tooling, the result is a large number of tools with no relationship rules or mechanisms to create Live Traceability across them.

So how do you achieve Live Traceability?

STEP 1: Define a Traceability Model

Live Traceability requires a model of the key process elements and their relationship rules to monitor during the development process. Below is a sample relationship rule diagram from Jama Connect® that defines a common data model spanning best-of-breed tools, enabling engineering organizations to manage traceability in real time and improve process performance. Relationship rules vary by industry and company-specific requirements. Best practice templates are provided to comply with industry standards and can be configured to meet client-specific needs. The definition of a traceability model forms the foundation for model-based systems engineering (MBSE), since it defines model elements and their relationships to each other in a consistent manner across the entire system architecture.
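A relationship rule model of this kind can be represented as a simple adjacency map plus a link validator. This is an illustrative sketch; the element-type names are assumptions, not Jama Connect's actual schema, and real rules vary by industry and configuration:

```python
# Allowed downstream relationships per element type (illustrative model only)
RULES = {
    "user_need": {"requirement"},
    "requirement": {"sub_requirement", "test_case", "risk"},
    "sub_requirement": {"test_case"},
    "test_case": {"test_result"},
}

def validate_link(source_type, target_type):
    """Return True if the traceability model permits this relationship."""
    return target_type in RULES.get(source_type, set())
```

Every link created during development can be checked against the model, and the same map defines which relationships are expected when monitoring for gaps later.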

 

Step 2: Set Up Continuous Sync for Siloed Tools/Spreadsheets

Once the relationship rules are defined, the next step is to set up continuous sync with best-of-breed tools and spreadsheets used by the various engineering disciplines. The traceability diagram below shows a typical example of best-of-breed tools and where they sync in the Jama Connect relationship model to deliver Live Traceability.

CMMI Relationship JIRA chart
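One pass of such a continuous sync can be sketched generically. This is a hedged sketch under assumptions: `source_fetch` and `push_update` stand in for real tool adapters (for example, a Jira REST client), which are not shown:

```python
def sync(source_fetch, target_index, push_update):
    """One pass of a continuous sync: upsert every source item whose payload
    differs from what the target last saw."""
    for item in source_fetch():
        if target_index.get(item["id"]) != item:
            push_update(item)                 # write into the target tool
            target_index[item["id"]] = item   # remember the last-pushed state

# In-memory stand-ins for the adapters, to show the flow
updates = []
index = {"US-1": {"id": "US-1", "title": "Login"}}
source = lambda: [{"id": "US-1", "title": "Login v2"}, {"id": "US-2", "title": "Logout"}]
sync(source, index, updates.append)   # pushes the changed story and the new one
```

Running the same pass again pushes nothing, which is what makes the sync safe to schedule continuously.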

Most companies prioritize the areas of the traceability model that are most prone to lead to costly issues in the absence of a continuous sync. Most commonly, these areas are:

  • Software task management – directly linking the decomposition of requirements into user stories enables Live Traceability through the software development process, including testing and defect management.
  • Test automation – test cases are managed in Jama Connect to align with requirements and ensure traceability across all engineering disciplines, with test automation results synced to the traceability model at the verification step.
  • Risk analysis (DFMEA/FMEA) – risk analysis is most often conducted in multiple Microsoft Excel spreadsheets, and the assumption has been that Live Traceability was not possible with Excel. Jama Connect is the first requirements management solution to enable Live Traceability with Excel functions and spreadsheets. Risk teams can now work in their preferred spreadsheets AND, for the first time, achieve Live Traceability to stay in sync with changes made by any engineering team.
  • Model-based systems engineering (MBSE) – the first step in MBSE is to define a relationship model between all product requirements. Once a relationship model is defined, specifications can be determined through modeling. Jama Connect uniquely provides model-based requirements that sync logically with a SysML modeling tool such as Cameo (No Magic).

RELATED: Traceability Matrix 101: Why It’s Not the Ultimate Solution for Managing Requirements


Step 3: Monitor for Exceptions

Live Traceability provides the ability, for the first time, to manage by exception the end-to-end product development process across all engineering disciplines. The traceability model defines expected process behavior that can be compared to actual activity to generate exceptions. These exceptions are the early warning indicators of issues that most often lead to delays, cost overruns, rework, defects, and recalls. Below is a sample exception management dashboard in Jama Connect.

traceability exception dashboard in jama connect
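Generating those exceptions is a comparison of expected process behavior against actual activity. A minimal sketch, where the `EXPECTED` model and item shapes are illustrative assumptions rather than Jama Connect's internals:

```python
# Illustrative expected-relationship model: each element type and the link
# kinds it must have
EXPECTED = {"requirement": {"test_case"}, "test_case": {"requirement"}}

def exceptions(items):
    """Compare actual links against the expected model and yield one
    exception per missing relationship kind, ready for a dashboard."""
    for item_id, (kind, linked_kinds) in items.items():
        for missing in sorted(EXPECTED.get(kind, set()) - linked_kinds):
            yield (item_id, f"missing {missing} link")
```

Run continuously, this turns the traceability model into an early-warning system: the dashboard shows only the items that deviate from the defined process.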

 

This has been part one of a two-part blog series overviewing our recent whitepaper, “How to Achieve Higher Levels of the Capability Maturity Model Integration (CMMI) with Live Traceability™” Stay tuned for part two and click HERE to read the entire thing.


MOSA


A Nod To MOSA: Deeper Documenting of Architectures May Have Prevented Proposal Loss

Lockheed loses contract award protest in part due to insufficient Modular Open Systems Approach (MOSA) documentation.

On April 6th, the GAO denied the Sikorsky-Boeing team's protest of the Army's tiltrotor award to the Bell (Textron) team. The program, the Future Long-Range Assault Aircraft (FLRAA), is intended to replace the Black Hawk helicopter. Reading the GAO's decision, it is apparent that a high degree of importance was placed on using a Modular Open Systems Approach (MOSA) as an architecture technique for design and development. For example, the decision notes, “…[o]ne of the methods used to ensure the offeror’s proposed approach to the Future Long-Range Assault Aircraft (FLRAA) weapon system meets the Army’s MOSA objectives was to evaluate the offeror’s functional architecture.” Sikorsky failed to “allocate system functions to functional areas of the system” in sufficient detail, down to the subsystem level as recommended by the MOSA standard, which is why the engineering portion of its proposal was rated Unacceptable.

MOSA will enable aerospace products and systems providers not only to demonstrate conformance to MOSA standards for their products, but also to deliver additional MOSA-conformant products and variants more rapidly. By designing for open standards from the start, organizations can create best-in-class solutions while allowing the acquirer to realize cost savings and avoidance through reuse of technology, modules, or elements from any supplier across the acquisition lifecycle.

Examining MOSA

What is a Modular Open Systems Approach (MOSA)?

A Modular Open Systems Approach (MOSA) is a business and technical framework used to develop and acquire complex systems. MOSA emphasizes the use of modules designed to work together to create a system that is interoperable, flexible, and upgradeable. To do this, MOSA focuses on modular interface commonality, with the intent to reduce costs and enhance sustainability.

More specifically, according to the National Defense Industrial Association (NDIA), “MOSA is seen as a technical design and business strategy used to apply open system concepts to the maximum extent possible, enabling incremental development, enhanced competition, innovation, and interoperability.”

Further, on January 7, 2019, the U.S. Department of Defense (DoD) issued a memo, signed by the Secretaries of the Army, Air Force, and Navy, mandating the use of the Modular Open Systems Approach (MOSA). The memo states that “MOSA supporting standards should be included in all requirements, programming and development activities for future weapon system modifications and new start development programs to the maximum extent possible.”

In fact, this mandate for MOSA is even codified into a United States law (Title 10 U.S.C. 2446a.(b), Sec 805) that states all major defense acquisition programs (MDAP) are to be designed and developed using a MOSA open architecture.

MOSA has become increasingly important to the DoD, where complex systems such as weapons platforms and communication systems require a high level of interoperability and flexibility. The main objective is to ensure systems are designed with highly cohesive, loosely coupled, and severable modules that can be competed separately and acquired from independent vendors. This allows the DoD to acquire systems, subsystems, and capabilities with an increased level of flexibility and competition compared to previous proprietary programs. However, MOSA can also be applied to other industries, such as healthcare and transportation, where interoperability and flexibility are also important considerations.

The basic idea behind MOSA is to define architectures composed of smaller, more manageable modules that can be developed, tested, and integrated independently. Each module is designed around a standard interface, allowing it to work with other modules and to be easily replaced or upgraded.
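The swap-a-module idea can be illustrated in miniature. This is a sketch under assumptions: the radio interface and vendor names are hypothetical, and nothing here comes from an actual MOSA standard:

```python
from typing import Protocol

class RadioModule(Protocol):
    """A published, standard interface: any vendor module that implements
    it can be swapped in without changing the rest of the system."""
    def transmit(self, data: bytes) -> None: ...
    def receive(self) -> bytes: ...

class VendorARadio:
    """One vendor's implementation; a competitor's module exposing the
    same interface could replace it."""
    def __init__(self) -> None:
        self._buf = b""

    def transmit(self, data: bytes) -> None:
        self._buf = data

    def receive(self) -> bytes:
        return self._buf

def loopback_check(radio: RadioModule) -> bytes:
    # The integrator codes against the interface, not the vendor class
    radio.transmit(b"ping")
    return radio.receive()
```

Because the integration code depends only on the interface, acquiring a replacement module from a different vendor requires no change to the surrounding system, which is precisely the competition and upgradeability MOSA is after.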


RELATED: Streamlining Defense Contract Bid Document Deliverables with Jama Connect®


The DoD requires the following to be met to satisfy a MOSA architecture:

  • Characterize the modularity of every weapons system — this means identifying, defining, and documenting system models and architectures so suppliers will know where to integrate their modules.
  • Define software interfaces between systems and modules.
  • Deliver the interfaces and associated documentation to a government repository.

And, according to the National Defense Authorization Act for Fiscal Year 2021, “the 2021 NDAA and forthcoming guidance will require program officers to identify, define, and document every model, require interfaces for systems and the components they use, and deliver these modular system interfaces and associated documentation to a specific repository.” In practice, this means program offices must:

  • Modularize the system
  • Specify what each component does and how it communicates
  • Create interfaces for each system and component
  • Document and share interface information with suppliers

MOSA implies the use of open standards and architectures, which are publicly available and can be used by anyone. This helps to reduce costs, increase competition, and encourage innovation.

Why is MOSA important to complex systems development?

MOSA, an important element of the national defense strategy, is important for complex systems development because it provides a framework for developing systems that are modular, interoperable, and upgradeable. Here are some reasons why MOSA is important:

  • Interoperability: MOSA allows different components of a system to work together seamlessly, even if they are developed by different vendors or organizations. This means that the system can be upgraded or enhanced without having to replace the entire system.
  • Flexibility: MOSA promotes the use of open standards and architectures, which allows for greater flexibility in system development. It also allows for more competition among vendors, which can lead to lower costs and better innovation.
  • Cost-effectiveness: MOSA can reduce costs by allowing organizations to reuse existing components or develop new components that can be integrated into existing systems. It can also reduce the cost of maintenance and upgrades over the lifecycle of the system.
  • Futureproofing: MOSA allows for systems to be upgraded or modified over time, as new technology becomes available. This helps to future-proof the system, ensuring that it can adapt to changing needs and requirements.

RELATED: Digital Engineering Between Government and Contractors


How can Live Traceability™ in Jama Connect® help with a MOSA?

Live Traceability™ in Jama Connect® can help with MOSA by providing mechanisms to establish traces between MOSA architecture elements and interfaces, and the requirements and verification & validation data that support them. Live Traceability is the ability to track and record changes to data elements and their relationships in real time. This information can be used to improve documentation of the system design, identify potential issues, and track changes over time.

Here are some specific ways that Live Traceability can help with MOSA:

  • Status monitoring: Live Traceability allows systems engineers to monitor the progress of architecture definition in real-time, identifying issues from a requirements perspective as they arise. This can help to increase efficiency and ensure that the stakeholders are aware of changes as they occur.
  • Digital Engineering: Live Traceability can help with digital engineering by providing mechanisms to capture architectures, requirements, risks, and tests including the traceability between individual elements.
  • Configuration and Change Management: Live Traceability can help with change management by tracking changes to system architectures and interfaces including requirements that are allocated to them. This can help to ensure that changes are properly documented and that they do not impact other parts of the system. Baselining and automatic versioning enable snapshots in time that represent an agreed-upon, reviewed, and approved set of data that have been committed to a specific milestone, phase, or release.
  • Testing and Validation: Live Traceability can help with verification and validation to ensure that the system meets specified requirements and needs. This can help reduce risk by identifying issues early in the development process and ensuring that the system meets its requirements.
  • Future-proofing: Live Traceability can help to future-proof the system by providing a record of system changes and modifications over time. This can help to ensure that the system remains flexible and adaptable to changing needs and requirements.

In summary, Live Traceability in Jama Connect can help with MOSA by providing real-time visibility into the traceability between architectures, interfaces, and requirements. It helps improve documentation of the system design, identify potential issues, and track changes over time, all of which are important considerations for MOSA.



Software Validation

This is part two of a two-part series on software validation and computer software assurance in the medical device industry.

Practical Guide for Implementing Software Validation in Medical Devices: From FDA Guidance to Real-World Application – Part 2

In our previous blog post, we reviewed the top things to know about software validation and computer software assurance in the medical device industry. In this installment, we’ll take a closer look at computer software validation and provide tips and tools to manage your software in a compliant and efficient manner.

Main points

The FDA Draft Guidance on Computer Software Assurance

In September 2022, the FDA released its draft guidance “Computer Software Assurance for Production and Quality System Software.” While still in draft form, the final version of most guidance typically mirrors the draft. The 2022 draft supplements the FDA's 2002 guidance on software validation, except that it will supersede Section 6 (“Validation of Automated Process Equipment and Quality System Software”). In this guidance, the FDA uses the term computer software assurance and defines it as a “risk-based approach to establish confidence in the automation used for production or quality systems.”

There are many types of software used and developed by medical device companies, including those listed below. The 2022 draft guidance is scoped to software used in production and in quality systems, as highlighted below.

  • Software in a Medical Device (SiMD) – Software used as a component, part, or accessory of a medical device;
  • Software as a Medical Device (SaMD) – Software that is itself a medical device (e.g., blood establishment software);
  • Software used in the production of a device (e.g., programmable logic controllers in manufacturing equipment);
  • Software in computers and automated data processing systems used as part of medical device production (e.g., software intended for automating production processes, inspection, testing, or the collection and processing of production data);
  • Software used in implementation of the device manufacturer’s quality system (e.g., software that records and maintains the device history record);
  • Software in the form of websites for electronic Instructions for Use (eIFUs) and other information (labeling) for the user.

RELATED: Understanding Integrated Risk Management for Medical Devices


Understanding Your Software’s Intended Use and Risk-Based Approach

Defining the software’s intended use is an important aspect of managing your organization’s computer software assurance activities.

This then allows you to analyze and document the impact on safety risk if the software fails to perform its intended use. One aspect I appreciate the FDA adopting is the concept of ‘high process risk’: when failure of the software to perform as intended may result in a quality problem that foreseeably compromises safety and increases medical device risk. The guidance includes a number of examples illustrating what is, and is not, high process risk. Previously, risk that was purely a compliance risk (i.e., no process risk) was essentially treated the same as risk that could compromise safety.

Commensurate with the level of process risk, guidance and examples are presented outlining expected computer software assurance activities, including various levels of testing and documentation. Computer software assurance activities for software that poses a high level of process risk include documentation of the intended use, risk determination, a detailed test protocol, a detailed report of the testing performed, pass/fail results for each test case, and any issues found and their disposition, among others.

In contrast, the guidance provides that computer software assurance activities for software that poses no process risk can consist of a lower level of testing, such as unscripted ad-hoc or error-guessing testing. Prior to this guidance, the expectation was fully scripted protocols and documented results for each test case, which felt burdensome. For example, one had to script out protocol steps, including user log-in steps, for an electronic QMS module that facilitated the nonconformance process and did not have a high level of process risk. The concept of high process risk, and the acknowledgment that unscripted testing can be appropriate when risk is low, will certainly help lessen the burden of compliance without compromising safety.

Managing Your Software Efficiently

For those who think analytically, like me, one can easily see the value of a Trace Matrix to keep an organization's software organized and to ensure the intended use, risk assessment, planned computer software assurance activities, and outcomes are documented.

Similar to how it efficiently traces your medical device design inputs to outputs and links to your risk management, Jama Connect® is a great tool to also trace and manage all your software and software validation and computer software assurance activities. This includes documentation of the intended use, risk determination, and test protocols and reports performed. With its new validated cloud offering, SOC2 certification, and available Jama Connect Validation Kit, Jama Software also provides the tools and evidence you need to meet your organization’s computer software assurance activities.


RELATED: Jama Connect® for Medical Device Development Datasheet


Closing

Developing a risk-based process for software management, including software validation and computer software assurance, is key to staying compliant. Staying organized and using a tool like Jama Connect helps you do so efficiently.

To read part one of this blog, click HERE.


Aerospace & Defense

In this blog, we recap the “Launch Your Aerospace & Defense Product Development Processes with Jama Connect®” webinar.


In this webinar, we discuss exciting new features in our updated Jama Connect® for Aerospace & Defense framework. These updates include refreshed solutions for cybersecurity, the DoD Range Safety Requirements Library, and other libraries of standards.

Also, Cary Bryczek, Solutions Director for Aerospace & Defense at Jama Software®, shares best practices in the Jama Connect platform and demonstrates significant new features that can help you further enhance your aerospace and defense product development processes, including:

  • ARP 4761A – Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment
  • DO-326A – Airworthiness Security Process Specification
  • US CFR Parts 21-57 Pre-imported Libraries and Usage
  • Defense MBSE and Digital Engineering Guidance
  • NASA and Air Force Range Safety Requirements
  • European Cooperation for Space Standardization (ECSS) Pre-Imported Libraries

Below is an abbreviated transcript and a recording of our webinar.


Launch Your Aerospace & Defense Product Development Processes with Jama Connect®

Cary Bryczek: Let’s get started. So the Airborne Systems Solution. So when we say solution, it’s really a complete set of frameworks, example projects, and the procedural documentation that goes along with that. It’s really intended to accelerate your implementation of Jama Connect, especially for those that are developing Airborne Systems and the Airborne Systems components that are going to live on these aircraft. When you utilize these frameworks, you can have essentially zero setup time, since we’ve developed the solution to align with the standards, and you can also tailor it. So the consultant who teams with you can help you tailor it to meet your very specific business needs as well. So it’s really designed for any organization, whether you’re a startup in the Airborne Systems world or whether you’re a longtime developer of aircraft.

The Airborne Systems Solution is really designed to help you ease the path to regulatory compliance, to help the engineers produce and collect the evidence in coordination with the regulatory requirements and the industry standards that define the acceptable means of compliance. In today’s world, there are a lot of new engineers being employed in Airborne Systems development. And really, this particular template is helpful to them because it gets them to understand “How am I supposed to do development?” We all know that Airborne Systems development has the most onerous and rigorous standards of any industry. And teaching our new engineers is very time-consuming. So having this template with all of the guidance built into it, and the procedure guide, really helps our new engineers get started.

So there’s three components to the Airborne Systems Solution that what we call the data set, a procedure guide, and the success program. The data set essentially is what you get when you install Jama and it has the templates, it has a ready to use configuration that matches those regulations. It has all of the item types, all of the reports, all of the best practices built right in. And then the procedure guides and the documentation of the reports essentially show you how the Airborne Systems template is meeting the industry standards. So how does it meet ARP4754, how do you use the solution to meet DO-178. That’s sort of a thing.


RELATED: Jama Software® Delivers Major Enhancements to the Jama Connect® for Airborne Systems Solution


Bryczek: And then we also pair our solution with specific consulting. So our consultants already are very familiar with the regulations with working with our customers that have been delivering and developing Airborne Systems already, as well as systems engineering best practices. Some of our customers have interesting supply chain needs. And so they might want to use an additional tool that we package called data exchange. That’s just an add-on to the solution.

So when we look at the framework itself, there are a lot of industry standards that we support. These industry standards are the acceptable means of compliance that the FAA and EASA recognize in order to meet type certifications. So we have the processes that come right out of those standards built right into the framework. That framework consists of specially configured item types, pick lists, and views of that information. Our relationship rules are aligned to the types of trace matrices these particular standards call for. We have workflows and guidance for how you conduct reviews of information as well. We have the libraries of standards, so if you need to comply with the different CFR parts, we actually have those pre-imported. This is something new that we’ve added, and we’ll talk about that a little bit more. The framework includes these document export templates as well as risk templates, analysis templates, and more.

Now this is accompanied by a procedure guide. So along with the template itself in Jama, we give you the procedure guide. You can take this guide and tailor it to meet your specific needs as well. This procedure guide is kept updated, so as a subscriber to the Airborne Systems Solution, any updates we make or new releases, like what we have right now, are included with your subscription. It just makes it easy for everyone to understand “How do I use Jama if I need to meet these industry standards?”


RELATED: Digital Engineering Between Government and Contractors


Bryczek: Also with this particular release is the configuration and update guide. So this is new this time around. This particular guide gives a very detailed description of the entire dataset. It includes all of the types that we’ve defined, all of the pick lists that are defined, all of the relationship rules, all of the workflows. So if you need to update from your existing Airborne Systems Solution and take in aspects of the new release, it makes it really easy for you guys to update as well. This might be something as well… So if you tailor from your existing Jama solution and you want to keep track of that, something like this might be a really great way for you to document your own implementation of Jama itself.

So exciting. This is one of the new things. So for cybersecurity, we have DO-326A, which is an acceptable means of compliance for doing security analyses. There are a significant number of new item types that have been added to the solution that comprise our cybersecurity solution, as well as guidance for how you really do the airworthiness security analysis. Essentially there are seven steps to this particular type of analysis. It really starts with developing your PSecAC. And for those of you who are maybe new to Airborne Systems development or are not familiar with DO-326 or cybersecurity, it is a process that’s done in tandem with both the system development and safety. But this is different in that this is analyzing intentional unauthorized electronic interaction. So it’s really designed to find ways that hackers or bad actors might be accessing parts of the Airborne Systems that you don’t want them to.

To watch the entire webinar, visit
Launch Your Aerospace & Defense Product Development Processes with Jama Connect®


Software Validation, Medical Device

Practical Guide for Implementing Software Validation in Medical Devices: From FDA Guidance to Real-World Application – Part I

Intro

This is Part 1 of a 2-part series on software validation and computer software assurance in the medical device industry.

While it is clear that software validation is required by regulation in the US and elsewhere (e.g., in the EU (European Union), as regulated by the MDR and IVDR), how to execute it continues to cause challenges, both for established medical device companies and for those just entering the medical device industry.

Between the different types of software, variations in terminology, the type and source of software (developed in-house, purchased OTS, customized OTS (COTS), SOUP, etc.), advances in software technology, and the evolving guidance of the FDA (Food and Drug Administration) and other regulatory bodies, it’s no wonder that implementation of software validation practices and procedures causes confusion.

This blog outlines the top things to know about software validation and computer software assurance as you implement practices and procedures for your organization in a way that is compliant and brings value.

Are you building or updating your software validation practices and procedures? If so, read on!

Top Things to Know About Software Validation and Computer Software Assurance

#1. Yes, there are different terms, methods, and definitions for software validation.

For the purposes of this blog, we’ll use the FDA’s definition of software validation, from their 2002 guidance. The FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”

At a high level, this makes sense. The confusion starts when folks try to define how that confirmation is performed and documented. How do I determine and document the requirements? How detailed do I need to get with my user needs and intended uses? For each feature? What kind of objective evidence? What if I’m using software to automate test scripts? Do I have to qualify the testing software? Turning to guidance and standards for a “standard” set of practices can add to the confusion. Even within just the medical device industry, there are multiple regulations and standards that use similar, yet slightly different, concepts and terminology. Examples include the IQ/OQ/PQ (Installation Qualification / Operational Qualification / Performance Qualification) analogy from process validation, black box testing, and unit testing, just to name a few.

Before getting overwhelmed, take a breath and read on to point #2.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


#2. Start with the regulations and standards.

While the multiple regulations and standards around software validation cause confusion, they are also a good place to start. I say that because, at a high level, they are all trying to achieve the same thing: software that meets its intended use and maintains a validated state. Keeping the intent in mind can make it easier (at least it does for me) to see the similarities in the lower-level requirements despite any terminology differences, and not be as focused on making all the terminology match.

To start, select the regulations and guidance from one of your primary regulatory jurisdictions (like the FDA for the US). In the US, three main FDA guidance documents to incorporate are: 1) General Principles of Software Validation; Final Guidance for Industry and FDA Staff, issued in 2002; and 2) Part 11, Electronic Records; Electronic Signatures – Scope and Application, issued in 2003.

The 3rd guidance is relatively new: a draft guidance released in September 2022, Computer Software Assurance for Production and Quality System Software. While it is still in draft form, the final form of most guidance typically mirrors the draft document. The 2022 guidance supplements the 2002 guidance, except that it will supersede Section 6 (“Validation of Automated Process Equipment and Quality System Software”). It is also in this guidance that the FDA uses the term computer software assurance and defines it as a “risk-based approach to establish confidence in the automation used for production or quality systems.”

Once you’ve grounded yourself in one set, then you can compare and add on, as necessary, requirements for other regulatory jurisdictions. Again, focus on specific requirements that are different and where the high-level intent is similar. For example, in the EU, Regulation (EU) 2021/2226 outlines when instructions for use (IFUs) may be presented in electronic format and the requirements for the website and eIFUs presented.

#3. Start on the intended use and make your software validation and computer software assurance activities risk based.

Start by documenting the intended use of the software and the associated safety risk if it were to fail. Then define the level of effort and the combination of software validation activities commensurate with that risk. Software and software features that would result in severe safety risk if they fail should be validated more rigorously and have more software assurance activities than software that poses no safety risk.
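As a rough illustration of this risk-commensurate approach, a mapping from risk level to planned assurance activities might look like the sketch below. The levels and activity names are invented for illustration; they are not taken from the FDA guidance:

```python
# Hypothetical mapping from a feature's safety-risk level to the set of
# assurance activities planned for it; higher risk gets more rigor.
RISK_ACTIVITIES = {
    "high": ["scripted testing", "detailed test protocols and reports"],
    "moderate": ["unscripted exploratory testing", "summary test record"],
    "low": ["vendor assessment", "record of acceptance"],
}

def assurance_activities(safety_risk: str) -> list[str]:
    """Return the planned assurance activities for a given risk level."""
    try:
        return RISK_ACTIVITIES[safety_risk]
    except KeyError:
        raise ValueError(f"Unknown risk level: {safety_risk!r}")

print(assurance_activities("moderate"))
```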

Here are some examples of intended use and the associated safety risk.

Example 1: Jama Connect®, Requirements Management Software

Intended Use: The intended use of Jama Connect is to manage requirements and the corresponding traceability. The following design control aspects are managed within Jama Connect: user needs, design inputs, and traceability to design outputs and verification and validation activities. Risk analysis is also managed in Jama Connect.

Feature 1 Intended Use: Jama Connect provides visual indicators to highlight breaks in traceability. For example, when a user need is not linked to a design input, or vice versa.

Risk-based analysis of Feature 1: Failure of the visual indicator could result in not establishing full traceability, or in missing the establishment of a design control element like a design input. This risk is considered moderate because a manual review of the traceability matrix is also performed, as required by the Design Control SOP. Reports are exported from Jama Connect as PDFs, reviewed outside the software, and then approved per the document control SOP.
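The kind of check behind such a traceability indicator can be sketched in a few lines. The item IDs and link model below are hypothetical, for illustration only, not Jama Connect’s actual data model:

```python
# Sketch of a traceability gap check: flag user needs with no linked
# design input, and design inputs with no upstream user need.
user_needs = ["UN-1", "UN-2", "UN-3"]
design_inputs = ["DI-1", "DI-2"]
links = {("UN-1", "DI-1"), ("UN-2", "DI-2")}  # (user need, design input)

# A user need is unlinked if it never appears as a link source.
unlinked_needs = [un for un in user_needs
                  if not any(src == un for src, _ in links)]
# A design input is orphaned if it never appears as a link target.
orphan_inputs = [di for di in design_inputs
                 if not any(dst == di for _, dst in links)]

print("User needs missing a design input:", unlinked_needs)   # ['UN-3']
print("Design inputs missing a user need:", orphan_inputs)    # []
```

Here UN-3 would get the visual indicator; the manual matrix review described above acts as the backstop if that indicator ever fails.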


RELATED: Traceability Score™ – An Empirical Way to Reduce the Risk of Late Requirements


Example 2: Embedded software in automated production equipment

Intended use: The intended use of the software is to control production equipment designed to pick and place two components and weld them together.

Risk-based analysis: This is a critical weld that affects patient safety if not performed to specification. Thus, the software is considered high risk.

#4. Software Validation and computer software assurance is just one part of the software life cycle… you need to be concerned about the whole lifecycle.

There is more to software development and management than just validation. Incorporate how custom software will be developed, how purchased software will be assessed to determine the appropriate controls based on risk (including verification and validation activities), and how software revisions will be controlled.

#5. Have different procedures and practices for the different types of software.

This is a good time to consider how the different types of software in your organization will be managed, and it’s not a one-size-fits-all approach. A best practice is to have separate practices and procedures: one for software in a medical device (SiMD) and software as a medical device (SaMD), and at least one other procedure and set of practices for other software, such as software used in the production of a device, software in computers and automated data processing systems used as part of medical device production, or software used in the implementation of the device manufacturer’s quality system.

Closing

Stay tuned for Part 2 of this 2-part blog series, where we’ll dive deeper into computer software assurance, highlight the risk-based approach, and provide tips and tools to manage your software in a compliant and efficient manner.