Decoding The FDA’s Draft Guidance On Computer Software Assurance For Medical Devices & Bio/Pharma
The current state of validation is seen as a hindrance to faster deployments, with its emphasis on exhaustive documentation rather than on building systems that align effectively with their intended use.
A risk-based approach to validation has been around for some time. However, life sciences companies have been challenged with identifying software risks and the desired level of validation effort. Simultaneously, medical device manufacturers have expressed a desire for greater clarity regarding the FDA’s expectations for software validation.
In a rapidly evolving landscape of technology and regulation, the FDA released a draft guidance on computer software assurance in 2022 that promises to reshape the validation of automated data processing system and quality system software in the pharma/medical device industry and to enhance the quality, availability, and safety of medical devices. In this article, I will walk you through the key elements of the guidance, providing valuable insights for professionals navigating the complexities of automated processes and quality system software.
Guidance Supersedes Section 6 Of Software Validation Guidance Of 2002
The forthcoming guidance is set to supersede Section 6 of the general principles of software validation guidance from 2002, signaling a paradigm shift in the approach to validating automated data processing system and quality system software. This guidance provides crucial recommendations applicable to the requirements of 21 CFR 820.70(i), focusing on automated processes integral to production and quality systems.
Understanding The Regulatory Scope
The guidance emphasizes the necessity for manufacturers to validate software used in production or the quality system for its intended use. However, it explicitly excludes software as a medical device (SaMD) or software in a medical device (SiMD) from its scope. The document prompts manufacturers to thoroughly assess whether the regulatory requirement applies to their specific software.
A central theme revolves around a risk-based approach, urging manufacturers to delve into the intended use of individual features, functions, and operations within their software. The guidance recognizes the complexity of software used in production or the quality system, often comprising multiple intended uses. It encourages manufacturers to conduct different assurance activities tailored to these specific elements based on a meticulous risk assessment.
The guidance outlines the components of a robust record of assurance activities, stressing the need for objective evidence. It recommends capturing the intended use, risk determination, details of assurance activities conducted, issues found, and a conclusion statement declaring the acceptability of results.
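As an illustration (not a format the guidance prescribes), a team could capture these record elements as structured data rather than free-form documents. The sketch below is a minimal example; the field names and sample values are assumptions for this sketch, not FDA-mandated terminology.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssuranceRecord:
    """Illustrative structure for documenting a CSA assurance activity.

    Field names are assumptions for this sketch; they mirror the elements
    the draft guidance says a record should capture.
    """
    feature: str                     # software feature, function, or operation
    intended_use: str                # what the feature is used for
    risk_determination: str          # e.g., "high process risk with device risk"
    assurance_activities: List[str]  # what was done (vendor leverage, scripted, unscripted)
    issues_found: List[str] = field(default_factory=list)
    conclusion: str = ""             # statement on acceptability of results

# Hypothetical example record for a single feature.
record = AssuranceRecord(
    feature="Electronic batch record e-signature",
    intended_use="Approve completed batch records in the quality system",
    risk_determination="High process risk with potential device risk",
    assurance_activities=["Vendor test evidence reviewed", "Robust scripted testing of signing workflow"],
    issues_found=[],
    conclusion="Results acceptable; feature performs as intended.",
)
print(record.conclusion)
```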
The guidance distinguishes between process risks and medical device risks. Process risks pertain to potential compromises in production or the quality system, while medical device risks focus on the potential harm to patients or users. The document emphasizes the FDA’s concern for software features, functions, and operations that pose both high process risk and a consequential medical device risk, aligning assurance activities with the severity of potential issues.
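To make that alignment concrete, here is a minimal sketch, using risk categories and rigor labels of my own choosing, of how a team might translate a feature's process risk and potential device risk into a level of testing effort; the guidance leaves this judgment to the manufacturer and does not prescribe this mapping.

```python
def suggested_assurance(process_risk_high: bool, device_risk_possible: bool) -> str:
    """Illustrative mapping from risk determination to assurance rigor.

    The categories below are assumptions for this sketch; the draft guidance
    asks manufacturers to make this determination based on intended use.
    """
    if process_risk_high and device_risk_possible:
        # Failure could compromise production/quality AND could lead to patient or user harm.
        return "robust scripted testing"
    if process_risk_high:
        # Failure compromises the process but is unlikely to reach the patient or user.
        return "limited scripted testing plus targeted unscripted testing"
    # Lower-risk features can lean on unscripted testing and existing process controls.
    return "unscripted testing (ad hoc, error guessing, exploratory)"

print(suggested_assurance(process_risk_high=True, device_risk_possible=True))
```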
Manufacturers are encouraged to leverage existing process controls throughout production, particularly for lower-risk software features. The guidance emphasizes the importance of data and information collected by the software for continuous monitoring and issue detection post-implementation. It highlights the use of computer system validation tools, iterative testing cycles, and continuous monitoring as integral elements of a comprehensive assurance approach.
RELATED: Jama Connect® for Medical Device & Life Sciences Development Datasheet
Establishing The Appropriate Testing Methods
Under computer system validation (CSV), the FDA has always recommended leveraging vendor documentation; under computer software assurance (CSA), the FDA strongly recommends leveraging all available vendor documentation and performing scripted or unscripted testing only for the portions not already covered by vendor testing.
The FDA introduced new nomenclature for testing methods in CSA, scripted testing and unscripted testing, adopted from ISO/IEC/IEEE 29119-1 (first edition, 2013-09-01), Software and systems engineering – Software testing – Part 1: Concepts and definitions, Section 4.94, to stay aligned with current software testing practices and standards.
The terms IQ, OQ, and PQ come from the original general principles of software validation guidance. The discussion at that time emphasized that IQ, OQ, and PQ, while relevant from a process and process-validation perspective, may not be directly applicable to software validation. This does not mean the terms are irrelevant or inapplicable. Manufacturers have always had the freedom to structure their processes to meet the requirements of their quality system or their business objectives. The use of these terms is optional; if they provide clarity for the organization, it is free to adopt them. However, it has never been explicitly stated that these terms are crucial or necessary in the context of software validation.
Now, let’s dive into what unscripted testing and scripted testing are in terms of current software testing and how we can adapt to CSA activities.
Unscripted Testing
Unscripted testing is a software testing approach characterized by the absence of predefined test scripts or detailed test cases.
For context, current software testing practice suggests that unscripted testing needs little or no documentation, but regulated companies still need a minimum level of documentation. You are still laying out objectives that need to be exercised, accomplished, or captured in some way, shape, or form. Within that context, there is a lot of flexibility in developing the established protocol required by 21 CFR 820.70(i), which states, “When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol.”
Unscripted testing is divided into three types:
- Ad hoc testing: Ad hoc testing2 is an informal and unstructured type of software testing that aims to break the testing process in order to identify potential defects or errors at the earliest possible stage. It is typically unplanned and does not follow any documentation or test design techniques to formulate test cases. It tests features and functions with no test plan.
- Error guessing: Error guessing3 is a testing technique in which the tester uses their experience and expertise to speculate about potential problem areas within the application. This method requires a skilled and experienced tester. It tests failure modes with no test plan.
- Exploratory testing: Exploratory testing4 is a manual software testing technique conducted without a formal plan, allowing testers to move beyond repetitive, monotonous scripted routines and apply their skills creatively. Successful exploratory testers need critical thinking, creativity, and strong domain and technical knowledge.
While exploratory testing may seem unplanned, it isn’t random. It involves applying knowledge and expertise. Deep knowledge of the system under test is crucial for effective exploratory testing.
Establish high-level test plan objectives (no step-by-step procedure is necessary); a minimal session-record sketch follows the list of benefits below. Benefits of exploratory testing include:
- Identifying edge cases and unexpected defects that scripted testing might overlook.
- Testing from a user perspective to enhance user experience and usability.
- Encouraging critical thinking among testers, preventing monotony, and improving software quality.
- Increasing test coverage by exploring various scenarios and uncovering new defects.
- Testing software in its early development stages to catch bugs early, even without formalized, scripted tests.
- Providing flexibility to try new testing techniques, contributing to overall testing improvement.
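For regulated use, a lightweight session record can capture the charter and what was actually exercised without turning exploratory testing into a script. The sketch below is illustrative only; the structure, field names, and sample values are assumptions, not a required format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ExploratorySession:
    """Minimal record of an exploratory testing session (illustrative only)."""
    charter: str                 # high-level objective, not a step-by-step procedure
    tester: str
    session_date: date
    areas_explored: List[str] = field(default_factory=list)
    observations: List[str] = field(default_factory=list)  # defects, usability notes
    conclusion: str = ""

# Hypothetical session for a label-printing module.
session = ExploratorySession(
    charter="Explore label-printing module behavior with unusual batch sizes",
    tester="QA Analyst",
    session_date=date(2024, 1, 15),
    areas_explored=["zero-quantity batch", "maximum label count", "printer offline"],
    observations=["No defects; warning message unclear when printer is offline"],
    conclusion="Objectives met; follow up on warning wording.",
)
print(session.charter)
```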
Scripted Testing
Scripted testing refers to a software testing approach where the tester follows a predefined set of written instructions or scripts during the execution of test cases. Scripted testing includes both robust and limited scripted testing.
1: Robust scripted testing
This method of testing emphasizes ensuring that the testing process is not only thorough but also capable of being repeated consistently, traces back to defined requirements, and can be audited for transparency and accountability. The focus is on establishing a strong and reliable testing framework that contributes to the overall quality and reliability of the computer system or automation under examination. The test script should contain the following at a minimum (an illustrative automated example follows this list):
- test objectives
- test cases (step-by-step procedure)
- expected results
- independent review and approval of test cases
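For teams that automate their scripted tests, the pytest-style sketch below shows one way to encode the test objective, step-by-step procedure, and expected results directly in each test. The authenticate function and its behavior are hypothetical stand-ins for the feature under test, and the independent review and approval of test cases would still happen outside the code.

```python
# A minimal pytest-style sketch of a robust scripted test case.
# The system under test (authenticate) is a stand-in for illustration only.

def authenticate(username: str, password: str) -> bool:
    """Hypothetical function standing in for the feature under test."""
    return username == "qa_user" and password == "correct-password"


def test_login_rejects_invalid_password():
    """Test objective: verify the system rejects an invalid password.

    Steps (step-by-step procedure):
      1. Attempt to authenticate with a valid username and an invalid password.
    Expected result:
      Authentication fails (returns False).
    """
    assert authenticate("qa_user", "wrong-password") is False


def test_login_accepts_valid_credentials():
    """Test objective: verify the system accepts valid credentials.

    Steps:
      1. Authenticate with a valid username and the correct password.
    Expected result:
      Authentication succeeds (returns True).
    """
    assert authenticate("qa_user", "correct-password") is True
```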
2: Limited scripted testing
This method of testing customizes the testing strategy based on the risk profile, utilizing scripted testing for high-risk features or operations, while employing unscripted testing for low- to medium-risk elements. The goal is to create a balanced assurance effort that addresses varying levels of risk within the computer system or automation, optimizing testing resources accordingly. The test script should contain the following at a minimum (a sample risk-blended plan follows this list):
- test cases (step-by-step procedure) identified
- expected results for the test cases
- identification of the unscripted testing applied
- independent review and approval of test plan
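The sketch below illustrates one way to express such a blended plan as a simple data structure, with scripted test cases for the high-risk feature and named unscripted approaches for the lower-risk ones; the features, risk labels, and test case IDs are invented for illustration.

```python
# Illustrative limited-scripted test plan: scripted cases for the high-risk
# feature, named unscripted approaches for lower-risk items. Entries are assumptions.
test_plan = {
    "e-signature workflow": {
        "risk": "high",
        "method": "scripted",
        "test_cases": ["TC-001 sign record", "TC-002 reject invalid signature"],
        "expected_results": ["Record signed and locked", "Signature rejected with audit entry"],
    },
    "report export formatting": {
        "risk": "medium",
        "method": "unscripted",
        "approach": "exploratory testing",
    },
    "UI color theme": {
        "risk": "low",
        "method": "unscripted",
        "approach": "ad hoc testing",
    },
}

# Print a one-line summary per feature for review.
for feature, entry in test_plan.items():
    print(f"{feature}: {entry['risk']} risk -> {entry['method']}")
```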
Leverage Technological Advances For Automated Traceability Testing
The guidance acknowledges advances in digital technology and advocates for electronic records over manual or paper-based documentation for efficiency. It outlines the documentation expected for assurance activities, notes how digital technology can streamline that documentation, and recommends leveraging automated traceability testing and electronic records to reduce reliance on manual or paper-based records.
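As a sketch of automated traceability, the example below links test results to requirement IDs and prints a simple coverage report; the requirement IDs, test names, and statuses are invented for illustration, and in practice this data would come from your requirements and test management tools.

```python
# Minimal traceability sketch: map requirements to the tests that cover them
# and report coverage. All IDs and results below are illustrative.
test_results = [
    {"test": "test_login_rejects_invalid_password", "requirement": "REQ-001", "passed": True},
    {"test": "test_login_accepts_valid_credentials", "requirement": "REQ-001", "passed": True},
    {"test": "test_audit_trail_entry_created", "requirement": "REQ-002", "passed": True},
]
requirements = ["REQ-001", "REQ-002", "REQ-003"]

for req in requirements:
    covering = [r for r in test_results if r["requirement"] == req]
    if not covering:
        print(f"{req}: NOT COVERED")
        continue
    status = "PASS" if all(r["passed"] for r in covering) else "FAIL"
    print(f"{req}: {status} ({len(covering)} test(s))")
```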
Embrace A Risk-Based Approach
The FDA’s draft guidance on computer software assurance is a call for a risk-based approach to instill confidence in automation used for production or quality systems. The four-step approach involves identifying the intended use, determining a risk-based strategy, selecting appropriate assurance activities, and establishing a comprehensive record. The guidance also invites manufacturers to actively engage, provide comments, and seek clarity on this transformative document, which aims to harmonize technology and regulatory expectations in the ever-evolving medical device industry.
RELATED: Traceable Agile™ – Speed AND Quality Are Possible for Software Factories in Safety-critical Industries
Key Takeaways From The Draft Guidance
- Is the draft guidance only for medical device companies that use software as part of medical device production? No; it also applies to other software applications used as part of production or the quality system. This draft guidance was prepared by CDRH and CBER in consultation with CDER, the Office of Combination Products, and the Office of Regulatory Affairs. Specifically, it provides recommendations regarding the requirements outlined in 21 CFR 820.70(i).5
- This will supersede Section 6, “Validation of Automated Process Equipment and Quality System Software”, of the FDA’s software validation guidance, but it doesn’t replace “General Principles of Software Validation.”
- Leverage the testing already completed by vendors or any testing done as part of your SDLC; don’t repeat testing, and take credit for whatever has already been completed.
- CSA does not replace existing computer system validation (CSV); rather, CSA is a lean approach to CSV that leverages existing vendor documentation.6
- Using screenshots to establish the record associated with assurance activities is not necessary; you can use system logs, audit trails, and other electronic data generated by the system (see the sketch after this list).
- Regulated companies don’t have to wait until this CSA draft guidance becomes effective; they can start implementing CSA immediately, as per the FDA.
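As a sketch of relying on system-generated data rather than screenshots, the example below filters a hypothetical audit-trail export for entries created during a test window and prints them as objective evidence; the log format and field names are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical audit-trail export (format is an assumption for this sketch).
audit_trail = [
    {"timestamp": "2024-01-15T10:02:11", "user": "qa_user", "action": "Record signed"},
    {"timestamp": "2024-01-15T10:05:43", "user": "qa_user", "action": "Invalid signature rejected"},
    {"timestamp": "2024-01-15T14:30:00", "user": "operator1", "action": "Batch released"},
]

test_start = datetime(2024, 1, 15, 10, 0)
test_end = datetime(2024, 1, 15, 11, 0)

# Keep only entries generated during the test window as objective evidence.
evidence = [
    entry for entry in audit_trail
    if test_start <= datetime.fromisoformat(entry["timestamp"]) <= test_end
]
for entry in evidence:
    print(f"{entry['timestamp']} {entry['user']}: {entry['action']}")
```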
Conclusion
If implemented correctly, CSA has the potential to significantly impact the industry and business operations. It can lead to a substantial return on investment, reducing costs by 50% (in my experience) and saving both time and resources. Moreover, CSA enhances the overall quality process through the application of critical thinking.
This article reflects the author’s viewpoints, opinions, and personal experience, and does not necessarily reflect those of his company or shareholders.
About The Author:
Hemadri Doma is a seasoned life sciences professional with more than nine years of expertise in the pharmaceutical and medical device industry. He is a subject matter expert in computer systems validation (CSV), computer software assurance (CSA), data integrity, equipment validation, process automation, artificial intelligence, pattern recognition techniques, and facilities validation. He has served in roles spanning engineering, facilities, information technology (IT), QC laboratory systems, process automation, validation, and quality processes. Doma currently holds the position of QA computer system validation engineer III at Tolmar Inc.