You’ll gain more confidence that your recently released products will avoid recalls, fines, or worse if you perform failure mode and effects analysis (FMEA) as part of your risk management.
That’s according to Bethany Quillinan, senior quality systems consultant with Oregon Bioscience Association, who presented on the topic in our recent webinar, “Best Practices for Improving Risk Management Using FMEA.”
Over the course of the hour-long webinar, Quillinan and Jama’s senior product manager, Joel Hutchinson, covered the value of risk management and FMEA, and the importance of modernizing your risk management process to support them.
At the conclusion of the webinar’s presentation, the pair answered a range of questions from the hundreds of participants in attendance. Unfortunately, they weren’t able to get to everyone’s questions, but luckily they took some extra time afterward to answer some of those remaining, touching on everything from risk mitigation in aerospace to specific standards like ISO 14971.
We’ve compiled the questions (some of which have been slightly modified for clarity) below and encourage you to check out the full webinar and Q&A session here.
Q: Given that FMEA teams in companies can be non-permanent and have a fluid scope, which role should own and drive the activity overall?
Bethany Quillinan: I think the ownership role should align with those of the design/new product introduction process. If there is a project lead, then that person would make the most sense. There may also be a particular functional group who is given ownership of risk management, like Quality or Compliance teams. I think it really depends on the organization. What we want is alignment and integration within the design process, not a separate “silo” for risk management.
Q: Do you have any suggestions for criteria that would help when selecting a good FMEA facilitator?
Quillinan: A good facilitator will be someone who is well-versed in the FMEA process and the organization’s particular methodology, rating scales, etc. Additionally, the facilitator should be skilled in, well, facilitation, and by that I mean having meeting management skills — things like tracking time, noticing when energy is low and a break is needed, drawing out quieter members, dealing with overly dominant members, and generally equalizing the field. The facilitator should not get involved in the “content” of the FMEA, just the process itself. Ideally, there is also a content leader who can keep the team focused on the scope of the FMEA. That said, the facilitator should be aware of the scope, in case the team gets way off track and no one is calling it.
Gain Confidence in Your Risk Management Plan with Jama Connect. Learn how.
Q: Any suggestions for facilitators to keep people from taking things personally?
Quillinan: One engineer I worked with who had facilitated a lot of FMEAs shared a good technique — to ask people, “How could you make it fail?” That turned defensiveness into creativity. From a facilitation aspect, I think it helps to emphasize that the analysis is “potential,” that we’re not saying it’s going to happen, but that it’s just a possibility to consider. Depersonalizing statements can help — talk about “the design” vs. “your design,” for example. If someone is super defensive, the facilitator may need to call a break to let things cool down and talk to the person offline. Diplomacy is important!
Q: What you talked about is used in the automotive field, where it is very effective, but the aerospace industry tends to use FMEAs focused on managing the effects and quantitative failure rates (assuming constant failure rates). How do you get the benefits of automotive FMEA and, at the same time, satisfy aerospace FMEA requirements?
Quillinan: Having quantitative failure rate data to inform the discussion of the probability of occurrence is a big benefit. Failure mode, effects, and criticality analysis (FMECA), where the “C” stands for criticality analysis, brings in more of the quantitative factor, and I see that terminology used more in aerospace. Satisfying specific aerospace requirements is somewhat out of the scope of this presentation. In the context of the AS9100 quality management system standard, the requirement is to perform risk management; FMEA is not prescribed but is a possible tool. Bottom line, I see FMEA as a prioritization tool rather than a reliability prediction. (In the early days, the military hoped to use FMECA to calculate an actual reliability metric, but it didn’t work out that way.)
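For readers curious what the quantitative side can look like, here is a minimal sketch of a modal criticality number in the style of MIL-STD-1629A-type FMECA (with the constant-failure-rate assumption mentioned in the question). The failure rate, failure mode ratio, and operating time below are hypothetical values for illustration, not figures from the webinar.

```python
# Rough sketch of a modal criticality number, Cm = beta * alpha * lambda_p * t,
# in the style of MIL-STD-1629A-type FMECA (constant failure rates assumed).
# All numbers below are hypothetical illustrations, not data from the webinar.

def modal_criticality(beta: float, alpha: float, lambda_p: float, t_hours: float) -> float:
    """Expected number of failures of this mode producing the stated effect.

    beta      -- conditional probability the failure mode produces the effect (0-1)
    alpha     -- fraction of the part's failures attributable to this mode (0-1)
    lambda_p  -- part failure rate in failures per hour (constant-rate assumption)
    t_hours   -- operating time over which the mode is evaluated
    """
    return beta * alpha * lambda_p * t_hours

# Example: a (hypothetical) valve with a 2e-6 failures/hour rate, where 30% of
# failures are "stuck closed" and that mode causes loss of function 80% of the
# time, evaluated over a 5,000-hour mission.
cm = modal_criticality(beta=0.8, alpha=0.3, lambda_p=2e-6, t_hours=5_000)
print(f"{cm:.2e}")  # ~2.40e-03 expected occurrences over the mission
```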
Learn why Frost & Sullivan likes Jama Connect as a modern solution for risk management.
Read the brief.
Q: What happens when multiple users are creating individual risk analyses? How do they collaborate if they are analyzing similar things?
Joel Hutchinson: As Bethany mentioned during her presentation, we understand that there’s going to be overlap in various FMEAs. An example of this is subsystems, which may have additional functionality that is only present at the system level. Risk management in Jama Connect supports the cross-functional team working together to perform a risk analysis, and it also allows view-only roles to be added for context. One way of addressing this would be for the moderators of the two overlapping risk analyses to add each other as view-only roles to their respective analyses. This way, each cross-functional team is aware of what the other team is doing and how they’ve scoped their risk analysis.
Q: Do you know where, when, and how intended use FMEA (uFMEA for medical devices, per IEC 62366-1:2015) and software FMEA (life cycle requirements for medical device software per IEC 62304:2006+A1:2016) integrate into the requirements management process and product development timeline?
Quillinan: Generally speaking, risk management for usability, software, or any other aspect of a design should be integrated with the design process as soon as these aspects enter the design conversation. For example, early on during the initial “concept” phase, we need to be thinking about usability risks to inform the design concept and assist in evaluating different design routes to take. I also think usability/human factors could potentially be considered at any level of the product, so it could follow the same general timeline I showed in the presentation.
Likewise, for software, at the concept phase, the conversation will likely include software needs for satisfying customer expectations. From a systems standpoint, I feel the sooner the better. (I often use the initial rollout of the Healthcare.gov website for the Affordable Care Act (ACA) marketplace as an example of a “siloed” approach to design. Each module was developed and tested in isolation, and the “validation” of interfaces between modules happened when it went live… and crashed.)
We have to remember that the ultimate focus is at the system level — the end-user interaction with the product. In a medical device setting, most consultants are going to advise you to build in human factors considerations throughout the design process rather than waiting until the final human factors validation test on the finished design, for all the same reasons I described in the presentation.
The FDA guidance document “Applying Human Factors and Usability Engineering to Medical Devices” (Feb 3, 2016) is a helpful document and can complement ISO 14971.
Find out how to better manage medical device development in accordance with ISO 14971.
Read our guide.
Q: Risk Priority Numbers (RPNs) always seem to be subjective from one party to another, specifically the occurrence and detection ratings. Occurrence is difficult to gauge when the failure mode and risks are new or when little-to-no history is available. For detection, we’ve had difficulty assessing the controls in place and how to rate them in terms of efficiency.
Quillinan: One way to help make the RPNs more objective is to establish and use consistent rating scales with descriptions that are relevant to your organization’s products and processes.
When there is no history and risks are new, the typical practice is to be conservative and give them high ratings. I often find that people have difficulty distinguishing prevention controls from detection controls.
Preventive controls typically act on the inputs to a process or are the controls during the process — think of mistake-proofing, for example, not being able to choose an obsolete revision of a component for a bill of materials (BOM), or having design guidelines to follow. Detection controls are after the fact and are inspections of the output of a process. In design, a typical detection control is a second person (or more) performing a design review.
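To make the rating-scale point concrete, here is a minimal sketch of how consistent, organization-specific scales feed an RPN calculation. The scale anchors and the review threshold are hypothetical examples, not values prescribed in the webinar or by any FMEA standard.

```python
# Minimal sketch of an RPN calculation using organization-specific rating scales.
# The scale anchors and the review threshold below are hypothetical examples,
# not values from the webinar or from any FMEA standard.

SEVERITY = {1: "negligible effect", 5: "degraded performance", 10: "safety hazard"}
OCCURRENCE = {1: "remote (established history)", 5: "occasional", 10: "frequent / no history"}
DETECTION = {1: "almost certain to detect", 5: "moderate chance of detection", 10: "no detection control"}

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the product of the three 1-10 ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must fall on the agreed 1-10 scale")
    return severity * occurrence * detection

# Example: a new failure mode with no field history is rated conservatively.
score = rpn(severity=7, occurrence=8, detection=6)
print(score)          # 336
print(score >= 100)   # True -> flag for mitigation per a (hypothetical) threshold
```

Publishing the scale descriptions alongside the FMEA, as suggested above, is what keeps two different teams from assigning very different numbers to the same situation.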
Q: What is the relationship between design failure mode and effect analysis (DFMEA) and process failure mode and effect analysis (PFMEA), in terms of the failure modes and causes?
Quillinan: In DFMEA, we are looking at how the product could fail, and causes are from the design process itself; the assumption is that the process is being run correctly. In PFMEA, we’re looking at how the process could fail, assuming the product is designed correctly. Of course, this depends on FMEA scope. If we’re looking at design for manufacturability, then we’re looking at how the design process could fail in terms of designing the product in a way that it can be easily and efficiently built.
Hear the full presentation, as well as questions and answers to many other FMEA and risk management questions, in our webinar, “Best Practices for Improving Risk Management Using FMEA.”