How The EU AI Act Impacts Medical Device Manufacturers
The EU AI Act (Regulation (EU) 2024/1689) is landmark legislation that will shape the future of AI in Europe and is expected to serve as a baseline for similar legislation in other countries and regions. It will have a significant impact on the AI industry and on society: it sets new standards and rules for the development and use of AI systems and creates new opportunities and challenges for innovation and competitiveness, attempting to balance the benefits and risks of this transformative technology. The AI Act also reaches into other industry sectors, including medical devices: manufacturers of devices that include AI technology will have to comply with it.
The Act classifies all AI systems into four risk categories: unacceptable, high, limited, and minimal. A short illustrative sketch follows the list below.
- AI systems whose risk is deemed unacceptable are banned from the market. Examples include untargeted scraping of facial images from the internet or CCTV footage; emotion recognition in the workplace and educational institutions; social scoring; and biometric categorization to infer sensitive data such as sexual orientation or religious beliefs.
- High-risk AI systems have a significant impact on people’s lives or rights, for example in healthcare, education, law enforcement, or public services. They must comply with strict requirements on data quality, transparency, human oversight, and accuracy, and they must undergo a conformity assessment by a notified body specifically designated for AI systems before they can be commercially available.
- Limited-risk AI systems, such as chatbots, emotion recognition, or biometric categorization tools, can pose some risk to users. They must provide clear information to users and allow them to opt out.
- Minimal-risk AI systems, such as spam filters or video games, are not expected to pose any risk. Although they are largely exempt from the regulation, they must still follow the general principles of safety and fairness.
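To make the taxonomy concrete during an initial portfolio triage, the four tiers can be encoded directly. The sketch below is purely illustrative: the system names and their tier assignments are hypothetical restatements of the examples above, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "conformity assessment by a designated notified body required"
    LIMITED = "transparency obligations toward users"
    MINIMAL = "largely exempt; general safety and fairness principles apply"

# Hypothetical portfolio triage; the mapping restates the article's
# examples and is NOT a legal classification.
portfolio = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI-based diagnostic device software": RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in portfolio.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```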
RELATED: Jama Connect® for Medical Device & Life Sciences Development Datasheet
AI that is part of a medical device will fall into the high-risk category and will require oversight by a notified body, as it can have a significant impact on people’s lives.
The EU AI Act was published July 12, 2024, and will apply from August 2, 2026. However, some elements of the Act become mandatory earlier, by August 2, 2025.
Providers of high-risk AI systems must implement a quality management system (QMS) that covers the following during the lifetime of the AI system:
- Risk management to identify and mitigate the AI system’s potential risks to health, safety, and fundamental rights
- Data governance of training, validation, and testing data sets
- Development and maintenance of the technical documentation: for products that already require technical documentation under other legislation (e.g., medical devices), the AI-related technical documentation must be included in the existing technical documentation.
- Data logging to ensure the AI system automatically records events throughout its lifetime (see the sketch after this list)
- Labeling that provides information on the functioning of the AI system and on its operation and maintenance
- A design that ensures appropriate levels of accuracy, robustness, safety, and cybersecurity
- Post-market monitoring, including the collection and reporting of incidents and malfunctions to the relevant authorities
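The Act requires automatic event logging but does not prescribe a log schema. The following is a minimal sketch of what a traceable log record could capture; the field names, model identifier, and file format are assumptions, not requirements from the regulation.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InferenceLogRecord:
    """One hypothetical audit-log entry for a high-risk AI system."""
    timestamp: str        # when the inference happened (UTC, ISO 8601)
    model_version: str    # ties the output to a specific trained model
    input_reference: str  # pointer to the input data, not the raw data
    output_summary: str   # the system's result or recommendation
    human_override: bool  # whether a human reviewer changed the outcome

record = InferenceLogRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="cardio-classifier-2.3.1",      # hypothetical
    input_reference="study/2024/11/scan-000123",  # hypothetical
    output_summary="arrhythmia risk: elevated",
    human_override=False,
)

# Append-only JSON Lines is one simple choice for an auditable trail.
with open("ai_event_log.jsonl", "a", encoding="utf-8") as log_file:
    log_file.write(json.dumps(asdict(record)) + "\n")
```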
Implementing the QMS, as well as identifying the economic operators, will be mandatory by August 2, 2025.
The EU AI Act identifies a number of economic operators in the high-risk AI system life cycle, each with specific obligations.
- In addition to the obligations related to the QMS, providers of AI systems that are not based in the EU must appoint, by written mandate, an authorized representative (AR) who is based in the EU.
- The AR has to verify the declaration of conformity (DoC) and the technical documentation, confirm that the appropriate conformity assessment procedure has been completed, and act as the liaison with the competent authority.
- Importers of AI systems have to ensure the AI system conforms with the AI regulation and provide their name and address with the AI system. Importers have to cooperate with the competent authorities on any actions those authorities initiate to reduce and mitigate risks posed by the AI system.
- Distributors have to verify the AI system is accompanied by the appropriate instructions for use and bears the CE marking.
- Users (deployers) of AI systems have to use them in compliance with the instructions for use. They have to report any serious incident, unacceptable risk, or malfunction to the national supervisory authorities and to the AI system provider, importer, or distributor.
Most medical device manufacturers will already have their economic operators (AR, importer, distributor) identified and in place, as this is also a requirement under the medical device regulations. However, it will be important that contracts with those economic operators are updated to include the obligations related to the AI Act.
The obligation to report complaints, as part of post-market surveillance, becomes mandatory by August 2, 2025.
The EU AI Act provides for the creation of a European database in which all providers, ARs, and AI systems will be registered. It establishes a governance structure for the oversight and enforcement of the regulation: it creates a European Artificial Intelligence Board (EAIB), composed of representatives from national authorities and the European Commission, to provide guidance, advice, and recommendations on AI matters, and it designates national competent authorities and notified bodies to monitor, audit, and certify AI systems and their providers.
To support innovation, member states will establish AI regulatory sandboxes where providers of AI systems can develop, test, and validate innovative AI systems in a controlled environment. Spain, for instance, has launched one of the first pilot sandboxes under its State Agency for the Supervision of Artificial Intelligence (AESIA). This pilot sandbox aims to align with Spain’s National AI Strategy and is anticipated to serve as a model for other EU member states. Other member states, such as Germany, are working on regulatory frameworks that would facilitate similar testing environments, and some countries are implementing sandboxes specifically for sectors like mobility, public procurement, and healthcare. The EU Commission also provides guidance and support to standardize these efforts across the region, aiming to accelerate safe AI development through regulated testing environments.
Full compliance with the EU AI Act will be mandatory by August 2, 2026, with AI systems meeting the requirements, the sandboxes established, and the notified bodies designated and fully operational.
RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Medical Device & Life Sciences
Consequences Of Non-Compliance
The EU AI Act introduces a system of sanctions and remedies for non-compliance or infringement of the regulation. It empowers national authorities to impose administrative fines of up to €35 million or 7% of annual worldwide turnover (whichever is higher) for prohibited practices, with lower tiers for other breaches, depending on the severity and duration of the infringement and applicable to providers, users, importers, and distributors. It also allows national authorities to order the withdrawal, recall, or modification of non-compliant AI systems, as well as to suspend or prohibit their use or supply. In addition, it grants the right to compensation to individuals or organizations that suffer harm or damage as a result of non-compliant AI systems.
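Because the cap is the higher of a fixed amount and a percentage of worldwide annual turnover, the exposure scales with company size. A small arithmetic sketch, assuming the 7%/€35M tier for prohibited practices and the 3%/€15M tier for most other obligations (the turnover figure is invented):

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Fine ceiling: the higher of a fixed amount and a share of turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Hypothetical manufacturer with EUR 800 million annual worldwide turnover.
turnover = 800_000_000

# Prohibited practices tier: up to EUR 35M or 7%, whichever is higher.
print(f"Prohibited practices cap: EUR {fine_cap(turnover, 35_000_000, 0.07):,.0f}")
# -> EUR 56,000,000

# Most other obligations: up to EUR 15M or 3%.
print(f"Other obligations cap:    EUR {fine_cap(turnover, 15_000_000, 0.03):,.0f}")
# -> EUR 24,000,000
```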
Pitfalls To Avoid
From a high-level perspective, the requirements seem similar to the general requirements for medical devices, like QMS, economic operators, and notified bodies.
However, there are some things to watch out for. For example, the notified body designated for medical devices may not be designated for the AI Act, in which case device manufacturers will have to deal with two different notified bodies and two sets of inspections for the same product, and both notified body numbers will have to be referenced on the label and in the declaration of conformity.
Medical device manufacturers that use AI as part of their device or device software should be assessing the AI Act requirements and ensuring they are built into their QMS, as the due date is less than a year out. Having a QMS compliant with ISO 13485 that satisfies the medical device regulations is not sufficient. For high-risk AI systems, the QMS must also cover cybersecurity requirements to protect against tampering, data breaches, and other security risks.
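A practical starting point is a gap analysis that walks the AI Act QMS topics listed earlier against the existing ISO 13485 QMS. The sketch below is a hypothetical tracking structure; the status values are invented placeholders, not an assessment of any real QMS.

```python
# Topics restate the QMS requirements listed earlier in this article;
# statuses are invented placeholders for a hypothetical gap analysis.
gap_analysis = {
    "risk management (health, safety, fundamental rights)": "partially covered",
    "data governance (training/validation/test data sets)": "gap",
    "AI technical documentation in existing tech file":     "gap",
    "automatic data logging":                               "gap",
    "labeling, operation, and maintenance information":     "partially covered",
    "accuracy, robustness, and cybersecurity by design":    "gap",
    "post-market monitoring and incident reporting":        "partially covered",
}

for topic, status in gap_analysis.items():
    flag = "!!" if status == "gap" else "--"
    print(f"[{flag}] {topic}: {status}")
```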
Conclusion
Although the requirements in the AI Act look very similar to what is already required for medical devices, such as having a QMS, control over economic operators, post-market surveillance, and conformity assessment with a notified body, compliance with the EU medical device regulations is not sufficient to meet the AI Act requirements.
Manufacturers should be aware of this and start their compliance journey sooner rather than later to meet the August 2025 and August 2026 due dates.