EU Artificial Intelligence Act 2024/1689 and the Medical Device Industry

Aligning AI Innovation with EU Regulation

Artificial intelligence is revolutionising healthcare, from image analysis to design. For manufacturers competing in the global market, the main challenges for any device are building the technology itself and navigating the approvals process. In the European Union, that has meant ensuring compliance with the Medical Device Regulation (MDR). The EU Artificial Intelligence Act 2024/1689 adds a further layer of regulation. While the MDR addresses clinical safety and effectiveness, the purpose of the AI Act is to enable the development of trustworthy AI while protecting the health, safety, and fundamental rights of people across the European Union.

The AI Act is the first comprehensive legislative framework for regulating artificial intelligence within the European Union. The Act was endorsed by all 27 EU member states in February 2024 and came into force on 1 August 2024. The ban on unacceptable-risk practices took effect in February 2025, and most remaining provisions, including the requirements for high risk systems and the transparency obligations for limited risk systems, apply from August 2026.

Risk Categories

The Artificial Intelligence Act divides AI systems into four risk categories: minimal risk, limited risk, high risk, and unacceptable risk. Most, but not all, medical devices that incorporate AI are classified as high risk because of their direct impact on health and safety.

Minimal risk applications pose a negligible risk to health, safety, and fundamental rights. These include things like spam filters, and are not regulated under the AI Act.

The limited risk category includes applications that pose some risk, but that risk is covered by transparency obligations. Examples of limited risk applications include AI-generated content and chatbots that interact with users.

The high risk category includes devices that:

  • Serve as a safety component of a product covered by EU harmonisation legislation, including the MDR, that requires third-party conformity assessment. This includes all medical AI products classified as MDR risk class IIa or higher; AI-assisted medical image diagnosis software, for example, would fall under this category.

  • Are listed in Annex III of the AI Act and pose significant risks to health, safety, or fundamental rights. Examples include emotion recognition systems and emergency patient triage systems. The AI Act provides some exemptions, which are discussed below.

High risk devices are subject to strict requirements covering risk management systems, third-party conformity assessments, post-market monitoring, and transparency obligations.

Unacceptable-risk, or prohibited, applications present a clear threat to health, safety, and society, and are therefore banned. These include manipulative systems that influence behavior without consent, social scoring systems, real-time biometric identification systems in public spaces, and others, described in more detail below.
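
As a rough illustration of how these tiers fit together, the sketch below maps a device's characteristics onto the four categories. It is illustrative only: the boolean inputs (such as safety_component_under_mdr and listed_in_annex_iii) are hypothetical stand-ins for assessments that must be made case by case by legal and regulatory specialists.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify_ai_system(uses_prohibited_practice: bool,
                       safety_component_under_mdr: bool,
                       mdr_class_iia_or_higher: bool,
                       listed_in_annex_iii: bool,
                       interacts_or_generates_content: bool) -> RiskCategory:
    """Map a device's characteristics to an AI Act risk tier (sketch only)."""
    if uses_prohibited_practice:
        return RiskCategory.UNACCEPTABLE
    # A safety component of an MDR-covered product requiring third-party
    # conformity assessment (class IIa or higher), or an Annex III system,
    # is high risk.
    if (safety_component_under_mdr and mdr_class_iia_or_higher) or listed_in_annex_iii:
        return RiskCategory.HIGH
    # Systems that interact with people or generate content carry
    # transparency obligations and sit in the limited-risk tier.
    if interacts_or_generates_content:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

# Example: AI-assisted image diagnosis software, MDR class IIa
print(classify_ai_system(False, True, True, False, True))  # RiskCategory.HIGH
```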

General Purpose AI (GPAI) Models

There are additional classification types for General Purpose AI models. Article 3(63) of the AI Act defines General Purpose AI Models as:

“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”

Under the AI Act, GPAI models can be used as components of AI systems within any risk class, or on their own as standalone high risk AI systems. The classification of a GPAI model depends on whether it presents systemic risks: risks specific to the capabilities of GPAI models that have a significant impact on the EU market because of their reach, or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, and that can be propagated at scale across the value chain.

GPAI models judged to have systemic risks are subject to additional compliance requirements related to model evaluation, risk management, and cybersecurity.

The AI Act also treats GPAI models and AI systems differently when it comes to supervision. AI systems are to be supervised by national market surveillance authorities, whereas GPAI models and the systems built on them are to be overseen by a new EU-level AI Office.

Transparency Obligations

AI models and systems that interact with people, or which generate content such as text, images, or videos, are subject to additional transparency obligations, regardless of their risk category.

First, users must be informed that they are interacting with an artificial intelligence model or system.

Second, output from the AI model or system must be marked as artificially generated or manipulated in a machine-readable format. Applications subject to this provision include virtual health assistants and chatbots.
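
As an illustration of what machine-readable marking might look like, the sketch below wraps a generated response with a human-readable disclosure and a provenance field. The schema (ai_generated, generator, generated_at) is hypothetical: the Act requires marking, but leaves the technical format to standards and guidance.

```python
import json
from datetime import datetime, timezone

def mark_ai_output(text: str, model_name: str) -> dict:
    """Attach a human-readable disclosure and machine-readable provenance
    metadata to AI-generated text (hypothetical schema, sketch only)."""
    return {
        "content": text,
        "disclosure": "This response was generated by an AI system.",
        "provenance": {
            "ai_generated": True,                     # machine-readable flag
            "generator": model_name,                  # which system produced it
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example: a virtual health assistant reply
reply = mark_ai_output("Your appointment is confirmed for Tuesday.", "triage-assistant-v2")
print(json.dumps(reply, indent=2))
```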

High risk AI systems must also provide information that enables users to understand the system's functionality and limitations. For medical devices, this includes clearly labelling AI-driven features, documenting decision-making processes, and training users to help mitigate the risks associated with AI usage.

Prohibited Practices

To protect the health, safety, and fundamental rights of individuals, the AI Act specifically prohibits AI models and systems that engage in a set of unacceptable practices. These include but are not limited to:

  • Biometric categorisation based on sensitive characteristics

  • Real time remote biometric identification in public places

  • Social scoring

  • Purposeful manipulation

  • Purposeful deception

  • Risk assessments predicting criminal offences based solely on profiling

  • Untargeted scraping of facial images to build facial recognition databases

  • Emotion inference in workplaces and educational settings

  • Subliminal techniques

  • Techniques that exploit vulnerabilities

The AI Act sets out steep fines for violations: up to 35 million euros or seven percent of a company's annual turnover, whichever is greater.
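
For a sense of scale, that ceiling works out as a simple maximum, sketched below with an illustrative turnover figure.

```python
def maximum_fine(annual_turnover_eur: float) -> float:
    """Ceiling of the steepest AI Act fine: EUR 35 million or 7% of annual
    turnover, whichever is greater."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 1 billion in annual turnover faces a ceiling of
# EUR 70 million, since 7% of turnover exceeds the EUR 35 million floor.
print(maximum_fine(1_000_000_000))  # 70000000.0
```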

Exemptions

Certain types of AI systems and models are exempt from the requirements of the AI Act. These include:

- AI systems that are developed and used solely for personal, non-professional activities, or for scientific research.

- AI systems released under free and open source licenses, provided they do not involve prohibited practices, are not classified as high risk, and are not subject to the transparency obligations that apply to systems that interact directly with human users. If an open source system is monetised, however, it is subject to the same rules as a closed source system.

However, an AI system that would otherwise fall into the high risk category under Annex III can be exempt from the high risk requirements if its purpose is limited to:

- Performing a narrow procedural task

- Performing a preparatory task relevant to certain use cases

- Improving the outcome of a previously performed human activity

- Detecting decision-making patterns or deviations from prior decision-making patterns, without being intended to replace or influence a previously completed human assessment without proper human review.

This would prevent most AI systems that perform administrative tasks such as medical coding, structuring, or structured reporting from being classified as high risk.
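
A minimal sketch of this derogation as a checklist follows; the function name and boolean inputs are hypothetical stand-ins for assessments that would need to be documented in practice.

```python
def exempt_from_high_risk(narrow_procedural_task: bool,
                          preparatory_task: bool,
                          improves_prior_human_work: bool,
                          detects_patterns_with_human_review: bool) -> bool:
    """Return True if an Annex III system may qualify for the derogation
    from high-risk classification (sketch only).

    Each flag stands in for a documented assessment; the derogation does
    not apply to systems that profile natural persons.
    """
    return any([narrow_procedural_task,
                preparatory_task,
                improves_prior_human_work,
                detects_patterns_with_human_review])

# Example: a medical coding assistant performing a narrow procedural task
print(exempt_from_high_risk(True, False, False, False))  # True
```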

The AI Act and the MDR: Discrepancies and Compliance

The AI Act and the MDR differ in a number of key areas, which can present challenges for medical device manufacturers. Careful consideration is needed to comply with both sets of regulations.

Risk Classification

Risk classification under the AI Act and the MDR differs: the MDR assigns devices to classes I, IIa, IIb, and III based on clinical risk, while the AI Act uses the minimal, limited, high, and unacceptable risk tiers described above. Manufacturers will have to ensure that their products meet the requirements of both classification systems.

The Treatment of Software Updates

The MDR requires recertification of a device for significant software modifications, while the AI Act allows for continuous learning AI systems to evolve post-market. It will be vital to establish clear guidelines determining when AI model modifications will require regulatory re-evaluation.
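
One way to make such guidelines operational is an internal gate that escalates an update for regulatory review whenever it changes the intended purpose or architecture, or moves performance on a locked test set outside pre-specified bounds. The sketch below assumes hypothetical field names and thresholds; it is not a rule drawn from either regulation.

```python
from dataclasses import dataclass

@dataclass
class ModelUpdate:
    changes_intended_purpose: bool     # e.g., a new clinical indication
    changes_architecture: bool         # e.g., a new network topology
    sensitivity: float                 # performance on the locked test set
    specificity: float

def needs_reevaluation(update: ModelUpdate,
                       min_sensitivity: float = 0.90,
                       min_specificity: float = 0.85) -> bool:
    """Flag a continuous-learning model update for regulatory re-evaluation
    (hypothetical internal gate, illustrative thresholds)."""
    return (
        update.changes_intended_purpose
        or update.changes_architecture
        or update.sensitivity < min_sensitivity
        or update.specificity < min_specificity
    )

# A retraining run that stays within the pre-specified envelope
print(needs_reevaluation(ModelUpdate(False, False, 0.93, 0.88)))  # False
```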

Bias Mitigation

Under the AI Act, manufacturers are required to identify and correct algorithmic biases to ensure that AI models do not disadvantage certain patient groups. The MDR, which addresses clinical safety and effectiveness, has no such provision.
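
A minimal sketch of such a bias audit, assuming a held-out evaluation set with a group label on each record and an illustrative disparity threshold, could compare sensitivity across patient groups and flag outliers for investigation.

```python
import statistics

def sensitivity_by_group(records: list[dict]) -> dict[str, float]:
    """Per-group sensitivity (true positive rate) from prediction records.
    Each record is a hypothetical dict: {"group": str, "label": int, "pred": int}."""
    groups: dict[str, list[dict]] = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    rates = {}
    for name, rows in groups.items():
        positives = [r for r in rows if r["label"] == 1]
        true_positives = sum(1 for r in positives if r["pred"] == 1)
        rates[name] = true_positives / len(positives) if positives else float("nan")
    return rates

def flag_disparities(rates: dict[str, float], max_gap: float = 0.05) -> list[str]:
    """Flag groups whose sensitivity falls more than max_gap below the mean.
    The 5-point threshold is illustrative, not a figure from the AI Act."""
    mean = statistics.mean(rates.values())
    return [group for group, rate in rates.items() if mean - rate > max_gap]

# Example: group B's sensitivity (0.5) trails the mean (0.75) by more than 0.05
records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 0},
]
print(flag_disparities(sensitivity_by_group(records)))  # ['B']
```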

Transparency Obligations

There is a similar discrepancy between the two bodies of regulation when it comes to transparency. The AI Act requires disclosure of AI decision-making logic, while the primary concern of the MDR is clinical validation. An AI-driven device could therefore meet the requirements of the MDR but not those of the AI Act.

Human Oversight

The AI Act prioritizes human oversight of AI activities. The MDR has no such provision, focusing instead on clinical safety and effectiveness. It will be important for devices to meet both types of requirements.

Conformity Assessment

Both bodies of legislation require third-party conformity assessments; however, the criteria for these assessments, and the process by which they are undertaken, differ. To avoid duplicated effort and increased administrative burden, companies should seek out notified bodies with the expertise to assess devices against both sets of regulations.

Data Management Requirements

The MDR mandates robust data management for clinical evaluations. The AI Act adds further data management requirements covering algorithmic transparency, training data quality, bias mitigation, and more. Manufacturers will have to satisfy both sets of regulations.

Expertise Gaps

In order to navigate the requirements of both regulatory frameworks, manufacturers will have to allocate resources for hiring compliance specialists and invest in updated technologies for monitoring and reporting.

Cost Implications

Meeting the requirements of not one but two regulatory frameworks may increase development and operating costs, potentially raising the price and reducing the accessibility of AI-driven medical devices.