    Interviews

    What is the EU AI Act — and why should healthcare businesses pay close attention to it?

August 29, 2025 · 6 Mins Read

    AI is transforming healthcare — but so are the risks. The European AI Act requires companies to ensure the safety of their solutions. We spoke with Ronnit Wilmersdörffer, Senior AI Policy Expert and Product Manager, and asked her to explain the essence of this landmark regulation.

The EU AI Act is essentially a product safety regulation specific to applications of AI. There was a lot of debate about how to regulate this technology, because you want innovation to flourish, and at the same time it is known that things can go wrong when it’s not done well. The balance the Act aims to strike comes from a risk-based approach, quite analogous to other product safety regulations.

AI applications are considered to pose different levels of inherent risk, either to health and safety or to fundamental rights. The Act sorts them into four tiers:

Unacceptable risk

This refers to AI applications that, even when functioning as intended, violate fundamental values. Social scoring is a prime example: even if it works exactly as intended, we don’t want that kind of surveillance. Applications of this kind are prohibited.

    High risk

The idea is that if these applications don’t work as intended, people will be harmed, so it’s important to make sure they are backed by a robust quality management system. This covers existing safety-regulated products, in particular things that might fall under the Medical Device Regulation (MDR), as well as applications for human resources or for patient triage. For example, during COVID there weren’t enough ventilators, so who gets a ventilator? If that decision is outsourced to AI, then that is a high-risk use case.

    Limited risk

These are systems that interact with humans but do not pose significant safety or fundamental-rights risks. They are not banned or tightly regulated like high-risk AI, but they must meet certain transparency obligations. If somebody is communicating with a chatbot or being exposed to an AI system that evaluates emotions, they should know that they’re interacting with AI.

    Minimal risk

Everything outside the high-risk and limited-risk categories is subject to minimal regulation. It is essentially left alone; developers are only expected to meet basic standards for the ethical use of AI.

Then 2022 rolls around and ChatGPT appears, and that doesn’t quite fit what we had thought about: it’s a technology that you can use for anything. Depending on how you use it, the risk level differs, and the applications are so manifold that you can’t reasonably regulate them one by one. Thus, it’s important to keep two key points in mind:

    Some requirements depend on the specific use case of AI

    Others apply to the technology itself, if it’s a general-purpose model — such as LLMs or image diffusion models.
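As a rough illustration only, here is a minimal Python sketch of how the four risk tiers described above might be recorded in an internal AI inventory. The enum labels, example use cases, and their tier assignments are assumptions made for illustration, not definitions taken from the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described above (labels are illustrative, not legal terms)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # safety-regulated products, HR, patient triage
    LIMITED = "limited"             # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"             # everything else


# Hypothetical mapping of example use cases to tiers, for an internal inventory.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI-assisted patient triage": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{use_case}: {tier.value} risk")
```

In practice the tier of a given use case has to be determined against the Act’s own criteria; the point of such a sketch is only that every system in the organization ends up with an explicit, recorded classification.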

    A new and crucial criterion for any AI tool — especially within European healthcare — is the “safety component,” regulated under the EU AI Act.
A “safety component” refers to an element of a product that is essential for ensuring the safety of that product, whose failure or malfunction endangers the health and safety of persons or property, and which is subject to third-party conformity assessment procedures under EU harmonised legislation.

If that applies to your product, you will need to change your operations to accommodate the requirements of the AI Act, starting with implementing the quality management system defined in Article 17.

The AI Act distinguishes between the roles of the provider and the deployer of an AI system. For those deploying high-risk systems, the burden is lighter: the primary obligations fall on the provider, who is also required to supply guidance on the system’s proper use.

You can be both at the same time. And if you’re the deployer of a system and you do certain things, like off-label use, you can be considered the provider, and then you’re liable for all the requirements that a provider needs to take care of.

If you use an AI system, slap your brand on it, and communicate to the outside world that it’s your AI system, you thereby also take responsibility for making it safe.

    If a company develops an AI system on your behalf, you are still considered the provider — effectively, the manufacturer — and bear the corresponding responsibilities.

    I would recommend starting with a strategic decision: do you want to become a provider of high-risk AI systems, or is that something you’d prefer to avoid altogether to sidestep the regulatory overhead? 

    If, as an organization, you truly value being in control of this, then you need to begin by establishing a quality management system in line with the AI Act.

    And regardless of whether you intend to develop high-risk AI systems or act as their provider, you must maintain a repository of all AI systems within your organization — because it’s not only about the high-risk ones. It’s also about AI systems that may carry transparency obligations. For instance, the moment you start using chatbots, you’ll likely fall into a category with legal requirements — even if the system isn’t classified as high-risk.
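As a purely illustrative aid to that last point, here is a minimal Python sketch of what one entry in such an internal repository of AI systems could record. The record structure and field names are assumptions for the sake of the example; the AI Act does not prescribe a particular format.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One entry in an internal repository of AI systems (fields are illustrative)."""
    name: str
    vendor: str
    intended_use: str
    risk_tier: str                      # e.g. "high", "limited", "minimal"
    transparency_obligations: list[str] = field(default_factory=list)
    is_provider: bool = False           # True if we put it on the market under our own brand


# Example: a chatbot that is not high-risk but still carries a transparency duty.
support_bot = AISystemRecord(
    name="patient support chatbot",
    vendor="external vendor",
    intended_use="answering routine scheduling questions",
    risk_tier="limited",
    transparency_obligations=["inform users they are interacting with an AI system"],
)

print(support_bot)
```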

    What risk do healthcare companies face if they ignore or delay adopting the Act?

Suppose a company completely ignored the AI Act and was unfortunate enough to implement a forbidden use case, such as surveillance of employees’ emotional state in the workplace, if somebody thought that was a great idea. That’s prohibited, and if you do it, the fines are up to 35 million euros or 7% of global annual turnover, whichever is higher. I wouldn’t recommend doing that.

If it’s a high-risk application and the deployer uses it in a way they weren’t supposed to, or doesn’t train their staff appropriately, then the fines are up to 15 million euros or 3% of global annual turnover, whichever is higher.

If you’re a small or medium-sized enterprise, then it’s the reverse, whichever is lower, but that is still a lot of money.

And on top of that, there’s a risk of reputational damage and liability claims if patients come to harm. But really, the AI Act is there to ensure quality and functioning systems in high-stakes environments.
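To make the arithmetic of those ceilings concrete, here is a small Python sketch based on the figures quoted above: the higher of a fixed amount and a percentage of global annual turnover, but the lower of the two for SMEs. It is an illustration of the rule as described in this interview, not legal advice, and the example turnover figures are invented.

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float,
                 is_sme: bool = False) -> float:
    """Ceiling of an administrative fine as described above: the higher of a fixed
    amount and a percentage of global annual turnover, or the lower of the two
    for small and medium-sized enterprises."""
    percentage_cap = turnover_eur * pct
    if is_sme:
        return min(fixed_cap_eur, percentage_cap)
    return max(fixed_cap_eur, percentage_cap)


# Prohibited practices: up to 35 million euros or 7% of turnover, whichever is higher.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))            # 140,000,000.0
# High-risk obligations: up to 15 million euros or 3% of turnover, whichever is higher.
print(fine_ceiling(2_000_000_000, 15_000_000, 0.03))            # 60,000,000.0
# Same prohibited-practice violation, but for an SME: whichever is lower.
print(fine_ceiling(50_000_000, 35_000_000, 0.07, is_sme=True))  # 3,500,000.0
```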

    What else should you know about complying with the AI Act?

    If your system is classified as high-risk, you will need to:

    Implement a quality management system that meets the requirements of the AI Act and other relevant regulations (such as the Medical Device Regulation, MDR).

    Adhere to harmonized standards approved by the European Commission — compliance with these standards will automatically serve as proof of conformity.

    Prepare technical documentation capturing all stages of your AI system’s development.

    In addition, you’ll need to register your AI system and apply the CE marking.

    Yes, it may sound complex — but this is precisely how safety and trust in medical technologies are built.
