Medicine is one of the most promising fields where AI is already delivering remarkable results — in radiology, for instance, or during patient intake. At the same time, medicine is a quantitative science built on protocols and strict, evidence-backed rules. So the temptation to train AI to think like a doctor, and eventually to replace one, is understandable. But how feasible is that?
On April 22, OpenAI rolled out a version of ChatGPT designed specifically for clinicians, saying it aims to support health professionals in tasks such as documentation and medical research. The developers are confident their product will help physicians focus on delivering high-quality patient care. For now, the launch is limited to the US market — European regulation is considerably stricter.
Will ChatGPT make its way into the EU — and is there appetite for it there? We asked experts to weigh in on how ready the medical community actually is for this kind of support.
Here’s the take from Uladzimir Svirkoū, MD, CEO of OKDOC eHealth Solutions GmbH — a practitioner who works extensively with LLMs:
— Clinics are scared to death of GPT. The moment you bring it up in the context of accessing real clinical field data, people practically start throwing holy water at you and performing an exorcism.
In Europe, regulations like GDPR, the AI Act, and MDR have spooked clinics and legal departments so thoroughly that even when a vendor says “all data is stored and processed on servers within the EU,” that’s no longer enough for a clinic to hand over access to real patient data. That means GPT would have to change substantially and adapt significantly to break into the clinical solutions market — not just operate as a general-purpose consumer product.
I don’t think the OpenAI team is genuinely interested in wading into these regulations and clinical trials — it’s an enormously complex process that would drag on for years. It’s not their business model, not their philosophy. It’s a niche market. OpenAI is more likely to expand in this space by snapping up projects that are already taking shape in AI and medicine than by building out a competitive B2B or enterprise product from scratch.
Europe’s draconian regulations are an iron fence for American cowboys. Sure, they can get off their horses, put on a clean shirt, knot a tie, and tick every box — but even then, customization and source data quality are the best friends of the small but bold.
Having a powerful LLM simply isn't enough. You need a high-quality dataset, carefully prepared and scrubbed of noise and junk. You also need infrastructure for embedding the technology into existing EHR systems and clinical workflows; proven clinical and technical safety; logging; explainability and transparency of the model's reasoning path (an area where GPT falls well short); data protection; and the ability to isolate and delete a patient's data on request, even if it has already been encoded into vectors and the model has trained on it (so-called unlearning).
We’d be happy to feature more perspectives on this topic. Write to our editorial team at d.korsak@andersenlab.com — we’d love to discuss it.