Language models can now answer health-related questions remarkably well, a topic the media have covered extensively. These are truly remarkable advances. However, the process of bringing them into real clinical practice rarely draws the same attention: it is accompanied by tedious paperwork and thorny safety issues.
In fact, the first LLM-enabled clinical decision support system (CDSS) received CE certification only a year and a half ago.
We spoke with Prof. Heinz Wiendl, Director of the Department of Neurology at the University of Freiburg, President of the International Society of Neuroimmunology, and co-founder and Chief Medical Advisor of Prof. Valmed — the first AI-assisted, LLM-enabled tool for clinical decision support to receive CE certification as a medical device (class IIb).

2Digital: Two years ago, it felt like prime time for healthcare startups to seize market opportunities; the AI boom in particular enabled smaller teams to tackle real healthcare pain points. How would you assess the current market climate for startups in January 2026?
Prof. Wiendl: It was prime time, but I don’t think fewer startups are ramping up now, because the race is not yet over. I would say we are still in a climate of high expectations and high output. Two years ago, everything was brand new and everybody thought they could be at the forefront. On one side, things have become more realistic: a lot of startups realize that it simply takes time to do this properly, and it takes time to get to the market. On the other hand, the time needed to build an AI-assisted tool has become even shorter. Plus, there is enormous need and pressure in many parts of the health system. I would say an interesting recalibration has taken place since the beginning.
2Digital: A year ago you made headlines when Prof. Valmed became the first clinical decision support system to receive CE certification as a medical device. From today’s perspective, what lasting competitive advantage has it provided?
Prof. Wiendl: We didn’t start from the standpoint that we could compete with the latest AI generation. What we thought is that we had to build something capable of not only solving a huge and growing problem, but of being safe, complying with liability requirements, rules, and regulations, and integrating into existing workflows. This is a very different approach.

The other aspect was that our founding team consisted of a medical doctor (me) and a lawyer. So we came out of the problem itself and knew about the liabilities and legal requirements. This is not the typical tech startup, where people fresh out of university know what they can do with technology. From the beginning we were thinking about the MDR and about solving a problem that should transform how the health system works. This is why we teamed up.
My partner, our CEO, is a lawyer with considerable industry expertise in medical legal matters. The idea was that we had to build trust first, because even if there is a technical possibility to do something, that doesn’t automatically mean you’re allowed to use it. We still have a mixture, and therefore a uniqueness, that very few other companies have, including agility and the USP of still being the only CE-certified product of its kind on the market.
I don’t doubt others’ technical or intellectual capabilities, or their financial resources, but the difference we made is that we really went the hard way. We teamed up with a notified body very early and essentially created a pathway that did not exist at the time. The regulations for medical devices are more tailored to lasers and defibrillators, those kinds of devices. We are basically in the same box, but all the templates and the thinking were not compatible. We built a very strong dialogue and created a new way. So far, the CE certification is still our unique selling proposition.
The other USP is that we’re clearly positioned as a liable clinical decision support system. We aimed for all fields of medicine and succeeded, which is an enormous achievement and opportunity (but also a responsibility). Models often focus on one particular area, such as neurology, internal medicine, or dermatology. Ours is broad, and we went directly for Class IIb so that we are able to modify the system. In a fast-moving field we have to be in a position to change things and to update our database, and that is exactly what Class IIb allows.
The last thing that differentiates us from many other players in the field is our concept of usage. While we have a B2C model for individual healthcare professionals, the more interesting element is that we aim to operate as an integrated decision-support layer in various workflows, which is possible via the API we offer. This is a one-stop shop for many institutions and customers, particularly hospitals that don’t want to change their systems or jump between modalities. In other words, we team up with infrastructures that already exist.
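(Editor’s note: Prof. Valmed’s actual API is not described in this interview. Purely to illustrate the integration model Prof. Wiendl outlines, here is a minimal Python sketch of how an existing hospital workflow might call an external decision-support layer; the endpoint URL, payload fields, and authentication scheme are hypothetical assumptions, not the product’s real interface.)

```python
# Hypothetical sketch only: the endpoint, payload fields, and auth header are
# illustrative assumptions, not Prof. Valmed's actual (non-public) API.
import requests

CDS_ENDPOINT = "https://cds.example-integration.eu/v1/recommendations"  # assumed URL


def ask_cds(question: str, patient_context: dict, api_key: str) -> dict:
    """Send a clinical question plus structured context from the existing EHR
    workflow to an external decision-support layer and return its answer."""
    payload = {
        "question": question,        # free-text clinical question
        "context": patient_context,  # structured data already in the EHR
    }
    resp = requests.post(
        CDS_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a recommendation plus the sources it cites


# Example call from inside an existing workflow (all values are placeholders):
# answer = ask_cds("First-line options?", {"age": 54, "diagnosis": "RRMS"}, api_key="...")
```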
2Digital: Coming back to what you said at the very beginning: you are not the typical tech team that invites healthcare professionals in as consultants. You are also the customer who will use this product yourself.
Prof. Wiendl: That’s totally right.
That comes with one additional aspect that I would not underestimate: there is a network of healthcare professionals we were able to talk to, understand, and draw on. You really have skin in the game.
The discussions about the clinical evaluation plans were much easier. I myself have a lot of experience with clinical trials and have often served on steering committees. I would not have been able to build the tool from a technical perspective, but that was done by partners.
The critical point was to convince the notified body that this is a safe system, and to find arguments for why the system would be explainable as a medical device. We have a good narrative and know where the burning pain points are.

2Digital: But it took a lot of time. Doesn’t that put you in a situation where you lose momentum, spending too much time and money on the certification process?
Prof. Wiendl: I think we were extremely fast. We started our company in June 2023 and got the CE mark about a year and a half ago, which, in terms of CE certifications, is extremely fast. So I would say we found a good trade-off between losing time and having a unique proposition that none of the others yet has. I can’t really tell whether the math will be in our favor in the end. I can only tell you that, particularly in the EU and increasingly also outside it, a lot of customers, particularly hospitals that carry liability, are looking for exactly such a solution. They are not just looking for any solution.
2Digital: What about post-marketing surveillance? We have so little information on how those systems survive after they enter the market.
Prof. Wiendl: We’ve received our first re-certification, so that’s another milestone we can announce. It means we have provided enough data, post-marketing surveillance structures, and quality management to be able to say: “okay, we not only got the initial approval, we also got confirmation of that approval”. It is, of course, an additional burden with a lot of paperwork and a lot of questions.
During the first approval we could not sensibly answer all the questions the regulators had, so we deferred some of the answers to post-marketing surveillance. That was quite an intelligent recommendation by the notified body: they agreed that we would not be able to answer everything they were interested in, for example benefit versus risk and other aspects, and answering those questions later became a post-marketing surveillance element. As a result, our process is quite extensive.
2Digital: You are not affiliated with any specific EHR manufacturer. Don’t you feel tempted to partner exclusively with someone?
Prof. Wiendl: In the beginning, we were looking for one big partner, basically assuming this would help us advance further, because in the beginning we were a small company, and we are still relatively small.
As a matter of fact, we think that pursuing our transformative vision would benefit from a very strong partner, so I wouldn’t exclude it for the future, because it clearly might make sense in the long run. For now we are agile and multi-partnering, operating a bit on a “Swiss army knife” principle. And that works quite well, so we consider it an advantage not to be married to one partner, which may even create a fear of missing out among others.
2Digital: You’ve mentioned the API. On one side, you can be a one-stop shop, and smaller practices don’t need to integrate with something as big as Epic. On the other hand, APIs are the biggest attack surface now. Another system that you have to plug in like Lego probably also adds security issues. How do you address that vulnerability?
Prof. Wiendl: Coming back to your initial point: looking short-, mid-, and long-term, I would personally favor, in the mid term, really partnering with a big, powerful player. You mentioned Epic. There are other big partners with a lot of infrastructure behind them, and they can offer a lot of things that we currently have to handle customer by customer. That is indeed laborious.
For the moment, with our partners, the two biggest ones being medatixx and Telekom, big software providers in Germany and beyond: they have their own security systems, so the security aspect, as opposed to the MDR aspect, is on their side. For the integrated solution, we refer that to the partners. For the standalone solution, we have a lot of quality management in terms of cybersecurity.
2Digital: What about hallucination rates? How do you measure them? What’s the acceptable threshold? How are they monitored?
Prof. Wiendl: That was the critical point of our clinical evaluation plan and basically also the endpoint for our approval. We called it a “safety index”. This is one of the most crucial elements, and the safety of our product was the most intense part of the discussions we had with the notified body. What we established is an endpoint in our clinical evaluation plan where the index captures outputs of our system that would be potentially dangerous if acted upon. That includes hallucinations, but not only hallucinations.
We have a safety index of 0.26%, that is, the percentage of potentially dangerous outputs among all outputs. That is an extremely low rate of possible safety signals, and it is what we built our CE certification upon. Since then, we have been relying on feedback systems in addition to our automated benchmarking procedure during development.

If you’re asking about feedback on hallucinations from customers: so far we haven’t really had complaints that the system has seriously hallucinated or given wrong recommendations. This is exactly what we are quite proud of. The downside is that we initially tuned it so that, when it has no answer, it says: “No, I don’t have an answer.” We have basically tuned it towards giving no information rather than returning something that is wrong.
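(Editor’s note: to make the quoted figure concrete, here is a small, purely illustrative Python sketch of a safety index computed as the share of potentially dangerous outputs among all reviewed outputs; the function and numbers are our own assumptions, not Prof. Valmed’s internal tooling.)

```python
# Illustrative sketch only: a "safety index" as the percentage of reviewed
# outputs rated potentially dangerous if acted upon. Names and numbers are
# assumptions for illustration, not internal specifications.

def safety_index(dangerous_outputs: int, total_outputs: int) -> float:
    """Potentially dangerous outputs as a percentage of all reviewed outputs."""
    if total_outputs == 0:
        raise ValueError("no outputs reviewed")
    return 100.0 * dangerous_outputs / total_outputs


# A 0.26% safety index corresponds to roughly 26 flagged answers per 10,000 outputs.
assert round(safety_index(26, 10_000), 2) == 0.26
```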
2Digital: What about that feedback? Is there a system for constant monitoring and adjusting to the situation in real time?
Prof. Wiendl: The productive system has a feedback loop where you can rate the answer from one to four and leave a comment on why the answer is good or not. The reality is that this is not used very often, very similar to how rarely people give feedback in ChatGPT.
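(Editor’s note: as a purely hypothetical illustration of the kind of record such a feedback loop might collect, here is a short Python sketch; the field names and validation are our assumptions, not the product’s actual data model.)

```python
# Hypothetical sketch only: a minimal record for the described 1-4 rating plus
# an optional free-text comment. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AnswerFeedback:
    answer_id: str                 # identifies the system output being rated
    rating: int                    # 1 (poor) to 4 (good)
    comment: Optional[str] = None  # why the answer was good or not

    def __post_init__(self) -> None:
        if not 1 <= self.rating <= 4:
            raise ValueError("rating must be between 1 and 4")


# Example: AnswerFeedback(answer_id="abc-123", rating=4, comment="guideline-concordant")
```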
2Digital: Yeah, I almost never give feedback unless I’m really angry with the response; then I write vigorous complaints. But that almost never happens.
Prof. Wiendl: Exactly. So we have focused projects where we require feedback. This is probably the closest and best-monitored feedback we have. But it is part of the projects that we run as post-marketing projects with specific use cases.
2Digital: What do you think about FDA clearance or expansions to other markets like the UK?
Prof. Wiendl: Now we have EU-wide approval and are expanding into several regions outside the EU – Africa, the Middle East, and Australia. We have staggered our expansion. The US is planned only for 2027, also because we feel the US market is very competitive. Our belief is that being MDR- and GDPR-compliant is perhaps less of an advantage in the US. So we would tackle the US only once we have a really strong local partner with local strength and distribution power.
2Digital: The US market seems like the wild, wild west.
Prof. Wiendl: Totally wild, wild west. Open Evidence is often mentioned as a huge competitor. We believe it’s not a competitor, because they have a completely different business model: they are a medical information research tool. Officially they are not claiming to be decision support, but of course they are used as such. And it’s free. They are not integrated, and their business model is selling the data. This is very different from how we started and how we want to continue. But in the end you are competing against all of them, including traditional publishers that now offer AI-enabled tools on the basis of their publishing portfolios, though also not as CE-certified CDS systems.
2Digital: When OpenAI and Anthropic rolled out patient-facing models dedicated to health-related issues without any kind of certification, it looked quite bold, to say the least. Could Prof. Valmed develop a similar patient-facing AI tool without any certification?
Prof. Wiendl: We have for good reasons started with healthcare professionals as a clinical decision support system.
We have patient-facing projects running because there is a huge unmet need. But we are still struggling with the question of who is ultimately liable if something goes wrong. This is why it is an extremely sensitive area. The big tech companies probably have an army of lawyers already preparing for any issues that may come.
It is very likely that we will do this, since the need is obvious and huge, but again, we would do it with partners that specifically provide patient-oriented platforms. They are obviously solving it with a lot of disclaimers, but I think this is a big gray zone, because in a way you are replacing professional advice with a chatbot. I see this as extremely difficult. I would assume that in Europe it wouldn’t be possible.
2Digital: The responsibility is offloaded onto the end user.
Prof. Wiendl: Many people ask: “Well, isn’t ChatGPT or Gemini doing the same?” or “Is ChatGPT not good enough?” Meanwhile, more and more cases are being reported in which hallucinations and strong recommendations caused real trouble, including suicidal actions. In certain hospital environments its use is simply forbidden; the term for this is “shadow AI”. Our impression is that this is good for liable, explainable AI solutions such as our tool. We feel that many “deciders” want and need exactly this. So we are seeing, rather, a growing acknowledgement of reliable and approved tools.
What is very important for us is to raise awareness of the huge potential, but also of the dangers associated with LLMs. This comes back to the very beginning: we didn’t create the tool just because it is doable. We want to establish a tool that can be reliably integrated into the health system in order to transform it. We believe in what we call a “valmedilization of the world”: decisions should be cross-checked and quality-checked with reliable means, yielding standardized recommendations. That is our ultimate vision for Valmed. It’s also what I would expect as a patient.

