Artificial intelligence is increasingly part of everyday conversations about mental health — from casual sleep advice to disclosures of suicidal thoughts. What drives this? A lack of accessible care, stigma, and the temptation of anonymous, free consultations. But despite this growing role, LLMs (large language models) are far from being reliable therapy tools: they make mistakes, lose context, miss non-verbal cues, and often unintentionally reinforce distorted beliefs. There’s limited research on their performance in crisis scenarios, and regulators and developers continue to debate where the red line should be drawn.
We spoke with psychiatrist Uladzimir Pikirenia about whether AI can be trusted with psychological and psychiatric care, where it might be helpful, and where it poses risks—especially in conversations involving suicide. We also discussed what technical and legal safeguards might realistically emerge in the coming years.

2Digital: According to OpenAI, more than 1 million users per week discuss suicidal thoughts with ChatGPT — around 0.15% of the active user base. What does this tell us about unmet mental health needs? Where should AI’s role begin and end in these conversations?
Uladzimir: Based on the available data, language models like ChatGPT still have significant shortcomings that prevent them from being viewed as reliable tools for helping people. In fact, in many situations their advice may do the opposite and lead to negative outcomes.
Despite the growing capabilities of LLMs, I believe most experienced users know that after a prolonged dialogue, AI tends to start “bugging out,” repeating the same ideas and eventually producing a meaningless jumble of words that only mimics coherent speech.
It’s true that therapists and psychiatrists can also lose track of illness context, forget details, or miss relevant circumstances. But a human in that position will ask the patient for clarification — not hallucinate symptoms, arguments, or facts.
It’s also worth recalling the meta-analysis published in World Psychiatry, which compared various chatbot models and examined how safe and effective they are. For language models specifically, the evidence of their medical or near-medical usefulness to people in crisis fell notably short.
That said, we should be objective: despite the drawbacks listed above, in certain circumstances and under certain conditions, advice from AI may be better than nothing. We can reasonably expect that the quality of such consultations will improve year by year. But for now, we are where we are.
2Digital: Could LLMs serve as assistants to psychologists or psychiatrists? Where are they actually helpful?
Uladzimir: Language models offer a major advantage in terms of accessibility and cost, which gives them enormous potential.
In low- and middle-income countries, many people simply have no access to professional medical care. In theory, models like these could fill that gap. According to a comprehensive Lancet Commission estimate, about 75% of people with mental disorders cannot access the treatment they need.
By the “something is better than nothing” logic, well-trained chatbots (i.e., further developed models, not current versions) could help close this accessibility gap in both general medical care and specifically psychiatric or psychotherapeutic care.
We’re speaking about the future here — because currently, it’s hard to evaluate whether these models are capable of providing quality care, or at least adequate support. Most of our current conclusions are based on impressions rather than robust research. And medicine, as a field, is grounded in concrete data and knowledge.
But when it comes to understanding which models have been tested, under what conditions, how safe they are, how well they detect delusions or suicidal thoughts, and how they handle them — we know virtually nothing.
2Digital: Talking to an AI about suicidal thoughts sounds especially risky. People are entrusting their lives to a black-box system…
Uladzimir: Indeed, this is a particularly complex issue. But with significant caveats, we can imagine a future where AI does play a role in such conversations.
Why? Because stigma around mental illness and suicidal ideation persists — even in developed countries. This means people will continue to seek some form of anonymous help and turn to LLMs again and again.
So we’ve already moved past the question of whether chatbots should comment on these topics. The real task now is figuring out how to make sure they can handle them effectively — because people with suicidal thoughts will continue to engage with them regardless.
Recall OpenAI’s own data: about 0.15% of weekly active users engage in conversations showing “clear signs of suicidal plans or intent.” Meanwhile, epidemiological studies suggest around 3% of people experience suicidal ideation in a given year and about 8.5% over a lifetime.
Given the rapidly growing user base of chatbots, it’s reasonable to expect a multiple-fold increase in such interactions in the near future.
2Digital: What do you think about the broader idea of “robotizing” psychological care?
Uladzimir: There are different viewpoints on this. On one hand, data on how people with suicidal thoughts behave has long been collected by medical science and structured into a vast, well-organized body of knowledge. We have a clear understanding of which words, in response to a request for help, reduce suicide risk, and which increase it. There are specialized questionnaires for identifying various mental states. All of this data can be used to train language models.
For all the seeming uniqueness of every person, the human psyche is structured quite similarly from one individual to the next. Most cases can be described using existing experience, which means that, in theory, language models can be trained on behavioral scenarios.
On the other hand, there are issues that remain extremely difficult to resolve: some things language models simply cannot perceive physically. For example, nonverbal reactions — a person might say one thing but behave completely differently. And that, in fact, is not uncommon.
Also, suicidal thoughts described in the same words may mean completely different things depending on a person’s comorbid conditions. A specialist might suspect those conditions just by seeing how someone walks into the office. A chatbot, however, has no way of knowing this, and lacking the full picture, it may construct hypotheses and give advice that leads in the “wrong direction.”
Some of this, at least where nonverbal cues are concerned, might be compensated for in the future, once chatbots gain access to video and audio, can process them in real time, and integrate them into context. But as far as I understand, such functionality will remain largely unavailable in the near term.
2Digital: In interactions with users, AI always comes across as supportive, more inclined to agree than to argue. To what extent does this approach distort psychoeducation and behavioral guidance, and can it reinforce the pathological dependency on chatbots that’s increasingly being discussed?
Uladzimir: I’d put it this way: the dependency here is probably similar to other behavioral addictions, and it most likely develops in people who already have certain predispositions. There is indeed a “magnifying lens” problem: a dynamic of exclusively positive reinforcement, in which language models, by their very architecture, are not designed to contradict, argue, or give negative feedback to the user. This can strengthen a person’s distorted perceptions of the world around them.
For example, someone might say to a chatbot: “People around me don’t understand me, no one cares, and most likely they’re plotting something against me.” And it’s highly unlikely that current AI would try to assess whether such statements match reality — simply because, at this point, there’s no way to objectively do that. As a result, the chatbot may end up reinforcing the user’s negative thoughts and, in doing so, contribute to their isolation and increase their suspicion toward others.
The problem here is that people with mental health difficulties often have a distorted ability to interpret signals coming from other people and from their environment, and a chatbot has no means of evaluating how truthful or adequate that interpretation is. Incidentally, this is precisely where the role of the psychotherapist is often critical: to act as a tuning fork, helping to restore the person’s response to the outside world and bring it back into harmony.
2Digital: It’s a dangerous combination: on one hand, mental illnesses are still stigmatized, which means people will increasingly turn to chatbots in search of anonymous help. On the other hand, chatbots still lack the tools to provide adequate feedback. And this snowball keeps growing day by day.
Uladzimir: Yes, that’s more or less how it looks. On the positive side, I’d point out that we’re having this conversation right now, and that this topic is coming up more and more often, both among regulatory bodies and among the developers themselves.
As an example of a response, we can recall OpenAI’s announcement that ChatGPT had stopped giving personalized medical advice. But I think that’s a weak form of protection, and there’s a good chance it will soon be rolled back in one form or another, because people will keep turning to chatbots with these kinds of issues anyway. As we said earlier, this process can no longer be stopped.
2Digital: Well said. Right now, we’re seeing how LLM developers are trying to solve the problem with minimal effort: introducing restrictions or inserting “non-medical use” disclaimers into user agreements. But that’s not a real solution…
Uladzimir: Of course. It’s about as effective as trying to forbid people from getting upset or falling ill. I believe that public and regulatory pressure on companies regarding this issue will continue to grow, and eventually, it will lead to more serious solutions.
What kind of solutions? It’s hard to say. For example, they might try introducing specific trigger words or phrases that would automatically switch the user from a general-purpose LLM to a specialized model trained to handle complex mental health conditions. I assume that, technically, this could be implemented quite soon. The bigger question is how quickly such innovations will be rolled out, and how many losses will occur in the meantime.
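To make the routing idea concrete, here is a minimal sketch in Python, assuming a purely hypothetical keyword screen: the phrase list, the model names, and the route_message helper are illustrative placeholders, not any vendor’s actual API, and a real system would rely on a trained classifier reviewed by clinicians rather than a hand-written word list.

```python
import re

# Hypothetical trigger phrases; a deployed system would use a trained
# classifier with clinical review, not a hand-written list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bdon'?t want to live\b",
]

GENERAL_MODEL = "general-purpose-llm"  # placeholder identifiers,
CRISIS_MODEL = "crisis-support-llm"    # not real model names

def route_message(user_message: str) -> str:
    """Return the model that should handle this message.

    Any hit on a crisis pattern switches the conversation to the
    specialized model; everything else stays on the general one.
    """
    text = user_message.lower()
    if any(re.search(pattern, text) for pattern in CRISIS_PATTERNS):
        return CRISIS_MODEL
    return GENERAL_MODEL

if __name__ == "__main__":
    print(route_message("I can't sleep lately"))          # general-purpose-llm
    print(route_message("I don't want to live anymore"))  # crisis-support-llm
```

Even in this toy form, the design choice Pikirenia describes is visible: the general-purpose model never has to handle the crisis conversation itself, it only has to hand it off.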
Any company first tries to simply minimize its risks. But once it becomes clear that this isn’t enough, it starts looking for more radical ways to solve the problem. And considering that competition between LLMs is now intensifying, this too might become a driver of innovation, and even sales.
2Digital: In your opinion, over the next five years, what are the most realistic models for AI use in psychology and psychiatry?
Uladzimir: I think development will move forward in several directions at once. The first is the use of language models and chatbots as personal secretaries and assistants to psychotherapists, psychiatrists — and physicians in general. We’re already seeing significant developments in this area.
The second direction is direct interaction with patients, where AI will learn to better recognize the presence of mental or psychological issues, suicidal thoughts, and other conditions that pose risks to a person’s health or the safety of others.
Gradually, these two directions will begin to merge. We’ll see contracts between national healthcare systems and the companies that run chatbots, so that, for example, when a user shows signs of severe conditions, including psychiatric ones, the chatbot can forward this data to emergency services or social support agencies. Naturally, this should happen ONLY with the prior consent of the individual — similar to how Apple Watch can call emergency services if someone is in an accident.
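As a rough illustration of the consent-first escalation he describes, the sketch below assumes a hypothetical user profile with an opt-in flag recorded in advance; the field names and the maybe_escalate helper are invented for illustration and do not correspond to any existing healthcare integration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    user_id: str
    emergency_sharing_consent: bool       # opt-in recorded in advance
    emergency_contact: Optional[str] = None

def maybe_escalate(profile: UserProfile, severe_risk_detected: bool) -> str:
    """Decide what the chatbot may do when a severe-risk signal is detected.

    Data leaves the conversation only if the user opted in beforehand;
    otherwise the system can only offer hotline and self-help information.
    """
    if not severe_risk_detected:
        return "continue_conversation"
    if profile.emergency_sharing_consent:
        # In a real integration this step would notify an emergency or
        # social support service under a data-sharing agreement.
        return f"notify_services(user={profile.user_id})"
    return "offer_hotline_information_only"

if __name__ == "__main__":
    opted_in = UserProfile("u1", emergency_sharing_consent=True)
    opted_out = UserProfile("u2", emergency_sharing_consent=False)
    print(maybe_escalate(opted_in, severe_risk_detected=True))   # notify_services(user=u1)
    print(maybe_escalate(opted_out, severe_risk_detected=True))  # offer_hotline_information_only
```

The point of the sketch is the ordering: the consent check sits in front of any data transfer, mirroring the Apple Watch analogy above.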
2Digital: There are already services claiming to be training AI in, let’s say, “medical behavior.” Could such services set the trend and successfully compete with market leaders?
Uladzimir: I think that sooner or later, these kinds of apps will end up merging with bigger players. For example, a dedicated medical module could be developed within OpenAI itself. It would be a very logical acquisition. And in this case, whoever manages to sell themselves faster will likely be the one who wins.

