We’ve already reported that young Americans are increasingly seeking mental-health support from chatbots. This is not an outlier but a growing trend — similar findings are now emerging from the UK, where a study by the Youth Endowment Fund (YEF) shows that AI has become a real, and concerning, outlet for young internet users discussing personal problems.
According to the study, many teenagers describe chatbots as a “neutral listener” — one that doesn’t judge, shame, or react emotionally, and is available at any hour. This matters at a time when young people face rising academic pressure, relationship difficulties, increased stress, and a sense of isolation. For some, AI is becoming the first place they turn with their worries — before talking to family or friends.
Experts cited by The Guardian note that chatbots can help fill an important systemic gap. In many countries, waiting times for child psychologists or psychiatrists stretch for months, while private therapy is prohibitively expensive. AI won’t solve the mental-health crisis, but it can ease the burden by offering basic emotional support and helping users articulate their feelings. Researchers emphasize, however, that such tools must operate within clearly defined limits — they can support, but not diagnose or replace professional intervention.
At the same time, specialists warn that AI may offer guidance that is too general, imprecise, or poorly matched to an individual’s situation. In the most sensitive cases — such as depressive episodes or suicidal thoughts — chatbots may fail to detect warning signs. Privacy is another concern: young users often don’t know what data is stored or how it may be used. Experts therefore call for clear labeling of AI systems, better digital-literacy education, and transparent operational rules.
There is also the question of “emotional attachment” to chatbots. Some teens say they talk to AI daily, treating the system almost like a confidant. Psychologists note that while this may offer short-term relief, it should not replace social relationships — and relying too heavily on algorithmic support may deepen isolation if it’s not balanced with real human contact.
The rising popularity of AI-based support among teenagers shows that mental-health care systems must adapt to this new reality. Policymakers and clinicians need to determine how chatbots can be integrated in a way that leverages their benefits while minimizing risks. Banning them from teens’ lives outright is unrealistic — and could even worsen mental-health outcomes.

