In an era of rapidly advancing artificial intelligence and easily accessible chatbots, it’s no surprise that young people increasingly bring their problems to these tools. Unfortunately, the latest research shows that they often do this instead of consulting mental-health professionals — a trend that may pose real risks to their well-being.
The study, published in JAMA Network Open, was conducted between February and March 2025 on a sample of 1,058 individuals from the RAND and Ipsos panels, making it one of the first representative U.S. studies on youth use of generative AI for mental-health support. The findings suggest that the main reasons for turning to AI are low or zero cost, 24/7 availability, and anonymity — factors that make chatbots a realistic alternative to traditional therapy, especially for groups with limited access to care.
The study highlights that 13.1% of all young people aged 12–21 use generative AI in the context of mental health — making the phenomenon far more widespread than previously assumed. According to the authors, at this scale, AI is becoming one of the most accessible sources of emotional support for this age group, especially given that 92.7% of users consider the responses helpful, and more than two-thirds return to chatbots regularly. The 13.1% figure is also significant because it includes both teenagers and young adults, showing that the use of AI is not marginal but is gradually becoming part of everyday coping strategies for dealing with stress.
The data reveal clear demographic differences: young adults aged 18–21 had nearly four times the odds of using AI for emotional support (adjusted odds ratio, aOR = 3.99) compared with adolescents aged 12–17. Additionally, Black respondents were significantly less likely to rate AI advice as helpful than non-Hispanic white respondents (aOR = 0.15), which may point to gaps in the cultural competence of AI systems.
We also asked Jakub Łaszkiewicz, who holds a master's degree in psychology, to comment on the findings.
“While AI chatbots might be helpful for diagnostic or educational purposes, they are not a valid substitute for psychotherapy. Given the limited availability of psychological services for lower-income groups, AI can help identify a problem, but by its nature it cannot help solve one, because it lacks the knowledge and competence of a human specialist. It is also worth keeping in mind recent reports of AI-inspired suicides among adolescents. As with other applications, language models could be used in therapy, provided they are designed specifically for that purpose and their effectiveness is scientifically validated,” said Łaszkiewicz.
A review published in JMIR Mental Health also shows that young people use AI in diverse ways — most commonly for diagnostic purposes (e.g., identifying suicide risk or autism spectrum disorders), but also for symptom tracking, treatment planning, and predicting the progression of mental-health conditions.
Dr. Lance Eliot of Forbes, author of the JAMA commentary, warns that while AI can indeed provide support, there are still no uniform standards for evaluating the quality of these interactions. It’s unclear what data the systems were trained on, raising concerns about the accuracy and safety of the advice provided.
In light of these challenges, regulators may need to step in. AI systems used in mental-health contexts should be certified for safety and effectiveness and designed with input from diverse user groups to ensure they align with users’ needs and values. Education is equally essential — both for users, so they understand the limitations and risks of interacting with AI, and for developers, so they build systems with greater transparency, accountability, and cultural awareness.

