
    Survey Finds AI Already Supports Teen Mental Health — 13% of Young Americans Turn to Chatbots During Crisis

    November 19, 2025

    In an era of rapidly advancing artificial intelligence and easily accessible chatbots, it’s no surprise that young people increasingly bring their problems to these tools. Unfortunately, the latest research shows that they often do this instead of consulting mental-health professionals — a trend that may pose real risks to their well-being.

    The study, published in JAMA Network Open, was conducted between February and March 2025 on a sample of 1,058 individuals from the RAND and Ipsos panels, making it one of the first representative U.S. studies on youth use of generative AI for mental-health support. The findings suggest that the main reasons for turning to AI are low or zero cost, 24/7 availability, and anonymity — factors that make chatbots a realistic alternative to traditional therapy, especially for groups with limited access to care.

    The study highlights that 13.1% of all young people aged 12–21 use generative AI in the context of mental health — making the phenomenon far more widespread than previously assumed. According to the authors, at this scale AI is becoming one of the most accessible sources of emotional support for this age group, especially given that 92.7% of users consider the responses helpful and more than two-thirds return to chatbots regularly. The 13.1% figure is also significant because it covers both teenagers and young adults, showing that AI use is not marginal but is gradually becoming part of everyday strategies for coping with stress.

    The data reveal clear demographic differences: young adults aged 18–21 are almost four times more likely to use AI for emotional support (adjusted odds ratio, aOR = 3.99) than adolescents aged 12–17. Additionally, Black respondents were significantly less likely to rate AI advice as helpful compared with non-Hispanic white respondents (aOR = 0.15), which may point to gaps in the cultural competence of AI systems.
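    For readers unfamiliar with the statistic, an odds ratio compares the odds of an outcome between two groups. The short Python sketch below shows the underlying arithmetic; the counts in it are hypothetical, chosen only to illustrate the calculation. The study's reported figures are adjusted odds ratios, estimated with a logistic-regression model that controls for other covariates, which this raw calculation omits.

        # Minimal sketch of the arithmetic behind an odds ratio.
        # All counts below are hypothetical, for illustration only;
        # the study's aOR = 3.99 is additionally adjusted for covariates
        # via logistic regression, which a raw 2x2 calculation omits.

        def odds_ratio(a_yes: int, a_no: int, b_yes: int, b_no: int) -> float:
            """Unadjusted odds ratio from a 2x2 contingency table."""
            return (a_yes / a_no) / (b_yes / b_no)

        # Hypothetical split: AI use for emotional support among
        # young adults (18-21) versus adolescents (12-17).
        print(odds_ratio(80, 320, 30, 480))  # 4.0, near the reported aOR of 3.99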

    We also asked Jakub Łaszkiewicz, who holds an MA in psychology, to comment on the findings.

    “While AI chatbots may be helpful for diagnostic or educational purposes, they are not a valid substitute for psychotherapy. Indeed, given the limited availability of psychological services for lower-income groups, AI can successfully help identify a problem, but by its nature it cannot assist in solving one, since it lacks the knowledge and competence of a human specialist. It is worth keeping in mind recent reports of AI-inspired suicides among adolescents. As with other applications, language models could be used in therapy, provided they are designed specifically for this purpose and scientifically proven to be effective,” said Łaszkiewicz.

    A review published in JMIR Mental Health also shows that young people use AI in diverse ways — most commonly for diagnostic purposes (e.g., identifying suicide risk or autism spectrum disorders), but also for symptom tracking, treatment planning, and predicting the progression of mental-health conditions.

    Dr. Lance Eliot of Forbes, author of the JAMA commentary, warns that while AI can indeed provide support, there are still no uniform standards for evaluating the quality of these interactions. It’s unclear what data the systems were trained on, raising concerns about the accuracy and safety of the advice provided.

    In light of these challenges, regulators may need to step in. AI systems used in mental-health contexts should be certified for safety and effectiveness and designed with input from diverse user groups to ensure they align with users’ needs and values. Education is equally essential — both for users, so they understand the limitations and risks of interacting with AI, and for developers, so they build systems with greater transparency, accountability, and cultural awareness.

    Mikolaj Laszkiewicz

    An experienced journalist and editor passionate about new technologies, computers, and scientific discoveries. He strives to bring a unique perspective to every topic. A law graduate.
