
    Over One Million Users a Week Share Suicidal Thoughts with ChatGPT

    October 28, 2025 · 2 min read

    OpenAI has released data highlighting the emotional dependence many users are developing on ChatGPT. According to the company, approximately 0.15% of weekly active users — which, based on an estimated 800 million weekly users, amounts to about 1.2 million people — engage in conversations showing “clear signs of suicidal planning or intent.”

    At the same time, around 0.07% — roughly 560,000 users per week — may display signs of mania, psychosis, or other acute mental health crises.
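    As a quick sanity check on these figures, the short sketch below multiplies the estimated 800 million weekly active users cited above by the two reported percentages. It is a back-of-envelope calculation only; the underlying user estimate comes from OpenAI and the precise counts are not independently verifiable.

        # Back-of-envelope check of the weekly-user figures quoted above.
        # Assumes the ~800 million weekly active users estimate cited in the article.
        WEEKLY_ACTIVE_USERS = 800_000_000

        reported_shares = {
            "clear signs of suicidal planning or intent": 0.0015,        # 0.15%
            "possible mania, psychosis, or other acute crises": 0.0007,  # 0.07%
        }

        for label, share in reported_shares.items():
            estimated_users = WEEKLY_ACTIVE_USERS * share
            print(f"{label}: ~{estimated_users:,.0f} users per week")

        # Prints roughly:
        #   clear signs of suicidal planning or intent: ~1,200,000 users per week
        #   possible mania, psychosis, or other acute crises: ~560,000 users per week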

    In response, OpenAI emphasized that ChatGPT is not a substitute for professional mental health care. The company stated that it has worked with over 170 clinicians worldwide to improve the model’s behavior, including adding reminders to take breaks and connecting users with crisis helplines when necessary.

    These revelations come amid growing interest in AI tools for mental health — not only in diagnostics but also in user interaction, emotional support, and digital therapy. For instance, studies on large language models (LLMs) have shown they can sometimes outperform traditional methods in predicting suicide risk from crisis hotline transcripts.

    Other research describes automated systems capable of detecting suicidal tendencies from social media posts with up to 93% accuracy. This trend indicates that users increasingly view AI tools as sources of emotional support, opening new opportunities — but also serious risks if used uncritically.

    While the figures are alarming, OpenAI cautioned that the data remain preliminary and difficult to measure precisely — for example, “conversations showing signs” do not necessarily confirm suicidal intent, and the company has not disclosed the specific detection methods it uses.

    Experts warn that turning to a chatbot instead of a person may signal social isolation or a deteriorating mental state. Chatbots, though available around the clock, cannot replace human empathy, professional therapy, or social support. Despite their sophistication, they can still exhibit dark patterns or unintentionally reinforce harmful thought patterns that a skilled psychologist or psychiatrist would recognize immediately.

    Mikolaj Laszkiewicz
