
    Most People Trust Doctors More Than AI — but See Huge Potential for Cancer Detection, New Study Shows

December 8, 2025

    Even though we live in an era dominated by AI, public trust in the technology is still (fortunately) quite limited. The latest study presented at the Society for Risk Analysis annual meeting shows that people prefer to trust doctors over artificial intelligence when it comes to their own health. At the same time, many respondents express hope that AI can help detect and treat cancers—among the most serious health challenges facing society.

    The analysis was conducted by Dr. Michael Sobolev of the Schaeffer Institute for Public Policy & Government Service (University of Southern California) and Dr. Patrycja Sleboda, a psychologist and professor at Baruch College (City University of New York). The researchers examined levels of trust, understanding, excitement, and concern surrounding AI, focusing on one of the technology’s most promising applications: diagnosing cancer from medical images. They also assessed how attitudes toward AI varied by age, gender, and education.

In the first part of the study, participants were asked whether they had heard of AI tools and whether they had used them. The results showed a clear correlation: people with firsthand experience of AI were more open to its medical applications and expressed higher levels of trust. Among respondents, 55.1% had heard of ChatGPT but never used it, while 20.9% had both heard of the tool and used it. Even so, only about 17% of respondents said they trusted AI as much as they trust a doctor when it comes to diagnosing health issues.

    The second survey presented respondents with a realistic scenario based on an actual system in development — an AI tool that analyzes digital images of the cervix to detect precancerous changes (known as automated visual evaluation). Participants rated five acceptance factors — understanding, trust, excitement, fear, and perceived potential — on a scale from 1 to 5. AI’s potential received the highest score, followed by excitement, then trust and understanding. Fear scored the lowest. The results suggest that exposure to a concrete, practical example of diagnostic AI significantly improved participants’ attitudes, reduced apprehension, and increased confidence in the medical use of such tools.

    Demographic analysis also revealed notable differences. Men and people with higher levels of education expressed more openness, trust, and enthusiasm toward the use of AI in healthcare, as well as lower levels of fear. According to the authors, this supports the idea that acceptance of AI is closely tied to technological familiarity and exposure to clear, practical examples of how it works.

    The researchers emphasize that the gap between general skepticism toward AI and the positive reaction to a real-world diagnostic example was surprisingly large. They conclude that education, transparency about how models function, and exposure to practical medical applications could play a key role in increasing patient trust in innovative technologies supporting healthcare in the years to come.

    Mikolaj Laszkiewicz

    An experienced journalist and editor passionate about new technologies, computers, and scientific discoveries. He strives to bring a unique perspective to every topic. A law graduate.
