The scale of this phenomenon has already expanded well beyond early tech adopters. A newly published report by the West Health-Gallup Center on Healthcare in America shows that one in four American adults has turned to AI with medical questions. Fortunately, more than half of this group does not treat the algorithms as an oracle meant to replace contact with a specialist. Instead, AI serves as a digital assistant that patients consult before heading to the clinic or right after leaving the doctor’s office.
The collected data paints a detailed picture of the areas where US citizens are most eager to rely on algorithms. While everyday issues like diet and exercise (59%) and checking basic physical symptoms (58%) top the list, patients are increasingly trusting machines with far more complex tasks. Nearly half of the respondents (46%) use chatbots to analyze potential medication side effects, 44% ask them to “translate” and interpret complex information from medical records, and 38% independently verify their official diagnoses. Another notable trend is the search for support during emotional crises – almost one in four users of these digital advisors talks to machines about their mental health.
Americans see tangible psychological benefits in this approach. Nearly half of the respondents using AI for medical purposes admitted that a preliminary chat with a machine at home made them feel more confident during their subsequent consultation with a real doctor, making it easier to ask specific questions. Furthermore, some respondents claim that algorithmic prompts helped them spot developing health issues earlier or avoid unnecessary clinical tests.
However, this market and social enthusiasm is colliding with heavy skepticism from medical safety experts. Placing mass trust in digital “doctors” raises serious red flags, as large language models still exhibit a pronounced tendency to hallucinate.
As a separate industry analysis published around the same time highlighted, popular chatbots continue to confidently churn out misinformation. In a study verifying their effectiveness, roughly half of the generated medical responses proved problematic to some degree, ranging from inaccurate to entirely wrong. Patients today may feel better informed, but a significant portion of the “knowledge” they bring into the doctor’s office might be built on a foundation of software-generated falsehoods.

