AI adoption is rising, but confidence is not
Artificial intelligence is becoming harder to avoid in everyday life. It is showing up in workplaces, consumer tools, and increasingly in medical settings. But a new survey commissioned by The Ohio State University Wexner Medical Center suggests that wider visibility is not automatically producing wider confidence. According to the source report, public trust in AI in health care is slipping even as use of the technology grows.
That tension captures one of the central problems in the next phase of medical AI. The challenge is no longer only whether health systems can deploy these tools. It is whether patients and the broader public will believe they are being used in ways that are safe, appropriate, and worthy of confidence.
Why declining trust matters more in medicine
Trust carries unusual weight in health care because the context is unusually personal. A person might tolerate algorithmic suggestions in shopping, entertainment, or office software with little emotional investment. Health care is different. The stakes include diagnosis, treatment decisions, privacy, and the basic sense that a clinician is acting in a patient’s best interest.
That is why even a modest decline in confidence can matter. If people become more skeptical of AI in medical care, the effects could spread well beyond public opinion polling. Patients may hesitate to accept AI-supported recommendations, question the legitimacy of digital triage or automated guidance, and grow more wary of how their data is being used. In a sector that depends on consent and credibility, trust is not a side issue. It is part of the operating environment.
The survey finding is also notable because it arrives during a period when AI is often presented as inevitable. Hospitals, startups, and technology firms have been moving quickly to position AI as a tool for efficiency, clinical support, and broader system modernization. But inevitability in deployment does not mean inevitability in public acceptance.
Visibility can increase scrutiny
One reason trust may slip even as adoption grows is that familiarity does not always produce reassurance. Sometimes it produces concern. As AI becomes more visible in doctor’s offices and health systems, the public has more reason to ask difficult questions: What exactly is the tool doing? Who is accountable if it is wrong? Is it supporting a clinician’s judgment or quietly substituting for it?
The supplied report does not provide detailed breakdowns of the survey responses, but its framing is revealing. AI is described as being present in jobs, homes, and medical settings, and the headline conclusion is that public trust in health-care use is declining. That suggests a gap between presence and legitimacy. People may increasingly encounter AI, yet still remain unconvinced that its use in care settings is beneficial or properly controlled.
This is a familiar pattern in technology adoption. Public skepticism often grows precisely when a technology moves from abstract promise into real-world decision-making. In medicine, that transition is especially sensitive because the public expects high standards of evidence, oversight, and human accountability.
The communication problem around medical AI
Health-care organizations may also be facing a communication challenge. AI can be introduced as a technical upgrade, but patients tend to evaluate it in human terms. They want to know whether it changes the quality of care, whether it affects the role of clinicians, and whether it handles sensitive information responsibly.
If those questions are left unanswered, trust can erode even before a patient directly experiences harm. In other words, skepticism does not require a dramatic failure. It can emerge from opacity, overstatement, or the impression that institutions are moving faster than the public was prepared to authorize.
The survey’s framing points to exactly that kind of atmosphere. AI is spreading. People know it. But recognition alone is not producing confidence. That should be a warning to medical systems that have focused heavily on capability and not enough on explainability, governance, and patient-facing clarity.
What the finding signals for hospitals and clinicians
For hospitals and clinicians, the practical lesson is that technical deployment cannot be separated from social acceptance. A tool may improve workflow or offer decision support, but its value is constrained if patients distrust the setting in which it is used. That is especially true when AI touches diagnosis, communication, documentation, or treatment planning.
Clinicians may end up carrying much of the burden of translating these systems for patients. Even when a tool is built or procured elsewhere, the doctor’s office is where people often confront its use most directly. If trust is slipping, front-line professionals may need to spend more time explaining when AI is being used, what role it plays, and where human judgment remains central.
Institutions, meanwhile, may need to recalibrate how they talk about AI altogether. Marketing language about transformation and efficiency can sound disconnected from what patients actually want to hear. In health care, the more credible message may be narrower: what the system does, what it does not do, who remains responsible, and how patient interests are protected.
A pivotal moment for medical AI legitimacy
The survey commissioned by The Ohio State University Wexner Medical Center does not settle the long-term future of AI in health care, but it does identify a fault line. Adoption and trust are not moving in lockstep. The public may be seeing more AI while feeling less certain about its place in medical care.
That matters because the next chapter of medical AI will depend as much on legitimacy as on performance. Tools can be installed quickly. Confidence takes longer. And once lost, it is harder to rebuild than to claim in advance.
If health systems want AI to become part of routine care, they will need to treat trust as something to be earned rather than assumed. The current survey result suggests that this work is becoming more urgent. AI may already be in the room. The harder question is whether the public wants it there under the terms now being offered.
This article is based on reporting by Medical Xpress.