High expectations, limited trust

A new Swedish survey points to a paradox at the center of AI adoption in medicine: people want the technology to be better than humans before they are fully willing to trust it. According to a study from the University of Gothenburg, both physicians and members of the public in Sweden expect AI systems used in health care to meet accuracy standards that exceed current human performance, particularly in serious clinical situations.

The result captures a hard truth for medical AI developers and health systems. In many industries, new software can be introduced when it is merely useful or somewhat better on cost or speed. In clinical care, the social threshold is different. People do not just want efficiency. They want a system that makes fewer dangerous mistakes than the professionals it may support or partially replace. At the same time, the survey found that trust in AI remains moderate rather than strong, suggesting that elevated expectations are arriving faster than confidence.

The study was based on a survey sent in spring 2025 to 1,000 randomly selected people in Sweden, split evenly between physicians and members of the general public (500 in each group). The response rate was 45% among physicians and 31% among the public. Participants were asked to assess different health care scenarios and to indicate what rate of missed or misjudged cases they would accept from an AI system compared with current health care performance.

Why the standard rises when AI enters the room

One of the clearest findings was that expectations intensify in high-stakes situations. In scenarios such as chest pain, many members of the public wanted no cases missed at all. Physicians were more willing to accept a narrow margin of error, reflecting their practical understanding that screening and diagnosis always involve tradeoffs between false negatives and false positives.

That difference matters because it highlights a recurring problem in AI deployment debates. Accuracy is not a single number that settles the issue. A system can be tuned to miss fewer serious cases, but doing so may create many more false alarms. That, in turn, can trigger unnecessary testing, strain staff time, and expose patients to additional procedures. As researcher Rasmus Arvidsson noted in the study summary, a system that labels everyone as sick would avoid missing serious disease but would not be useful medicine.
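
To make that tradeoff concrete, here is a minimal sketch in Python, with entirely invented risk scores and labels rather than data from the study. It shows how lowering a model's decision threshold eliminates missed cases (false negatives) only by multiplying false alarms (false positives), and how a threshold of zero reproduces the label-everyone-as-sick extreme that Arvidsson describes:

    # Illustrative only: made-up risk scores, not data from the Gothenburg study.
    def confusion(scores, labels, threshold):
        """Count errors when every case scoring >= threshold is flagged as sick."""
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)   # missed sick cases
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)  # false alarms
        return fn, fp

    labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]                      # 1 = truly sick, 0 = healthy
    scores = [0.9, 0.6, 0.3, 0.7, 0.5, 0.4, 0.3, 0.2, 0.2, 0.1]  # hypothetical model output

    for t in (0.8, 0.5, 0.0):
        fn, fp = confusion(scores, labels, t)
        print(f"threshold={t}: missed sick={fn}, false alarms={fp}")

    # threshold=0.8: missed sick=2, false alarms=0
    # threshold=0.5: missed sick=1, false alarms=2
    # threshold=0.0: missed sick=0, false alarms=7  <- flags everyone as sick

No threshold in the sketch is free of cost: the only way to drive missed cases to zero is to flag every patient, which is exactly the degenerate system the researchers warn against.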

The challenge, then, is not simply to make AI more sensitive. It is to decide what balance of errors is acceptable, for whom, and in which context. The survey suggests that the public and clinicians do not always begin from the same position. Many citizens appear to hold AI to a near-zero-error ideal in serious conditions, while physicians are more accustomed to operating inside clinical uncertainty.

That mismatch is likely to shape adoption. If patients expect near perfection while hospitals procure tools that offer only incremental gains, backlash is predictable. The study therefore supports a more explicit public discussion of tradeoffs rather than marketing AI as if it can eliminate them.

Use is spreading faster than confidence

The survey also found that many respondents were already using AI in some form, but relatively few expressed high trust in it. Among physicians, trust in chat-based AI tools was comparable to trust in the AI systems already used to interpret ECGs. More than seven in ten physicians had tried chat-based tools, yet few were using them for clinical decision-making.

That pattern is revealing. Experimentation is widespread, but professional reliance remains limited. Clinicians are testing the tools, seeing their potential, and perhaps incorporating them informally for background tasks or idea generation, but they are not yet folding them deeply into the decisions that carry direct responsibility for patient outcomes.

Among the general public, about one in ten respondents reported using AI for health advice. That is notable even if trust remains moderate. It suggests that consumer-facing AI is already entering everyday health behavior, well before broad institutional consensus exists about where the technology should sit in formal care pathways.

The combination of moderate trust and meaningful use creates a transitional moment. AI is no longer hypothetical in health care, but it is not yet normalized as a dependable clinical authority either. For policymakers and providers, that middle stage may be the most delicate. People are exposed enough to form expectations, but not confident enough to accept mistakes that would be tolerated from human systems.

What the study does and does not show

  • Both physicians and the public in Sweden want AI in health care to be more accurate than humans.
  • Expectations are especially high in serious scenarios such as chest pain.
  • Trust in AI was moderate, with few respondents reporting high trust.
  • More than seven in ten physicians had tried chat-based AI tools, but few used them in clinical decisions.
  • About one in ten members of the public had used AI for health advice.

The authors note that the response rates are in line with those of similar surveys, but also that they introduce uncertainty about how fully the results represent the broader population. Even so, the survey captures a dynamic likely to extend beyond Sweden. Medical AI is being judged against a standard that is not merely technical. It is social, ethical, and comparative. People are asking whether AI can outperform existing care, not simply whether it can function.

That distinction is likely to define the next phase of health AI. Systems that improve workflow but cannot clearly justify their error profile may struggle to win trust. Systems that can show measurable gains will still need transparent communication about what they miss, what they overcall, and how responsibility is shared between machine and clinician. The Swedish survey suggests the bar is already high. The harder finding for the industry may be that the public and doctors want that bar raised further before they are ready to depend on AI in medicine.

This article is based on reporting by Medical Xpress (medicalxpress.com).