The clinical signal that often goes unmeasured
Hospital nurses spend entire shifts moving between medication rounds, vital-sign checks, documentation, family conversations, and constant small judgments about whether a patient looks stable or not. In that environment, one of the most important forms of expertise can also be one of the hardest to document: the sense that something is off before standard metrics clearly show it.
Kelly Gleason, an associate professor at the Johns Hopkins School of Nursing, has built research around that exact problem. Nurses, she argues, are trained to read people as well as monitors. They notice changes in appearance, alertness, behavior, or overall presentation that may not immediately trigger an alarm in a conventional hospital early-warning system. Yet without an objective way to communicate those concerns, a hunch can remain just that, even when it later proves accurate.
The result is a recurring and difficult scenario in acute care. A nurse feels uneasy about a patient, but blood pressure, heart rate, and other standard indicators appear normal. Pulling a physician away from rounds without more concrete evidence can be hard to justify, and busy workflows leave little time to interrogate instinct in a structured way. Sometimes the next shift reveals that the patient has deteriorated and been transferred to intensive care.
Adding nursing judgment to machine learning systems
Gleason’s approach is not to replace existing hospital alerts, but to augment them. Hospitals already use early warning systems that process patient data across multiple shifts and generate risk scores. If a score crosses a threshold, the care team receives an alert. Increasingly, these systems use machine learning to improve predictions about which patients may be at risk of decline.
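The details of any given hospital's scoring system are proprietary, but the basic pattern described here, a model turning documented data into a risk score that triggers an alert once it crosses a cutoff, can be sketched in a few lines. The model, features, and threshold below are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration of threshold-based early-warning alerting.
# Real hospital systems differ; the scoring rules and cutoff here are
# placeholders standing in for a trained machine learning model.

ALERT_THRESHOLD = 0.7  # assumed cutoff on a 0-1 risk scale

@dataclass
class VitalsSnapshot:
    heart_rate: float
    systolic_bp: float
    resp_rate: float
    spo2: float

def risk_score(vitals: VitalsSnapshot) -> float:
    """Stand-in for a fitted model that outputs deterioration risk."""
    score = 0.0
    score += 0.3 if vitals.heart_rate > 110 else 0.0
    score += 0.3 if vitals.systolic_bp < 90 else 0.0
    score += 0.2 if vitals.resp_rate > 24 else 0.0
    score += 0.2 if vitals.spo2 < 92 else 0.0
    return min(score, 1.0)

def should_alert(vitals: VitalsSnapshot) -> bool:
    """Alert the care team when the score crosses the threshold."""
    return risk_score(vitals) >= ALERT_THRESHOLD
```

In a production system the scoring function would be a trained model evaluated repeatedly across shifts, not a handful of hard-coded rules; the sketch only shows where the threshold-and-alert logic sits.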
Those systems perform a useful safety-net role. They track a patient over time, maintain continuity across shift changes, and help clinicians avoid missing patterns in a busy ward. But they are still built primarily on documented data inputs, especially vital signs and other measurable factors. The challenge is that bedside nurses often detect concerning patterns before they can be cleanly reduced to numbers.
The Johns Hopkins work aims to bridge that gap by finding a way to quantify and incorporate those bedside observations into AI-supported warning systems. The idea is not mystical intuition translated directly into software. It is the structured capture of subtle clinical observations that experienced nurses repeatedly make and that may correlate with deterioration even when standard measures have not yet crossed a threshold.
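The reporting does not say how such observations would be encoded. One plausible way to picture "structured capture", with entirely hypothetical field names and scales rather than the Johns Hopkins team's actual instrument, is a small concern record that can be flattened into features a risk model could consume alongside vitals.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of turning a bedside impression into structured data.
# Field names, the 0-3 scale, and the observation labels are assumptions
# for illustration only.

@dataclass
class NurseConcern:
    patient_id: str
    recorded_at: datetime
    concern_level: int  # e.g. 0 = none ... 3 = strongly concerned
    observations: list[str] = field(default_factory=list)  # "new confusion", "pallor", ...

def concern_features(concern: NurseConcern) -> dict[str, float]:
    """Flatten a concern record into numeric inputs a risk model could use."""
    return {
        "nurse_concern_level": float(concern.concern_level),
        "nurse_noted_mental_status_change": float("new confusion" in concern.observations),
        "nurse_noted_appearance_change": float("pallor" in concern.observations),
    }
```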
Why this matters for patient safety
The value proposition is straightforward: if a patient’s decline can be recognized two hours earlier, outcomes may improve dramatically. Gleason describes cases where identifying deterioration sooner could have saved a life or preserved quality of life. For nurses, those moments can linger because the concern was real but not actionable enough in the moment to force escalation.
That is why this line of work matters beyond workflow optimization. It addresses a well-known blind spot in modern clinical systems: medicine measures what it can count, but not everything important is easily counted at first glance. Nurses are often the clinicians with the most continuous bedside exposure, which gives them access to signals that are both rich and hard to standardize.
If AI tools can help convert those observations into meaningful risk indicators, hospitals could gain an earlier and more nuanced layer of warning. That does not mean the machine becomes the authority over the nurse. If anything, it means the software becomes better at listening to frontline expertise rather than flattening care into a set of routine inputs.
The operational challenge
Translating this concept into usable practice is not simple. Hospitals already struggle with alert fatigue, documentation burden, and workflow overload. Any new system that asks nurses for more input has to prove that the extra effort produces real clinical value. A model that merely adds another checkbox or a stream of noisy warnings would fail the bedside test quickly.
That is what makes the framing of the Johns Hopkins effort important. The aim is to augment existing AI in early warning systems, not layer on a separate, disconnected tool. In practical terms, the best version of this approach would help nurses express concern efficiently, connect those concerns to broader patient data, and elevate cases where bedside impression and system-level patterns align.
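How that alignment might be operationalized is not specified in the reporting. As a rough sketch under the same hypothetical assumptions used above, escalation could weigh the model's risk score and a nurse's documented concern together rather than treating either alone as decisive; the weights and cutoffs below are invented for illustration.

```python
# Hypothetical escalation logic: elevate cases where a documented nurse
# concern and the system's risk score point in the same direction.
# The particular numbers are illustrative assumptions only.

def triage_priority(model_risk: float, nurse_concern_level: int) -> str:
    """Combine an ML risk score (0-1) with a 0-3 nurse concern rating."""
    if model_risk >= 0.7 or nurse_concern_level == 3:
        return "escalate now"            # either signal alone is strong
    if model_risk >= 0.4 and nurse_concern_level >= 2:
        return "escalate now"            # moderate signals that agree
    if model_risk >= 0.4 or nurse_concern_level >= 2:
        return "increase monitoring"     # one moderate signal on its own
    return "routine monitoring"

# Example: vitals look borderline but the bedside nurse is worried.
print(triage_priority(model_risk=0.45, nurse_concern_level=2))  # -> "escalate now"
```

The point of a rule like this is not the specific numbers but the shape of the logic: a moderate bedside concern can raise the priority of a case that the score alone would not flag.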
The underlying reporting does not detail a finalized product or deployment outcome, so this remains a development story rather than a clinical adoption story. But it identifies a serious design question for health AI: can software capture the observational intelligence of frontline staff without reducing it to something so crude that its value disappears?
A more realistic vision of AI in hospitals
The strongest aspect of this research direction is that it treats AI as support infrastructure, not as a substitute for human care. In public discussion, health AI is often framed around automation or replacement. This project points somewhere more grounded. It starts from the premise that nurses already possess important predictive insight and asks how digital systems can make that insight legible sooner and more consistently.
That is a more credible path for clinical AI, especially in high-pressure environments where trust depends on whether a tool reflects the lived realities of care. Nurses do not need software that tells them to ignore what they see. They need systems that can strengthen their ability to escalate concerns before measurable instability becomes obvious.
If the work succeeds, its contribution may be as cultural as technical. It would formalize the idea that frontline judgment is not soft data sitting outside the hospital’s analytic machinery. It is part of the signal. And in a setting where minutes matter, turning that signal into earlier action could be one of the most valuable uses of AI at the bedside.
Key points
- Johns Hopkins researchers are exploring how to incorporate nurses’ bedside observations into AI early-warning systems.
- The goal is to detect patient deterioration earlier, even when standard vital signs still appear normal.
- Existing hospital warning tools already use machine learning, but they depend mainly on documented objective data.
- The work frames AI as a way to amplify frontline nursing expertise rather than replace it.
This article is based on reporting by Medical Xpress, originally published on medicalxpress.com.