Tom Grawey, DO

Sick vs Not Sick as a Diagnostic Test

“But he looked so good!”

While this may be a great phrase to use when discussing a first date, it carries a very different meaning when getting a quality improvement note from your medical director about a patient you cared for on your last shift. Nothing particularly stood out during the transport of Mr. Jones, a Friday night regular who is often found intoxicated in the local park. You’re shocked to hear of the C-spine and humerus fractures that went unnoticed. In reality, he hadn’t had a drink in 48 hours and his “intoxicated” appearance was from a subdural hematoma.

This mistake doesn’t mean that you’re a bad paramedic. You used your gut, applied “sick vs not sick,” and formed the general impression that Mr. Jones was “not sick.” While this is an immensely powerful tool in the clinician’s toolkit, you unknowingly used it incorrectly. This is no fault of your own. Unfortunately, medical education stresses the importance of “sick vs not sick” but fails to teach learners about the fallacies associated with it.

“Sick vs not sick” – A Free Diagnostic Test

“Sick vs not sick” is a diagnostic tool no different than a pulse oximeter. Part of utilizing any of these tools is understanding when the data presented to you cannot be trusted. There are two things that need to be considered with all tests:

1) Tool’s confidence in itself

2) Your confidence in the tool


In the case of a pulse ox, the waveform clues you in to the confidence the device has in the number being produced on the screen. Besides looking at the waveform, when you are interpreting a pulse ox, you need to be knowledgeable of the patient’s current presentation. Consider someone who was just dragged out of a closed-space fire. A pulse ox of 100% could be related to carbon monoxide poisoning and not representative of the blood’s true oxygenation.

To consider “sick vs not sick” as a diagnostic test, we need to better understand how it works.

Your brain has two ways to process information and make every decision, no matter how large or small, that you come across in your day-to-day life. These are called system 1 and system 2 thinking. System 1 thinking is how you make the vast majority of decisions. It’s effortless, good at pattern recognition, and doesn’t utilize much energy. Your daily commute to work is a perfect example: your brain goes on autopilot and, without any focus on where you’re actually going, you show up at work. Clinically, system 1 thinking is what you use to determine “sick vs not sick.” It is why, when you hear the story of a 45-year-old diaphoretic diabetic male with crushing chest pain that radiates to the jaw and shoulder, you reflexively yell out “STEMI!”

While system 1 thinking is crucial to many decisions we make on a daily basis, its speed and lack of effort make it a setup for errors. This is where system 2 thinking comes in. System 2 thinking is slow but deliberate. It is concentration and problem solving. When you’re on the way to work and new construction blocks off the main road, system 2 thinking jumps in to figure out an alternative route. Clinically, system 2 is needed to think through complex case presentations and run resuscitations. Given its decreased chances for error, you may wonder why we don’t just use system 2 thinking all the time. If you’ve ever felt mentally and emotionally exhausted after a long, complicated resuscitation, you realize just how draining this would be.

The key to avoiding errors in calling someone “not sick” is recognizing when your system 1 thinking cannot be trusted. How can we evaluate the quality of data that our system 1 thinking is providing?


“Sick vs not sick” – Tool’s confidence in itself

The “sick vs not sick” equivalent of the pulse ox waveform is being aware of when your mind is fatigued and not making accurate decisions. This occurs when you are out of energy, whether it’s physical, mental, or emotional. Reflect on your mental clarity and decision-making abilities on two identical calls, one an hour into your shift after a cup of coffee and the other on hour 23. When the mind is tired, it doesn’t want to use system 2 thinking and relies heavily on system 1. Subconsciously, your impression of “sick vs not sick” will have a much bigger impact on the way a patient is managed when you’re fatigued. This is a lot of weight being put on a decision that isn’t as accurate as it would be when you’re at peak performance.

While making sure you’re as well rested and fed as possible is important, this is not always realistic. When you can’t fix your fatigue, there are two steps to moving past it:

1) Admit you have a problem

When you’re on a call at 4am, start by admitting that you’re tired and more likely to make a mistake in clinical decision making. This applies not only to how you consider a patient’s illness, but to every decision that needs to be made: How much of an assessment do I need to do? Should I get a 12-lead ECG? Should I interpret the 12-lead or just look at the monitor’s interpretation? The tendency when fatigued is to put too much weight on “not sick” and do less work evaluating and managing the patient based on your general impression.

2) Meet and exceed the standard of care

Force yourself to go through the same, if not extra, steps to compensate for your fatigued mind. What would you do for this patient if it were 9 am? If you are on the fence about getting a 12-lead or putting on a C-collar, you definitely should. Personally, my practice overnight includes more diagnostic tests and more thorough physical examinations to compensate for a fatigued mind.

By acknowledging that your “sick vs not sick” waveform is poor when you’re fatigued, you’re one step closer to avoiding error in your decision making process.


“Sick vs not sick” – Your confidence in the tool


Similar to how a pulse oximeter reading with a great waveform cannot be trusted in the setting of carbon monoxide poisoning, there are certain clinical situations where “sick vs not sick” tends to perform poorly. These are scenarios where the clinician is at risk for cognitive biases.


Errors in system 1 thinking present in the form of cognitive biases. These biases represent situations where we do a poor job of ignoring system 1 and utilizing system 2 thinking. There are hundreds of cognitive biases, but thankfully only a handful regularly apply to clinical decision making, and even fewer are common drivers of error in “sick vs not sick.”

Anchoring Bias

Imagine you’re on the seaward journey toward diagnosis in your altered mental status patient: you arrive on scene and find her lying at the bottom of the staircase. Your initial thought is “oh, she fell down the stairs and has a head injury.” Those stairs are called your anchor. The key to anchoring is that you rely so much on this big, heavy piece of information (the anchor) that you inadvertently explain away other information that is significant and pertinent to reaching the correct diagnosis. The patient doesn’t have any evidence of trauma, but you say, “well, all the trauma must be internal.” The patient has pinpoint pupils, but you say, “well, a large brain bleed could probably cause pinpoint pupils…right?” You see some pill bottles nearby but, since the patient’s mom looks chronically ill, you assume they aren’t contributing to this presentation because this person is lying by the stairs. Like a ship’s anchor, the staircase prevents you from being pushed toward the correct diagnosis of overdose.

So what are some other common anchors in EMS?

-Fall from standing height -> This patient can’t have a C-spine fracture

-4-lead ECG looks normal -> We don’t need a 12-lead

-Patient is young -> No need to consider life-threatening causes of the symptoms

In “sick vs not sick” the problem is anchoring on “not sick,” causing you to write off other findings, like vital sign abnormalities, that point to something serious. If you’re reflecting on a case and catch yourself saying “but he looked so good!” or find the phrase “vitals were normal” in the narrative despite them being abnormal, chances are you’ve experienced anchoring. In order to do better, recognize the tendency to anchor on “not sick.” Patients who don’t look sick still need a full assessment and consideration of all the information presented to you.

Posterior Probability Bias

Step outside the real world and pretend that there is a global pandemic causing a significant uptick in the number of EMS calls for shortness of breath. 50% of the runs you’ve been on this week have been for breathing difficulties, and the past 6 patients you saw with this complaint were all suffering from COVID-19 infections by your assessment. No matter what the next shortness of breath patient looks like, you’re more likely to come up with a field diagnosis of “COVID-19” based on your experiences with those past 6 patients. Posterior probability bias says that you’re more likely to diagnose something in your recent memory than pull a new disease from the deeper recesses of your brain.

While the global pandemic serves as an easy example, the potential for posterior probability bias to cause us to miss a sick patient among a slew of not-sick individuals with similar complaints exists all around us:


-Lift assists/falls

-Intoxicated individuals

-Hypoglycemic emergencies

These chief complaints typically present patients with non-life-threatening illness and injury that are easily managed. Posterior probability bias clouds your judgment and makes it hard to find critical illness among many non-critical patients. If you’re reviewing a case and the dispatch information or chief complaint is something common and typically not life threatening, you may have experienced posterior probability bias. To avoid this bias, acknowledge that each patient is an independent event requiring their own individual assessment and diagnosis. Realize that patients who fall into this category often look “not sick” and your tendency will be to under-assess. Take a few minutes to gather a more thorough history and physical to assure yourself you aren’t missing anything.

Premature Closure Bias

Premature closure means coming up with a diagnosis before gathering enough information. It is similar to, and sometimes confused with, anchoring. In anchoring, one piece of information is held in such high regard that other information is explained away. In the case from earlier, the pill bottles and pinpoint pupils were identified but interpreted by the clinician as unimportant due to the anchor of the staircase. In premature closure, since the diagnosis is already made, information gathering stops completely. Had this been premature closure, once the diagnosis of “fall down the stairs” was established, the pupils wouldn’t have been checked and the pill bottles wouldn’t have been identified because no one would have looked for them.

Posterior probability and anchoring can fuel premature closure. Some situations are commonly implicated in this bias:

-Alcohol intoxication

-Drug seeking behavior

-Anxiety attacks, mental health emergencies

-Acute exacerbations of chronic disease (CHF/COPD exacerbation as an etiology of dyspnea in patients who carry this chronic diagnosis)

While everyone has different documentation styles, a short narrative or a poorly documented assessment is a typical finding in premature closure: there wasn’t much information known, so there wasn’t much to write. To avoid this error with “sick vs not sick,” remember that this phrase is a tool and not a substitute for a field diagnosis. Patients who look “not sick” still require a full history and assessment to determine the etiology of their symptoms.

How can we do better?

If you look at how these biases relate to “sick vs not sick,” it’s easy to identify common situations where EMS is at risk for making a mistake:

1) Common complaints that typically don’t produce critical patients

a. Falls, young patients, simple hypoglycemic episodes

2) Situations where the history/assessment may be unreliable

a. Altered mental status, intoxication

3) Mental fatigue

a. Late into a shift, the call after a complicated resuscitation

These biases and the change in your decision-making capabilities when tired are inescapable. No amount of studying, sleep, or preparation can prevent you from being susceptible to these errors. Accept that you will experience these problems, recognize when you’re at risk, and modify your behavior to decrease the odds of error. Personally, I know when I’m tired at 4am I need to be more deliberate in forcing myself to use system 2 thinking. I consciously become overly thorough in my assessments and perform more diagnostic tests to compensate for this risk of error.

Bottom line: “Sick vs not sick” is nothing more than a diagnostic tool

While I utilize “sick vs not sick” on a daily basis, I firmly believe that EMS training puts too much emphasis on this piece of the patient assessment. Our current method of instruction, where “sick vs not sick” serves as a summary statement of the primary survey, oversimplifies illness and sets the stage for anchoring and premature closure. While this may help EMTs with limited treatment capabilities remember to get off scene quickly, it inadvertently puts all “not sick” patients on equal footing. In reality, illness exists on a spectrum of severity, and teaching providers to dichotomize patients can inappropriately influence decisions made throughout the encounter.



Thinking of “sick vs not sick” as a diagnostic test highlights that, like the other tools in your toolbox, it has limitations and flaws. It is merely one piece of information, just like a heart rate, blood sugar, or pulse oximeter reading, that helps you determine a diagnosis. Utilize “sick vs not sick” to aid in your diagnosis of the patient, not to dictate the level of care you provide.


References:

1. Croskerry P. Becoming Less Wrong (and More Rational) in Clinical Decisionmaking. Annals of Emergency Medicine. 2020;75(2):218-220. doi:10.1016/j.annemergmed.2019.09.002

2. Hartigan S, Brooks M, Hartley S, Miller R, Santen S, Hemphill R. Review of the Basics of Cognitive Error in Emergency Medicine: Still No Easy Answers. Western Journal of Emergency Medicine. 2020;21(6). doi:10.5811/westjem.2020.7.47832

3. Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux; 2011.

4. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking. Academic Medicine. 2017;92(1):23-30. doi:10.1097/ACM.0000000000001421


