I’ve been attending clinical toxicology conferences since NACCT 1999 in La Jolla, California. It’s impossible to attend one of these meetings without hearing a speaker (or, after a lecture, someone at the microphone with his own little speech cleverly disguised as a question) utter the words “There’s no evidence …”, usually in relation to a treatment we might or might not deploy for a poisoned patient. (I say “his” because this person is almost always a man. And he is almost always wearing Dockers®).
It’s an unfortunate fact that most of what we do for patients in toxicology is not based on great evidence. Often, we deploy an intervention because we think it might help the patient, not because we know it will. This is true, for example, every time you give activated charcoal (insert gratuitous self-citation here1) or dialyze a patient with salicylate poisoning (and another2). But there’s a difference between not having strong evidence and having no evidence at all. As a card-carrying member of the Clinical Epidemiology Party of Canada (not a real thing), let me say this as clearly as possible: “There’s no evidence” is the laziest phrase in clinical toxicology, maybe even in all of medicine.
There’s almost always some evidence.
Randomized Controlled Trials
Sometimes, “no evidence” is code for “no randomized controlled trials.” This is because we’ve all had an evidence pyramid drilled into our heads, with RCTs and systematic reviews at the top.
RCTs are at the top for a reason. They’re the best way of determining whether an intervention can work, at least in the quasi-ideal fantasy world of RCTLand. In their simplest construction, when two groups of patients are more or less equal at baseline and one receives an intervention while the other does not, any differences in outcome (good or bad) can be reasonably attributed to the intervention.
But RCTs aren’t without problems. Lots of them. To list a few:
- They can be expensive
- They can take years to complete
- They’re challenging for rare conditions, or for treatment effects with a long latency
- Their results might not be generalizable to the patient on the stretcher in front of you
- They might be unethical (“Does smoking really cause lung cancer?” Try getting that RCT past your local ethics committee.)
- They might literally be impossible
Sometimes, we have to rely on lower levels of evidence. This is a problem if you’re one of those people who views RCTs as the only evidentiary currency of merit. You’ll sometimes hear it said, even by smart people, that RCTs are the only way to be certain of cause-and-effect. That claim is simply not true.
Observational Studies
As the name suggests, these are studies in which you simply observe what happens (or happened) to patients, usually according to how they were treated. And because the intervention isn’t under the investigator’s control, there’s lots of room for bias and confounding. There are entire books devoted to observational studies (cohort, case-control, ecological, and self-matched designs), and the main point I’ll make now is that they’re sometimes lazily dismissed precisely because they aren’t RCTs. The implication is that they must, therefore, be hobbled by one or more biases: selection bias (resulting from how subjects are selected and retained), information bias (resulting from inaccurate assessment of exposures and/or outcomes), and confounding (which arises when another factor distorts the association between an exposure and an outcome).
Don’t like an observational study’s findings? “Association isn’t causation.” Wow, that was easy. And to be fair, the line works because it’s generally true. It’s very, very easy to conduct an observational study poorly and generate misleading or even harmful conclusions. But if you choose your study question carefully and you’re cognizant of the fact that patients in Group A might not be quite the same as those in Group B, it’s sometimes possible to design and analyze the study in such a way as to overcome the lack of randomization. Occasionally, the result is new knowledge that could never be the product of a randomized trial. Like “smoking causes lung cancer”, for example.
Case Reports
Case reports live near the bottom of the barrel. They’re uncontrolled, they involve a single patient (more, obviously, if we’re talking about a case series), and they tend to emphasize the bizarre and unexpected. Otherwise, they wouldn’t be published. But who among us doesn’t like a good case report? We’re clinicians, after all, and the fates of patients and the observations of colleagues interest us. Sometimes, they even educate us.
Not all case reports are created equal, however. Let’s reflect on what makes a case report interesting and publishable.
- Novelty – Your patient on warfarin had a GI bleed? I’m sorry to hear it, but don’t waste time writing it up. Only the Journal of Please You Send Us Now the Money will be interested, and I’m pretty sure it’s not indexed. On the other hand, if your patient almost died from a drug interaction involving an over-the-counter antihistamine used by millions of people and THIS was caught on tape … well then, welcome to JAMA.3 (You can read more about this fascinating drug interaction leading to terfenadine toxicity in a post by Andrew Stolbach).
- Dramatic effect – Imagine that you admitted two patients with hyperglycemia last night. Are you going to write them up? Nuh-uh. But let’s say the patients didn’t actually have diabetes, the hyperglycemia was of biblical proportions, it happened right after they took an antibiotic, and it resolved when they stopped?
That might land your report in Annals of Internal Medicine, and it might even prompt an observational study that effectively kills the drug.4,5
- Suggestive of cause-and-effect – This can be tough with case reports, but it happens, as in the example immediately above. But from a toxicology perspective, my favourite example is this case of a 17-year-old with cardiovascular collapse after a large overdose of lamotrigine and bupropion.6 Take a look at the resuscitative sequence in that report. If the lipid didn’t help, what did? (Dopamine? Hilarious.)
Does this mean lipid rescue works all the time? No. That it should be used liberally? No. But this patient eventually went home, despite very nearly going to the morgue, and it’s hard to argue that lipid rescue didn’t play a role in that.
Like the terfenadine and gatifloxacin examples above, sometimes it’s the sharp eye of a clinician that identifies a case or group of them with immediate implications for public health. There are plenty of examples here too, but none more classic than the observation of a lone Aussie obstetrician who in 1961 hypothesized a link between unusual birth defects and a drug used during pregnancy.
The other great thing about case reports is they give junior trainees a chance to publish something. For many, it’s their first exposure to the nebulous world of biomedical publication – drafting, revising, submitting, responding to peer review and rejection, and eventually, if you stick with it, your name on PubMed until the end of time, and a line on your CV. I still remember how proud I felt of my very first publication – a case of native valve endocarditis caused by a weird bacterium.7 Over the years I’ve encouraged trainees to write up unusual cases, partly for their benefit and partly because it doesn’t hurt to be reminded that common drugs cause weird problems or that scurvy is still a thing, or that N-acetylcysteine might have some value in clove oil poisoning, even if it’s hard to be sure about that.8–11
I don’t mean to make case reports sound more valuable than they are. But they aren’t without value, and they’re sometimes fascinating and incredibly instructive. Occasionally, they even make the case for cause-and-effect. They’re digestible, interesting, and occasionally very, very important.
So by all means, worship at the altar of randomization if you like. But don’t dismiss observational studies and case reports just because they aren’t randomized, and think twice before saying “It’s only a case report.” Observational studies and case reports serve different purposes than RCTs and they tell us different things. And that’s precisely why we need them.