Evidence based medicine: why are we even debating it?
This post comes from CPN member Carley King. Carley is a physiotherapist who has developed an interest in evidence-based medicine during her Master's in Clinical Research. Here Carley reports on the recent debate on the value of evidence-based medicine at the CSP Congress.
“Of course I support this motion, why are we even debating it?!” was the opening line of this year's Physiotherapy UK conference debate on the motion “This house believes that in the absence of research evidence, an intervention should not be used.”
Spoiler alert: I’m not sure that evidence-based medicine (EBM), as we currently understand it, is fit for purpose. That’s my bias out in the open! But on hearing that opening line, a small part of me couldn't help wondering whether it was ridiculous to even consider an alternative...a very clever debating ploy!
As the debate progressed, it became clear to me that there were some key issues arising from this motion, and I’ve tried to distil some of them here.
Firstly, we haven’t quite decided what constitutes research evidence. The randomised controlled trial (RCT) is widely seen as the gold standard because randomisation is the best tool we have for minimising bias and confounding…but trials are designed, delivered and interpreted by humans with inherent biases, so how valid is the information that RCTs provide us with? Most trials also exclude people with multiple comorbidities, so can we extrapolate their findings to our increasingly complex patients?
Should physiotherapy as a profession, with all its nuances, really keep the RCT as its gold standard for knowing that something works, or should we treat RCTs as just one way of knowing that something might work? I’m by no means suggesting we discard RCTs...but we’re putting this method on a pedestal, despite knowing that physiotherapy in practice has so many confounding variables that influence how effective a treatment is.
If RCTs are the gold standard in research, that implies there is a silver standard, a bronze, a tin, and so on. So where does physiotherapy research sit in this ranking of methods? Many physiotherapy interventions are difficult to fully randomise and blind, so by that measure we are immediately settling for second-rate research…is that really good enough? Or should we be looking for a way of generating knowledge that actually suits the practice of physiotherapy, rather than one so medicalised? I think it’s fairly widely accepted that in practice we use a combination of biological, humanistic and social approaches, rather than a purely biomedical one, but I’m not sure this is reflected in our approach to research - is this a chasm we need to bridge? And how do we go about that?
The idea of RCTs as the gold standard prompted a point from the audience about using other disciplines to help inform how we generate research evidence. Do we make enough effort to look to other disciplines (design, education, engineering, law, poetry or sociology, for example) for inspiration on how we might generate knowledge that suits our purposes?
The patient should be at the heart of everything we do in physiotherapy. So if we don’t use RCTs, how do we know that our treatment isn’t in fact causing harm? Should an intervention that isn’t obviously harmful, but hasn’t been proven effective, nevertheless be considered harmful? Is it harmful to provide an ineffective treatment and so delay the appropriate treatment that has been shown to work? And is effectiveness the only metric we want to use to determine our approach to patient care?
As physiotherapists, we are constantly fighting to be seen as a reputable profession, and part of that is using interventions that are based on evidence. So if we say that interventions can be used without any research evidence, where does that leave our professional standing with other professions, such as doctors, and indeed with the general public? Are we at risk of losing the credibility that has taken so long to establish?
Or are we aiming for the wrong end point? Evidence-based medicine originated in medicine, not physiotherapy per se; we’ve simply jumped on the bandwagon (not that I’m criticising this, I can certainly see why we have). So are we trying to force a square peg into a round hole? Do we need more of a conversation about what actually constitutes research evidence, and how it is operationalised in practice? Do we need to broaden our minds as to what counts as research evidence, without a hierarchy that ranks one type of study above another and limits the methodology of the research being conducted (and published)?
When the final vote was cast, it was clear that the majority of the room rejected the idea that, in the absence of research evidence, an intervention should not be used. That is not to say that evidence-based medicine itself was rejected, but rather that we need a conversation around some of the questions raised here and in other forums. The number of questions posed in this blog alone suggests there is much more thinking to be done, and I would love to hear others’ thoughts...let’s keep this debate going, and continue challenging the profession to think ‘otherwise.’