Is it time to stop hedging our research findings?


One of the first lessons I learned as a baby researcher was not to overstate my research findings. This was especially pertinent for me because my first area of research was qualitative, which is exploratory by design and not generalizable to the larger population in the way a large trial might be. So, I learned early to hedge my research findings: “We seemed to see a pattern” or “Our findings suggest a relationship”. These statements were then followed by others like, “But more research is needed”, “Our sample was too small”, “We couldn’t collect data for a long enough time”, and “Our sample wasn’t representative”.

While all these hedges and limitations are justifiable from a research methods perspective, when they are translated to the public or policy makers it is easy to see why people might ask, “So what’s the point of your work?”

In health services research we know that each study is part of a larger whole, where exploratory studies level up to confirmatory studies, which then get fed into systematic reviews, meta-analyses, and now umbrella reviews. But even within these higher-order study designs, researchers are loath to make definitive statements.

On one level I understand this hesitation: getting it wrong could have grave consequences. It could have consequences for the people who use or apply our findings. It could have consequences for our professional careers. Increasingly, it could also have consequences for our families.

Perhaps equally importantly, I wasn’t trained in how to communicate with the public or policy makers. I was trained to write research papers that I assumed would be read by other researchers (honestly, as a junior researcher I didn’t consider who would read my work; the goal was simply to get it published).

Looking back at this assumption now, it all seems kind of ridiculous. My research focuses on how to help community pharmacists change their practices to provide more direct patient care, but it hasn’t been written for them to read. I work with patients to answer research questions that are important to them, but I wrote papers that sit behind paywalls they are unlikely to be able to get past.

In a time when data and evidence are increasingly manipulated or disregarded entirely for political or commercial benefit, I needed to adapt and demonstrate the impact of our work in new ways. Here’s how I’m starting to rethink how I communicate my work. I’m calling it the CBC Method:

1. Clarity – Tell the reader or listener what you found and why it matters to them.

2. Brevity – Keep your comments short. Think no more than 5 baby sentences.

3. Context – Help your reader or listener understand what they can do (or not do) with your findings today.

I’m still refining the nuts and bolts of this approach for myself, but I also wanted to provide an example of how it could work with a tangible project. Here’s a recent article that I published with a team of collaborators about rural pharmacists’ perceptions of financial threats and opportunities.

Pharmacies in rural communities have felt negative impacts from efforts to manage patient medication costs, like lower reimbursements and dispensing fees. Long term, these deficits may lead to more pharmacy closures. While efforts to manage medication costs should continue, rural community pharmacists, as the only health care providers in many rural communities, must be part of this discussion.

Is this perfect? No. But hopefully you get the idea. My goal was to summarize the main take-home message while not overstating my findings. Please let me know if you think I missed the mark 😉.

Also, try the CBC method out for yourself. I would be very curious to see how you do with it!

