AI for impact? (and an update on how I am using this tool now)
Since my last post about hedging research findings, I’ve been thinking about how to improve the little CBC (clarity, brevity, context) Model example I shared. So of course, I dumped it into ChatGPT for a revision session. Here’s the prompt I gave and the output:
"Pharmacies in rural communities have felt negative impacts from efforts to manage patient medication costs like lower reimbursements and dispensing fees. Long-term these deficits may lead to more pharmacy closures. While efforts to manage medication costs should continue, rural community pharmacists, as the only health care providers in many rural communities, must be part of this discussion." Can you make suggestions for shortening and increasing the impact of the paragraph above?
Absolutely! Here’s a revised version of the paragraph that shortens it while enhancing its impact:
Revised Version:
"Efforts to manage medication costs, such as lower reimbursements and dispensing fees, have negatively affected pharmacies in rural communities. These financial strains could lead to more closures, jeopardizing access to essential health care services. It's crucial to include rural pharmacists in discussions about cost management, as they are often the only health care providers in these areas."
We went from 71 to 57 words. The language is also far more active (I struggle greatly with passive language): "…negatively affected pharmacies…" vs. "Pharmacies in rural communities have felt negative impacts…". Definite improvements. But now that I see this new version, I think it can be even better, so I played some more.
"Efforts to manage medication costs, such as lower reimbursements and dispensing fees, have negatively affected RURAL pharmacies. These financial strains could lead to more closures, further jeopardizing access to essential health care services. Rural pharmacists must be included in discussions about medication cost management."
Now we’re down to 44 words, and the message feels cleaner and clearer. More importantly, I also feel like it’s me talking in this latest version. It’s not that the ChatGPT version didn’t use my words; it just organized them in a way that I wouldn’t.
I was once told by a faculty member that I wrote the way I spoke. At the time I took this as a compliment, thinking that it meant my writing was clear and understandable (in retrospect, I don’t think this was their intention). Academic writing, depending on your discipline, can be seemingly opaque. I have since switched up my disciplinary focus but have continued to write this way, because my points get lost with any other approach.
I was talking with some other faculty last week about students using GenAI for assignments. Interestingly, someone said that they can tell when students in their speech class had been using it, because all their assignments sound the same. They said, “[The students] sound like robots, no one talks like that.”
This makes sense, really, since we’re all mostly using the same few GenAI platforms. But this idea of us all sounding the same got me thinking. As we see a flattening of the voices we hear in research because we’re using GenAI to make our lives easier and more productive, this amalgamated and banal voice could come to be so common that we expect it. It could become the rule and not the exception. It could become a marker of rigour, while human speech and ideas, with their inherent messiness and contradictions, could become too pedestrian for academic discourse (like writing the way you speak 😜).
Of course I don’t think this is going to happen tomorrow, or even in my lifetime for that matter, but it’s a trajectory that isn’t impossible. It makes me sad to think that’s where we could end up. Human beings have always been the proverbial fly in the ointment of the human experience. We never behave the way we should and can’t help but get in our own way. But that’s where the magic lies. AI can’t match that. So, in my ongoing quest to figure out how I am going to use GenAI to help me do my work, you’re going to have to continue to put up with me writing how I speak 🤷♀️. Until next time!
(Words: 686)