You can call me Hal

Automation: it's not new

As a child, I was never really the biggest fan of horror movies. Over time, however, I noticed an interesting phenomenon: the same movie watched during daylight hours did not elicit a fraction of the fear it did when watched under the cover of darkness. Of late, not a day seems to go by without some news article about AI; a term so ubiquitous that I almost don't feel the need to define it at first use.

Unfortunately, much of what is presented about AI seems to sit in one of two camps at the polar extremes: either a tech bro extolling its virtues, ushering in a new utopian existence where the benevolent hand of AI assists us in every aspect of our lives and improves our very existence; or some sort of dystopian future where we are moments away from The Singularity – the moment at which humans are no longer in control of AI – and we tug our forelocks and submit before our new overlords.

Returning to my childhood observation about horror movies, I wonder whether the same darkness that allows our mind's eye to fill in the gaps with, say, a chainsaw-wielding assassin also allows the unknown of a black box, if you will, to give rise to nightmares.

In only the last few weeks I have stumbled across articles in which a certain Elon Musk predicts that AI smarter than anyone on Earth could exist as early as 2025 (Guardian 2024a), and a thinktank warns of AI wiping out up to 8 million jobs in the UK (Guardian 2024b).

As regulatory medical writers, where does that leave us? For a style of writing so reliant on data, structure, and the absence of florid prose, it is easy to see us as being among the first against the wall in this brave new world. So, where do we go from here? It is probably important first to take a step back and admit that while “disruptors” and “move fast and break things” are modern buzzwords, they are, ultimately, not new phenomena: from the mechanisation of farming – we no longer require an army of farm hands to harvest a field of wheat – to the Industrial Revolution, change is as certain as death and taxes. As the Luddites showed during the Industrial Revolution, fear of change and of the unknown is a powerful protectionist motivator, and, seemingly, quite a natural human response.

Nor is it new for medical writers

As medical writers, it is important to look in the rearview mirror and note that quite large changes have already been lived through in our profession, as seen in the digitalisation from paper submissions to concatenated PDFs submitted via a portal. To the medical writer this can feel like quite a comforting change – more evolution than revolution – because, despite the change, everything feels the same: the work remains document-centric. With the introduction of automation, and certainly with the promise of generative AI, there is a shift from document centricity to data centricity. For rules-based automation solutions I can still see a future for the medical writer, albeit a different one, with the need to upskill and learn new tools and processes. It is with generative AI that I can envisage a medical writer-less future: one where the concept of documents as we know them is no more, and where what is required is to “arrange” data in such a way that, once it is submitted to a Health Agency, they can ask a “Siri” of sorts suitable prompts and receive a suitably sourced, comprehensive answer.

The challenges for medical writers to remain relevant in such a data-centric landscape are not insignificant. No matter what, we will all have choices to make on the direction(s) to take to remain as future proof as possible. For rules-based automation tools and solutions I can see clear advantages in improving accuracy (correcting the little errors that, when pointed out to us, we’d swear blind we couldn’t possibly have made) and removing certain mundane tasks from the life of the medical writer (eg, changing text from future to past tense).

These benefits would be similarly evident with generative AI, where a medical writer could easily find themselves confronted with their own impotence in the face of the speed and accuracy with which AI can do the same task. Having used some of these tools, I can attest to the steep learning curve and the change in mentality that is required. At times, I can imagine feeling like an experienced horse rider at the advent of the motor car, wondering what all the fuss was about: they could still travel further, quicker, and more efficiently than they could by car. But as cars became more reliable and the infrastructure of better roads improved, the choice of car over horse became a no-brainer.

In the long run, as AI becomes more accepted, embedded, and usable, will it make us (in the family-of-man kind of way) lazy? For all the talk of ensuring that a human remains in the loop and that AI frees the human to do the strategic thinking, I suspect that the more we get used to using AI, the more likely it is that we will accept its responses without question. I think back to school where, never the biggest fan of maths lessons, I was more than happy to outsource, so to speak, to the calculator. While the calculator is an essential tool for doing certain complex calculations quickly, I would admit that my mental arithmetic is not as good as it could be, and I have, on occasion, used a calculator to get the answer to things that were certainly not at the outer limits of human intelligence. To that end, for medical writing, or any profession for that matter, having an idea of the ballpark of potential answers is something we should never lose sight of.

References

Guardian 2024a. Elon Musk predicts superhuman AI will be smarter than people next year. Available at: https://www.theguardian.com/technology/2024/apr/09/elon-musk-predicts-superhuman-ai-will-be-smarter-than-people-next-year. Viewed on 18 Apr 2024.

Guardian 2024b. AI ‘apocalypse’ could take away almost 8m jobs in UK, says report. Available at: https://www.theguardian.com/technology/2024/mar/27/ai-apocalypse-could-take-away-almost-8m-jobs-in-uk-says-report. Viewed on 18 Apr 2024.