From Algorithms to Articles: AI in Scientific and Medical Writing and Its Applications

Reading time: 5 minutes

Keighley Reisenauer, Ph.D.

Chat Generative Pre‐trained Transformer (ChatGPT; OpenAI, San Francisco, CA) and other artificial intelligence (AI)-based generative software have taken hold within social media and creative spheres, to both the delight and annoyance of the users within those spaces. Initially, outputs were flawed or, in some cases, completely incorrect, leading many to believe AI could never be integrated into research or medical fields. As the technology has matured, however, it has begun to find legitimate uses. The primary domains in which ChatGPT has demonstrated promise include scientific and medical writing, diagnostics and clinical decision making, and community outreach.

Scientific and Medical Writing

Many benefits of AI for scientific and medical writing and research have recently become clear.1 In fact, the International Committee of Medical Journal Editors (ICMJE) and the World Association of Medical Editors (WAME) have issued recommendations for the use of AI in scientific writing and peer review, respectively.6,7

Considering the broad usage of ChatGPT in scientific publications and communications, improvements to workflow and accessibility have been welcomed by the scientific community.2,3 In the planning stages of article writing, ChatGPT can suggest relevant databases to search, keywords and related search terms, or articles to include in literature reviews. In the writing stages, the system can generate outlines with increasing levels of detail, improve language and grammar (especially helpful for those whose native language is not the same as the publishing language4,5), or even suggest ways to incorporate feedback into drafts. Together, these uses reduce the time needed to produce publications, often one of the most time-consuming and difficult components of the research process, and increase the time available for analyzing or generating data, thus advancing the field as a whole.6

Overall, utilizing AI can result in improvements to workflow, publication times, and engagement. 

Diagnostics and Clinical Decision Making

Beyond written academic content, AI systems are used extensively in the medical sciences. Currently, the most common roles for AI in medical settings are clinical decision support and imaging analysis.6,7 These systems are specific to their tasks, such as recommending treatment guidelines for cancer patients based on magnetic resonance imaging radiomics or predicting aging‐related diseases.

Community Outreach

Perhaps the most intriguing application of AI is in medical interactions with the public. Patients are increasingly turning to the internet for information about cancer, with 80% of US adults reportedly using the internet to seek health information.10 ChatGPT is already being used as the newest version of medical advice websites such as WebMD, where patients describe their symptoms in the hope of receiving medical recommendations. Of course, the results are often inaccurate or misaligned with established medical advice. Even so, following a review of 118 articles, one study determined that ChatGPT can act as a “clinical assistant,” providing valuable support in research and scholarly writing.9

Given the importance of accurate information in the field of cancer research and treatment, determining the accuracy of AI (mis)information output by chat platforms such as ChatGPT is critical for clinicians and, more broadly, for health and medical communicators. When evaluated against the National Cancer Institute’s “Common Cancer Myths and Misconceptions” web page, though, the AI software performed admirably, indicating its efficacy not in generating medical advice but in debunking common misconceptions and providing accurate information in their place.11 The information itself was consistent, accurate, written at a readable level, and free of explicitly harmful content. Importantly, the study did not identify a clear area in which the system was susceptible to misinformation, positioning it as an effective tool for healthcare communications.

Considerations and Caveats

In all of the scenarios described above, AI does not fit seamlessly into the medical field; instead, it requires a great deal of training and care. Leading concerns regarding the use of this AI software include:2,3,6

  • Lack of context
  • Inaccurate or biased information
  • Over-reliance / reduction in creative and critical thinking and the ability to make independent judgments about the quality of writing
  • Technical limitations
  • Cost
  • Risk of plagiarism 

One study evaluated these concerns in depth by analyzing the efficacy and accuracy of human, AI, and AI-assisted writing methods.6 AI reduced writing time but introduced significant inaccuracies. Another study evaluated experiential evidence of ChatGPT’s performance as a language editor and writing coach.4 As a language editor, the software performed well, identifying 5-14 edits per paragraph, but it was inconsistent as a writing coach, often altering the meaning of a sentence or choosing an incorrect technical term.

ChatGPT has also been introduced into the evaluation of case studies, a key step in sharing and comparing results from medical studies. Rather than generating a review, the system was asked to evaluate the patterns and results of 15 published, peer-reviewed case studies.9 The case reports received mixed ratings from peer reviewers, with 33.3% of professionals recommending rejection and an overall merit score of 4.9±1.8 out of 10. Ultimately, the system was more accurate in generating text than in analyzing the results of published articles.

AI can enhance patient treatment and research, but it continues to fall short on accuracy, authorship, and bias. ChatGPT’s responses to user-submitted medical questions are inherently limited by its data inputs, often resulting in vague answers.8 Data can be misinterpreted, and newer data may not yet have been integrated to update the system. Indeed, AI systems have not been shown to surpass the quality of human interactions between medical providers and their patients.

Ultimately, using AI tools as a supplement rather than a replacement, and effectively training early-career scientists in appropriate ways to use them, can alleviate these concerns and permit the continued, intelligent integration of these powerful tools.3,6

From academic writing to medical analysis to community healthcare interactions, ChatGPT is quickly finding footholds. Its outputs are steadily improving and its uses are consistently expanding, though both must be judiciously evaluated. As Xue et al. so eloquently put it:

“Despite our initial lack of preparation for the game‐changing ChatGPT technology, the development of AI is unstoppable. The best course of action is to embrace it, use its capabilities to improve our lives, and foster mutually beneficial relationships by evolving it in clinical medicine.”8

Image: Microsoft Designer AI. “Illustration of a robot working at a desk representing AI in scientific and medical writing.” 2024, https://designer.microsoft.com

Edited by Sara Musetti Jenkins

Works discussed:

1. Seckel, E., Stephens, B. Y. & Rodriguez, F. Ten simple rules to leverage large language models for getting grants. PLoS Comput. Biol. 20, e1011863 (2024).

2. Huang, J. & Tan, M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am. J. Cancer Res. 13, 1148–1154 (2023).

3. Chandra, A. & Dasgupta, S. Impact of ChatGPT on Medical Research Article Writing and Publication. Sultan Qaboos Univ. Med. J. 23, 429–432 (2023).

4. Lingard, L. et al. Will ChatGPT’s Free Language Editing Service Level the Playing Field in Science Communication?: Insights from a Collaborative Project with Non-native English Scholars. Perspect. Med. Educ. 12, 565–574 (2023).

5. Amano, T. et al. The manifold costs of being a non-native English speaker in science. PLOS Biol. 21, e3002184 (2023).

6. Kacena, M. A., Plotkin, L. I. & Fehrenbacher, J. C. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr. Osteoporos. Rep. 22, 115–121 (2024).

7. Haver, H. L. et al. Appropriateness of Breast Cancer Prevention and Screening Recommendations Provided by ChatGPT. Radiology 307, e230424 (2023).

8. Xue, V. W., Lei, P. & Cho, W. C. The potential impact of ChatGPT in clinical and translational medicine. Clin. Transl. Med. 13, e1216 (2023).

9. Kadi, G. & Aslaner, M. A. Exploring ChatGPT’s abilities in medical article writing and peer review. Croat. Med. J. 65, 93–100 (2024).

10. Calixte, R., Rivera, A., Oridota, O., Beauchamp, W. & Camacho-Rivera, M. Social and Demographic Patterns of Health-Related Internet Use Among Adults in the United States: A Secondary Data Analysis of the Health Information National Trends Survey. Int. J. Environ. Res. Public. Health 17, 6856 (2020).

11. Johnson, S. B. et al. Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information. JNCI Cancer Spectr. 7, pkad015 (2023).
