CC BY-NC-ND 4.0 · Indian J Radiol Imaging 2024; 34(03): 569-570
DOI: 10.1055/s-0044-1778728
Letter to the Editor

Comment: ChatGPT: Chasing the Storm in Radiology Training and Education

Faculty of Medicine, Humanitas University, Milan, Italy


We read with great interest the article authored by Dr. Sodhi and colleagues, in which they explore the possible applications and effects of ChatGPT in radiology training and education.[1] We wish to put forth several additional considerations that we believe merit further examination and discussion.

First, the authors clearly illustrated the inaccuracies associated with employing ChatGPT in clinical decision support systems by posing the question, “Is There Any Role of Transabdominal Ultrasound in Respiratory Distress in Newborns?” It is imperative to recognize that large language models (LLMs) such as ChatGPT rely heavily on the data used for their training. This reliance can introduce biases or errors if the training data lack diversity or representativeness, which could in turn lead to erroneous interpretations or recommendations and potentially jeopardize patient care. To mitigate these biases, the model must be trained on more comprehensive and diverse datasets. It is also essential to examine the ethical and legal ramifications of relying on an artificial intelligence (AI) model for critical radiological decisions. Radiologists bear a professional responsibility to provide accurate and dependable diagnoses. While ChatGPT can assist with information retrieval, it may be some time before it can truly supplant the knowledge and clinical judgment of radiologists. Striking a balance between the use of AI tools and the preservation of the autonomy and accountability of health care practitioners is of paramount importance. In this regard, ongoing research and collaboration among AI experts, radiologists, and regulatory bodies are needed to establish guidelines and standards for the secure and responsible use of AI in radiology.[2] This includes addressing concerns related to bias, privacy, and transparency, as well as defining the scope and limitations of AI models such as ChatGPT.[3]

Furthermore, the authors address the role of ChatGPT in research and education. We agree with the authors regarding the usefulness of ChatGPT in conducting literature reviews, especially in generating manuscript outlines. However, we find that it is not fully accurate when tasked with composing scientific articles. For example, when we ask ChatGPT to provide the list of references for the texts it generates, it tends to invent fake references, a behavior that some sources attribute to the limitations of the current design of OpenAI's GPT language model.[4] Additionally, the use of AI in writing raises questions about authorship; we believe it would be more appropriate to credit ChatGPT in the acknowledgments section rather than list it as an author. Regarding ChatGPT's role in radiology education, it has demonstrated favorable performance on radiology and nuclear medicine examinations. This is perhaps unsurprising, given that these examinations involve solving clinical problems through keyword understanding and AI-driven reasoning. Importantly, AI possesses a versatility that humans do not, being capable of storing and retrieving vast amounts of data. We therefore believe that ChatGPT can not only assist residents as a learning tool but also help educators create assessments for medical schools and residency programs, a more secure and beneficial application of this technology. One significant potential consequence for radiology education concerns dissertation writing. In Italy, radiology residents are legally required to successfully defend a thesis before attaining the title of Specialist in Radiology. In a future where a substantial portion of such work relies on LLM technologies, the purpose of writing a dissertation will need to be reconsidered.[5] [6]

In conclusion, we acknowledge that advances in AI technology and natural language processing have the potential to enhance the accuracy and reliability of AI models, provided that the challenges we have mentioned are adequately addressed. We commend the authors for their contribution to this important topic and would greatly value a response from them sharing their perspective on the points we have raised.



Publication History

Article published online:
06 February 2024

© 2024. Indian Radiological Association. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India