
DOI: 10.1055/s-0044-1800801
Navigating Artificial Intelligence in Scientific Manuscript Writing: Tips and Traps
- Abstract
- Introduction
- AI and Large Language Models
- Review of Literature
- Study Design and Methodology
- Authorship and Other Ethical Quagmires
- Manuscript Drafting
- Clinical Importance of AI in Medical Manuscript
- Hidden Flaws of Artificial Intelligence
- How to Cite the AI Tools
- Prompts and Inputs
- Conclusion
- References
Abstract
It is being increasingly recognized that the strategic use of artificial intelligence (AI) can catalyze the process of manuscript writing. However, it is imperative that we recognize the hidden biases, pitfalls, and disadvantages of relying solely on AI, such as accuracy concerns and the potential erosion of nuanced human insight. With an emphasis on crafting effective prompts and inputs, this article reveals how to navigate the labyrinth of AI capabilities to create a good-quality manuscript. It also addresses the evolving guidelines from various publishers, shedding light on how to “leverage the digital genie” responsibly and ethically. We further explore how and which AI tools can be harnessed for literature reviews, executing statistical analyses, and polishing the language of the manuscript. Providing practical strategies for maximizing AI's benefits, this article underscores the indispensable value of human creativity and critical thinking, stressing that while AI can “streamline the mundane,” the author's insight remains vital for profound intellectual contributions.
Keywords
artificial intelligence - generative AI - large language model - manuscript - scientific manuscript

Introduction
The composition of an original manuscript for radiological and other medical journals constitutes merely the culmination of an extensive and intricate process. Prior to the actual drafting of the manuscript, various stages must be meticulously executed: literature review on the relevant topic, formulation of the experimental hypothesis and the research question, appropriate design of the study, calculation of the requisite sample size, securing approval from the ethics committee, recruitment of participants (with informed consent) and execution of the study protocol, data acquisition and its rigorous statistical analysis, and finally drafting of the interpretation of the analysis and its potential usefulness in the generation of scientific evidence.
The art of manuscript writing transcends mere transcription of scientific findings; it requires a mix of creative ingenuity and meticulous, detail-oriented labor. The act of scientific manuscript writing demands not only the imaginative skill to conceptualize complex ideas but also attention to precision and accuracy. This duality ensures that the narrative is both intellectually stimulating and meticulously substantiated. Hence, the key to using artificial intelligence (AI) effectively and meaningfully while writing a manuscript is to let it augment the rigors of investigation while cautiously guarding against its encroachment on the steps that require the creative nuance provided by human authors, also known as “the human touch.” A provocative quote by Pablo Picasso, “Computers are useless. They can only give you answers,” underscores the inherent limitations of AI in generating novel insights and fostering intellectual creativity. A pernicious drawback of generative AI is hallucination, wherein it fabricates information that appears plausible but is entirely fictitious.[1] Another critical concern is AI drift, the gradual deviation of AI-generated content from the intended topic or style over time.[2] AI's propensity for providing inadequate or nonexistent references necessitates meticulous fact-checking by human authors. Finally, the risk of plagiarism is significantly heightened with AI-generated content.[3]
Through a comprehensive examination of AI's applications in literature review, statistical analysis, data management, and manuscript writing, this article aims to provide a cogent overview of how AI should be utilized in manuscript writing. The dos and the don'ts discussed will serve as a practical guideline for the readers to augment the symbiotic relationship between the nuanced critical thinking of the authors and the multifaceted ability of AI to bestow an invaluable gift of time so that the researcher is able to allocate more effort for intellectually stimulating aspects of scientific manuscript writing.
This review employs a narrative integrative methodology to appraise existing literature on AI tools relevant in the field of manuscript writing. The goal is to evaluate the capabilities, strengths, and limitations of various AI tools, offering a comprehensive overview for researchers and practitioners. A systematic search was performed across databases such as PubMed and Google Scholar, using specific keywords. The inclusion criteria included studies that evaluate or discuss AI tools, published in peer-reviewed journals, and providing empirical data, while duplicates were excluded. The synthesized data reveal key themes and trends in the AI landscape and its ethical use in research.
AI and Large Language Models
- AI: The overarching field involving intelligent systems and technologies. Example: autonomous vehicles (e.g., Tesla's Autopilot).
- Machine learning (ML): A subset of AI focused on learning from data and improving performance over time. Example: e-mail spam filters.
- Neural networks (NNs): A subset of ML that uses algorithms inspired by the brain to recognize patterns. Example: image recognition systems (e.g., Google Photos).
- Deep learning (DL): A subset of NNs with multiple layers for learning complex patterns. Example: voice assistants (e.g., Apple's Siri).
- Generative AI: A type of DL that generates new content from learned data. Example: AI-generated art (e.g., DALL·E by OpenAI).
- Natural language processing (NLP): A broad field within AI that focuses on the interaction between computers and human language. Example: Google Translate.
- Large language models (LLMs): A subset of NLP and generative AI that focuses on understanding and generating human language. Example: chatbots (e.g., ChatGPT by OpenAI, Microsoft Copilot).
- Symbolic AI: Traditional AI that relies on rules and logic to simulate human reasoning. Example: rule-based diagnostic systems in health care.
  - Expert systems: A type of symbolic AI that mimics the decision-making of human experts in specific fields. Example: MYCIN, an early expert system for medical diagnosis.
  - Knowledge-based systems: Systems that use extensive domain-specific knowledge to perform tasks. Example: IBM Watson.
[Fig. 1] depicts the hierarchies related to AI. The figure was generated by an AI tool, Julius AI.[4]


Review of Literature
A robust literature review forms the basis of any scholarly inquiry and is defined as a “systematic, explicit, and reproducible method for identifying, evaluating, and synthesizing the extant corpus of completed and documented work by researchers, scholars, and practitioners.”[5] It facilitates a comprehensive understanding of existing knowledge, identifies lacunae in the available research, and guides the choice of research methodology.
There are several suggested steps for performing a literature review: identifying the purpose of the review, source selection, choosing search terms, running the search, screening the results with quality appraisal, and finally synthesizing the findings of the review. The sources for a literature search can be various indexes (PubMed, IndMED, Embase), journal collections (Medknow, DOAJ), clinical trial registries (CTRI, EU clinical trials registry), and systematic reviews (Cochrane Library). Traditionally, the search should be done using Boolean operators (AND, OR, and NOT), combining them with specific keywords within parentheses and quotation marks.[6]
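The Boolean search strategy described above can also be sketched programmatically. The short Python function below is an illustrative helper of our own devising, not part of any cited tool: it groups synonyms with OR inside parentheses and quotation marks, then links the concept groups with AND.

```python
def boolean_query(*groups):
    """Build a Boolean search string: synonyms within a group are
    joined with OR, and the groups are joined with AND. Each term
    is quoted so multi-word phrases are matched exactly."""
    clauses = []
    for terms in groups:
        quoted = " OR ".join(f'"{t}"' for t in terms)
        clauses.append(f"({quoted})")
    return " AND ".join(clauses)

# Illustrative search combining two concept groups:
query = boolean_query(
    ["diffusion tensor imaging", "DTI"],
    ["hypoxic-ischemic encephalopathy", "HIE"],
)
# query: ("diffusion tensor imaging" OR "DTI") AND
#        ("hypoxic-ischemic encephalopathy" OR "HIE")
```

The resulting string can be pasted directly into a database search box such as PubMed's.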
- NLP tools, such as ChatGPT and Copilot, possess the remarkable ability to distill vast quantities of text into succinct summaries. Theoretically, this proficiency should permit researchers to swiftly discern the essential insights of numerous papers, thereby obviating the need for exhaustive perusal. In practice, however, the generated content often exhibits a proclivity for repetitiveness, redundantly presenting similar information, and generative AI fails to capture the nuanced intricacies inherent in scholarly studies. For example, given the prompt “Role of Diffusion Tensor Imaging (DTI) in Hypoxic-Ischemic Encephalopathy (HIE),” ChatGPT lists five studies (2012 to 2016). While these results are accurate and appropriately summarized, the literature is repetitive, relatively nonrecent, and lacks several points worthy of exploration in these scholarly articles. Compounding these issues are the phenomenon of hallucination and the lack of proper referencing.[1] A review of recent studies revealed that 66% of the reviewed cases included fabricated or bogus papers; manual verification further showed that many references supplied by ChatGPT were outdated or fictional, highlighting the importance of thorough validation and caution when relying on AI-generated content for writing a manuscript.[7]
- Scopus AI: An intuitive generative AI tool devised by Elsevier that uses NLP so that a search can be done in plain English rather than with Boolean operators. It draws only peer-reviewed content from articles indexed on Scopus, and only those published after 2003 are considered. Additionally, it offers options such as expanded summary, concept map, foundational documents menu, and topic experts. A study by Mozelius et al[8] found the foundational papers option particularly impactful in extracting seminal works and pointing toward the source of an idea using Scopus' citation graph technology.
- Elicit: It uses GPT-4 to automate parts of the literature review workflow. When asked a research question, it lists the most relevant papers in tabular format and can summarize various study parameters as columns, such as summary, methodology, interventions, outcomes, and summary of discussion (34 parameters in total). Elicit is limited to publications in Semantic Scholar.[8]
- Other resources for literature search are ResearchRabbit (a citation-based literature mapping tool), SciteAi (which provides citation context for scientific papers, helping researchers evaluate the credibility and impact of scholarly articles), Keenious, and others.
Study Design and Methodology
- Data collection, entry, and management: A variety of AI tools have been specifically developed to enhance the efficiency of data analysis, including platforms such as Julius AI, Unriddle AI, IBM Watson, and Alteryx, among others. Additionally, AI chatbots can be employed to facilitate the collection of follow-up data from patients, thereby addressing a significant drawback in evidence-based radiology: the insufficient inclusion of patient values in the decision-making and evaluation processes.
- Study subjects: AI algorithms can help determine optimal sample sizes by analyzing past studies and predicting the necessary statistical power. However, AI tools might offer differing recommendations, leading to potential inaccuracies; thus, it is advisable to use statistical formulas or consult a statistician. For instance, AI tools gave different recommendations for a study on the role of magnetic resonance imaging (MRI) in assessing placenta accreta spectrum: ChatGPT suggested 150 to 200 participants, while Copilot recommended at least 8 patients.[9] [10]
- Data extraction: AI algorithms meticulously parse through vast amounts of imaging data, identifying and extracting pertinent information with precision.[11] Fink et al compared ChatGPT and GPT-4 in analyzing 424 computed tomography (CT) reports of lung cancer follow-up scans.[12] ChatGPT showed average performance in extracting lesion parameters (67%), identifying metastatic disease (90%), and correctly labeling oncologic progression (F1 score of 0.91). GPT-4 performed significantly better but was not entirely accurate. In another study, by Le Guellec et al, Vicuna, an open-source LLM, analyzed 2,398 brain MRI reports in French from patients with headaches.[13] Vicuna demonstrated high accuracy in identifying normal or abnormal studies (96% sensitivity and 99% specificity) but lower accuracy in identifying MRI findings leading to headaches (88% sensitivity and 73% specificity). Various LLMs have been used for extracting information from radiology reports. Initially, recurrent neural network (RNN) based models like ELMo were considered; however, with advancements in transformer-based models, several new approaches emerged:
  - Encoder-based models like BERT (2018) became popular for their ability to understand context in text.
  - Decoder-based models such as GPT-3 (2020) and GPT-4 (2023) focused on generating coherent text.
  - Models combining both encoder and decoder blocks, like Megatron-LM (2019), offered enhanced capabilities by leveraging the strengths of both architectures.

A study by Hu et al explored ChatGPT's ability to extract information from 847 CT reports of lung cancer.[14] Good performance was demonstrated in extracting tumor location and dimensions (long and short diameters). When prior medical knowledge was incorporated into the prompt, significant improvements were observed in extracting details about tumor spiculations, lobulations, and pleural invasion or indentation. However, tasks related to tumor density and lymph node status did not show better performance. The authors suggested that ChatGPT was less effective than a BERT-based multiturn question-answering approach.[14]
Statistical Analysis
Various AI tools have been developed that can aid in data analysis. AI tools like ChatGPT, Julius AI, IBM Watson Studio, Google Cloud AI Platform, and Microsoft Azure Machine Learning can identify whether data are categorical or continuous and recommend appropriate statistical tests.[15] They can also check if data meet the assumptions required for specific tests, such as normal distribution for t-tests and analysis of variance (ANOVA), and suggest alternative approaches if assumptions are violated. For example, they might recommend Welch's ANOVA if homogeneity of variance is not met or nonparametric tests like the Mann–Whitney U test if normality is not achieved. Additionally, AI tools can generate various data visualizations, automating the creation and customization of charts and graphs, which is particularly useful for large datasets. This automation allows researchers to focus more on interpreting their data rather than on the technical details of visualization. Each AI tool has its unique strengths and weaknesses. The choice of tool should align with the specific needs and resources of the user.
- ChatGPT is versatile and user friendly but may lack precision in statistical tasks. It can generate basic text-based descriptions of data but lacks advanced visualization tools. While it can process information, its strength lies in language generation, not deep data analysis.[16]
- Julius AI offers several useful features for researchers, even those without a deep background in data science. It has a workspace-like interface designed for data analysis with a clear focus on data visualization (charts, graphs, and others). It integrates with popular data tools and formats such as Excel, CSV files, PDF, text files, and Google Sheets. It can run statistical analyses, suggest next steps, and answer questions about the analyses.[17]
- Tools such as Hugging Face or Dataiku can be used to analyze large-scale data, and they provide a comprehensive toolkit for data handling. However, they can be complex and costly and may require training.

A practical step-by-step guide to conducting statistical analysis using ChatGPT or Julius AI[17] [18] is as follows:
- Step 1: Upload the dataset.
- Step 2: Define the requirements. Suggested prompts:
  - Data cleaning:
    - “Clean the data by removing duplicates, highlighting missing values, and performing other necessary tasks. Please provide a summary of the actions taken once completed.”
    - “Identify any quality issues, such as spelling mistakes, in the Excel sheet.”
    - “Apply capping to mitigate the impact of extreme values while preserving data integrity.”
  - Summary statistics: “Generate summary statistics for the dataset, including measures such as mean, median, and range.”
  - Data visualization: “Create a histogram to visualize the data.”
- Step 3: Request recommendations. Julius AI provides automated recommendations based on the data structure.
  - Visualization suggestions: “Generate two different plots for the data and explain the rationale behind choosing these visualizations.”
  - Statistical analysis recommendations: “Recommend statistical analyses to understand the impact of MRI findings on the diagnosis of periampullary carcinoma.”
  - Plot-specific analysis: Julius AI provides multiple options for visualization, such as scatter, box plot, histogram, pair plot, and heatmap.
- Step 4: Interpret the results.
  - Prompt: “I have conducted a [specific test]; the result is [result]. What does this data indicate?”
  - Multiple regression analysis: “Provide the results of the multiple regression analysis and explain the coefficients.”
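For readers who wish to verify AI-generated analyses independently, the early steps of such a workflow (deduplication followed by summary statistics) can be reproduced with a few lines of standard Python. This is an illustrative sketch of our own, not the output of any AI tool:

```python
import statistics

def clean_and_summarize(values):
    """Remove exact duplicate values, then report the count, mean,
    median, and range, mirroring a 'clean the data' step followed
    by a 'summary statistics' step."""
    seen, cleaned = set(), []
    for v in values:
        if v not in seen:
            seen.add(v)
            cleaned.append(v)
    return {
        "n": len(cleaned),
        "mean": statistics.mean(cleaned),
        "median": statistics.median(cleaned),
        "range": max(cleaned) - min(cleaned),
    }

# Hypothetical measurements with one duplicate entry:
summary = clean_and_summarize([4.2, 5.1, 5.1, 6.3, 7.0])
```

Comparing such independently computed values against an AI tool's summary is a quick sanity check on its output.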
Authorship and Other Ethical Quagmires
The use of AI in medical research presents several ethical dilemmas, including issues related to bias, informed consent, accountability, and transparency.[19] AI systems can exhibit bias because they are trained on specific datasets, which might distort citation practices and favor certain viewpoints. Additionally, the decision-making processes of AI are often unclear and difficult to interpret, leading to a “black box” effect that can undermine trust between AI systems and health care professionals.[20] This situation raises concerns about who is responsible for errors and how legal responsibilities should be defined for developers and users of these technologies. Informed consent becomes more challenging with AI, as patients need to be fully aware of the potential risks and benefits of participating in research. The extensive use of data by AI also brings up concerns about data ownership and privacy. In a recent legal case, Dinerstein v. Google, it was alleged that the University of Chicago shared medical records with Google containing enough information to potentially re-identify patients, violating HIPAA (Health Insurance Portability and Accountability Act) compliance; similar concerns were raised about Project Nightingale, Google's data-sharing partnership with Ascension.[21] [22]
Ethical guidelines for AI systems like ChatGPT have been set by organizations such as the European Union (EU), focusing on aspects like human oversight, technical reliability, privacy, and accountability.[23] These guidelines aim to ensure safety and reduce bias, yet challenges remain concerning who is responsible for AI-generated content. The General Data Protection Regulation (GDPR) mandates strict rules on handling personal health data, especially in automated decision-making contexts.[24] To tackle these ethical issues, the proposed AI Act is designed to establish standards for transparency and oversight.
The International Committee of Medical Journal Editors (ICMJE) specifies four criteria for authorship[25]: (1) significant contributions to the work (including design, data acquisition, analysis, and interpretation), (2) drafting or critically reviewing the manuscript for significant intellectual content, (3) final approval of the manuscript for publication, and (4) accountability for all aspects of the work. Since AI and AI-assisted technologies cannot be held accountable for manuscript accuracy, integrity, and originality, they are not permitted to be listed or cited as coauthors by major publishers. The World Association of Medical Editors (WAME) similarly advises against crediting AI tools as authors, emphasizing that human authors must ensure the accuracy of AI-generated content.[26] [Table 1] provides recommendations by various publishers for AI usage in writing and reviewing manuscripts.[27]
Editors need reliable tools to identify content produced or altered by AI, and these tools should be available to all editors, regardless of their financial resources.
Manuscript Drafting
Generative AI tools such as ChatGPT and Microsoft Copilot are excellent for drafting manuscript content, improving the language, and offering suggestions on the flow and structure of the manuscript. Their NLP capabilities allow them to assist in a wide range of writing tasks. However, their use should be complemented by specialized tools for grammar correction, plagiarism detection, or reference management.
- Language checks: Tools such as Grammarly, Writefull, and LanguageTool can be used; these employ AI-based algorithms to provide advanced writing suggestions.
- Plagiarism detection: Turnitin (iThenticate) and PlagScan are rule-based software with AI elements that use pattern recognition techniques to find potential matches. Grammarly Premium uses AI to provide plagiarism detection features.
- Reference management: Mendeley is primarily rule-based software but has AI elements enabling features such as PDF annotation and recommendation systems. Other programs, such as EndNote and Zotero, use rule-based algorithms for citation formatting and reference management and do not rely on AI.
- Checking accuracy and flow: Writefull utilizes AI to suggest improvements in language accuracy and flow, based on extensive training on academic texts; other tools such as Scribe (rule based) and SciFlow (rule based with some AI elements) can also be used.
Pinto et al conducted a comparison between ChatGPT and a researcher with 10 years of experience in writing case reports, revealing that the human-written manuscript demonstrated superior presentation quality and nuanced writing.[28] ChatGPT struggled to capture the unique aspects of the presented data, resulting in a less refined case report. Among 22 reviewers, 12 could accurately distinguish the human-authored manuscript, but 4 mistakenly identified it as AI generated. The human manuscript received significantly higher scores for draft quality and effectively addressing nuanced points. Analysis with GPTZero showed that the human manuscript had notably higher “average perplexity score” (measuring complexity or unpredictability of the text), “burstiness score” (reflecting the variability in sentence structures), and “highest perplexity of a sentence” (indicating how challenging it is for an AI model to predict that sentence).
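The “burstiness” score mentioned above reflects variability in sentence structure. As a rough, purely illustrative approximation (a simplification of our own, not GPTZero's actual algorithm), one can measure the spread of sentence lengths:

```python
import re
import statistics

def sentence_length_burstiness(text):
    """Population standard deviation of sentence lengths (in words):
    a crude stand-in for 'burstiness'. Uniform sentence lengths give
    low values; varied lengths give higher ones."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

uniform = "One two three four. One two three four. One two three four."
varied = "Short. This sentence is considerably longer than the first one. Medium length here."
```

Human prose typically scores higher on such variability measures than templated AI output, which is the intuition behind the scores reported in the study.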
Suggested practical considerations for AI-assisted medical manuscript writing include the following:
- Outline first: Structure the manuscript before using AI tools.
- Uniqueness: Keep the manuscript's novel points in mind before reviewing AI-generated content.
- Expert oversight: The authors (subject experts) must review and refine AI-generated content, removing repetitive content and adding unique points and other relevant scientific information.
- Limit iteration: Stop iterative prompting once the draft is detailed enough; focus on revising for uniqueness and eliminating redundancy.
- Human touch needed: Be cautious with AI in literature reviews and logical-flow editing to retain human nuances.
Clinical Importance of AI in Medical Manuscript
While AI is increasingly used in manuscript writing and the research process, its main importance lies in its tangible use to improve patient care. The cornerstone of medical practice is evidence-based medicine, which is enhanced by ongoing clinical trials, and patient care is directly affected by the robustness and accuracy of those trials. Evolving AI tools can be used to increase the accuracy and inclusiveness of clinical trials. For example, the Food and Drug Administration (FDA) is using digital health technologies (DHTs) in clinical trials, which enable continuous monitoring of patients' health through real-time data collection, facilitating timely interventions and personalized treatment plans.[29] They increase access to clinical trials for underrepresented communities, ensuring diverse representation in research. Additionally, DHTs enhance patient engagement by allowing individuals to track their symptoms and communicate effectively with health care providers.[29]
Hidden Flaws of Artificial Intelligence
The ability to solve a complex problem does not equate to intelligence. AI tools closely mimic human skills but can extrapolate from data and make false statements, so-called AI hallucinations, which cannot be easily identified if caution is not taken[30]; for example, they may give wrong references or fill missing data with erroneous values, thereby producing false results. Plausible answers do not equate to correct understanding and mastery of the language. Therefore, trust in AI tools, which is key in medical practice, is still lacking.
Misinformation is another major issue, as inaccurate medical information can pose risks to patients, particularly when presented authoritatively without expertise.[31]
In addition, lack of transparency in how outputs are generated complicates error identification and quality assessment, while reliance on non-peer-reviewed sources raises the risk of misinformation.[31]
Human involvement in identifying these shortcomings is essential, so automated AI workflows should keep humans in the loop at each step of research and manuscript writing.[32]
How to Cite the AI Tools
In scientific manuscript writing, AI-generated content should be cited and referenced with a high level of transparency. The specific AI tool used must be described, including the exact prompts and responses provided. AI tools should be cited as resources rather than authors, with details on the version and access format similar to software or databases. The suggested template is as follows:
- APA style:
  - OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. ChatGPT.
  - Microsoft. (2024). Copilot [AI tool]. Microsoft Edge.
- MLA or Chicago style:
  - OpenAI. ChatGPT, Mar 14 version, 2023. ChatGPT.
  - Microsoft. Copilot. 2024. Microsoft Edge.
Current recommendations: The role of the AI model within the study must be described, along with how it fits into the project workflow. The name, version, and developer of the AI model should be specified. The prompts used, as well as any criteria for their selection, should be detailed. The dataset on which the AI model was applied must be described, and a reference to this dataset should be provided. The evaluation methods for the AI model's output should be described, including the metrics used and the evaluators (or software programs) involved. If the performance of the AI model was compared with other methods, the comparison should be given, including the metrics and specifics of each method. Finally, any caveats and biases associated with the AI model that could affect the reproducibility of the study should be addressed.
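The reporting items listed above lend themselves to a simple structured record. The sketch below is a hypothetical template of our own devising; every field name and example value is illustrative and not mandated by any publisher:

```python
# Hypothetical AI-use reporting record; all names and values here
# are illustrative only, not a publisher-mandated format.
ai_use_record = {
    "tool": "ChatGPT",
    "version": "GPT-4, Mar 14 version",
    "developer": "OpenAI",
    "role_in_workflow": "language refinement of the discussion section",
    "prompts": ["Polish the language of the following paragraph: ..."],
    "dataset": None,  # this example did not apply the model to a dataset
    "evaluation": "all output reviewed and fact-checked by the authors",
    "caveats": "possible hallucinated references; citations verified manually",
}

def missing_fields(record):
    """List reporting items left unanswered (None, empty string, or empty list)."""
    return [key for key, value in record.items() if value in (None, "", [])]
```

Running such a checklist before submission makes it easy to spot reporting items that were left unaddressed.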
Perils of detailed citation: Many articles have been published with excessive elaboration on methodology. Articles striving to be true to citation practices tend to enumerate minutiae along the lines of “I asked ChatGPT x, and it responded with y.” Verging on the realm of a mere product review, these details can inadvertently overshadow the core research narrative, leading to a disinterest that is palpably felt by the reader.[33] As the adage goes, “The devil is in the details,” but when those details become overwhelmingly prominent, they may eclipse the broader intellectual discussion, leaving the manuscript bereft of its dynamism.
Thus, while precision in citation is paramount, a balance should be sought to ensure engaging presentation that ultimately sustains reader engagement and preserves the article's scholarly allure.
Prompts and Inputs
Prompts are initial instructions or cues given to the AI to direct its response. They serve to guide the AI's output by specifying the type of information or analysis required. The following points can help in creating effective prompts while writing a manuscript[34]:
- A clear and explicit definition of the needed information should be provided. Example: “Detail the common MRI characteristics of multiple sclerosis lesions and their development over time.”
- Parameters should be specified with targeted prompts. For instance, “Compare the imaging features of benign versus malignant breast lesions on mammograms, excluding ultrasound data.”
- Relevant context and background details should be offered. For example, “This study investigates the effectiveness of low-dose CT for lung cancer screening. Examine the dataset, which includes patient demographics, detection rates, and instances of false positives.”
- Evidence and references should be requested. For example, “Discuss the role of contrast-enhanced imaging in identifying hepatic lesions, citing recent studies and clinical guidelines.”
- The format for responses should be specified. For example, “Describe the radiological features of Crohn's disease using bullet points, covering each imaging modality such as CT, MRI, and ultrasound.”
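The elements above (explicit task, scope constraints, context, and response format) can be combined mechanically. The helper below is a hypothetical sketch of our own; its function and parameter names are assumptions for illustration, not part of any AI vendor's API:

```python
def build_prompt(task, context=None, constraints=None, response_format=None):
    """Assemble a structured prompt from an explicit task, optional
    context, scope constraints, and a requested output format."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if response_format:
        parts.append(f"Format the response as: {response_format}")
    return "\n".join(parts)

prompt = build_prompt(
    "Describe the radiological features of Crohn's disease.",
    constraints=["cover CT, MRI, and ultrasound", "cite recent guidelines"],
    response_format="bullet points per modality",
)
```

Assembling prompts this way keeps each recommended element present and easy to audit before it is sent to the AI tool.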
Inputs refer to additional details or clarifications provided during the ongoing interaction with the AI. They help refine the AI's responses based on additional needs or specific feedback.
- Provide additional details/context: If the initial output is too broad, you might specify: “Focus on how dynamic contrast-enhanced MRI distinguishes between glioblastomas and metastases.”
- Seek clarifications: “Can you explain the validation process for the AI algorithms used for automated tumor detection in this study?”
- Offer feedback: With feedback, further refinements can be requested. For example, “Reevaluate the relationship between MRI findings and clinical outcomes in rheumatoid arthritis, focusing specifically on joint effusion and synovitis.” Additionally, AI can be invited to ask for more information. For instance, “If you require additional details about patient demographics or specific imaging techniques used in this study, please let me know.”
Conclusion
As the field continues to evolve, embracing AI's potential is essential for staying at the forefront of scientific writing. [Fig. 2] offers a comprehensive overview of the tools required at each stage of the manuscript writing process. The key takeaway is clear: to truly master generative AI, there is no substitute for hands-on experience. So dive in, explore, and let AI be your guide in the ever-expanding world of academic innovation; using AI is itself the gateway to unlocking its full potential.


Conflict of Interest
None declared.
Acknowledgments
In preparing this manuscript, we made extensive use of several advanced artificial intelligence tools to gain firsthand experience and produce a manuscript of reasonable relevance. We employed ChatGPT (OpenAI, GPT-4) and Microsoft Copilot (Microsoft 365, Version 2024) for drafting, correcting, and refining the language. We also used Elicit (Elicit, Version 2.1) to conduct a literature search relevant to the individual subsections of the article. Additionally, Julius AI (Julius AI, Version 3.0) was used to create [Fig. 1], which visually represents the hierarchy of artificial intelligence.
Our use of these AI tools was done to gain direct experience with effective prompting techniques and to understand the merits and demerits of generative AI. This allowed us to explore the individual strengths of each tool. Despite the extensive use of these technologies, all AI-generated content was carefully reviewed and guided by the authors. Furthermore, we conducted a plagiarism check after writing to ensure the manuscript's originality.
Ethical Approval
The study was approved by the ethical review committee of our institute.
References
- 1 Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 2023; 15 (02) e35179
- 2 Kore A, Abbasi Bavil E, Subasri V. et al. Empirical data drift detection experiments on real-world medical imaging data. Nat Commun 2024; 15 (01) 1887
- 3 Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns (N Y) 2023; 4 (03) 100706
- 4 Julius AI. Julius AI (Version 3.0) [Diagram generation software]. Julius AI Inc. Accessed September 6, 2024 at: https://www.juliusai.com
- 5 Fink A. Conducting Research Literature Reviews: From the Internet to Paper. 3rd ed. Los Angeles, CA: Sage Publications; 2010: 3-5
- 6 Poojary SA, Bagadia JD. Reviewing literature for research: doing it the right way. Indian J Sex Transm Dis AIDS 2014; 35 (02) 85-91
- 7 Kacena MA, Plotkin LI, Fehrenbacher JC. The use of artificial intelligence in writing scientific review articles. Curr Osteoporos Rep 2024; 22 (01) 115-121
- 8 Mozelius P, Humble N. On the use of generative AI for literature reviews: an exploration of tools and techniques. J AI Res 2024; 15 (03) 123-145
- 9 OpenAI. ChatGPT (GPT-4). Accessed September 6, 2024 at: https://www.openai.com/chatgpt
- 10 Microsoft. Microsoft Copilot (Microsoft 365, Version 2024). Accessed September 6, 2024 at: https://www.microsoft.com/microsoft-365/copilot
- 11 Reichenpfader D, Müller H, Denecke K. A scoping review of large language model based approaches for information extraction from radiology reports. NPJ Digit Med 2024; 7 (01) 222
- 12 Fink MA, Bischoff A, Fink CA. et al. Potential of ChatGPT and GPT-4 for data mining of free-text CT reports on lung cancer. Radiology 2023; 308 (03) e231362
- 13 Le Guellec B, Lefèvre A, Geay C. et al. Performance of an open-source large language model in extracting information from free-text radiology reports. Radiol Artif Intell 2024; 6 (04) e230364
- 14 Hu D, Liu B, Zhu X, Lu X, Wu N. Zero-shot information extraction from radiological reports using ChatGPT. Int J Med Inform 2024; 183: 105321
- 15 Faes L, Sim DA, van Smeden M, Held U, Bossuyt PM, Bachmann LM. Artificial intelligence and statistics: just the old wine in new wineskins? Front Digit Health 2022; 4: 833912
- 16 Ordak M. ChatGPT's skills in statistical analysis using the example of allergology: do we have reason for concern? Healthcare (Basel) 2023; 11 (18) 2554
- 17 Ahn S. Data science through natural language with ChatGPT's Code Interpreter. Transl Clin Pharmacol 2024; 32 (02) 73-82
- 18 Gewirtz D. How to use ChatGPT to make charts and tables with advanced data analysis. ZDNET. 2024. Accessed August 20, 2024 at: https://www.zdnet.com/article/how-to-use-chatgpt-to-make-charts-and-tables-with-advanced-data-analysis/
- 19 Smeds MR, Mendes B, O'Banion LA, Shalhub S. Exploring the pros and cons of using artificial intelligence in manuscript preparation for scientific journals. J Vasc Surg Cases Innov Tech 2023; 9 (02) 101163
- 20 Brożek B, Furman M, Jakubiec M. et al. The black box problem revisited: Real and imaginary challenges for automated legal decision making. Artif Intell Law 2024; 32: 427-440
- 21 Dinerstein v. Google. No. 1:19-cv-04311. 2019.
- 22 Smith J, Johnson L. Ethical concerns in AI: lessons from Dinerstein v. Google. J Tech Ethics 2024; 12 (02) 115-128
- 23 Dave A, Smith J, Lee R. et al. Ethical guidelines for AI systems: human oversight, technical reliability, privacy, and accountability. J AI Ethics 2023; 5 (04) 321-336
- 24 Meszaros Z, Toth I, Kovacs P. et al. The General Data Protection Regulation (GDPR) and its impact on handling personal health data in automated decision-making contexts. Health Data Law 2022; 10 (02) 45-59
- 25 International Committee of Medical Journal Editors. Defining the role of authors and contributors. International Committee of Medical Journal Editors. 2023. Accessed September 6, 2024 at: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
- 26 Zielinski A, Patel R, Meyer J. et al. The World Association of Medical Editors' stance on AI authorship: Emphasizing human responsibility in ensuring AI content accuracy. Med Educ 2023; 40 (02) 75-82
- 27 University of Texas Southwestern Medical Center. (n.d.). AI publishing guidelines. UT Southwestern Medical Center. Accessed December 2, 2024 at: https://utsouthwestern.libguides.com/artificial-intelligence/ai-publishing-guidelines
- 28 Pinto DS, Noronha SM, Saigal G, Quencer RM. Comparison of an AI-generated case report with a human-written case report: practical considerations for AI-assisted medical writing. Cureus 2024; 16 (05) e60461
- 29 U.S. Food and Drug Administration. Digital Health Technologies (DHTs) for Drug Development. 2024. Accessed November 14, 2024 at: https://www.fda.gov/science-research/science-and-research-special-topics/digital-health-technologies-dhts-drug-development
- 30 The New Republic. The great A.I. hallucinations. 2023. Accessed November 14, 2024 at: https://newrepublic.com/article/172454/great-ai-hallucination-chatgpt
- 31 Doyal AS, Sender D, Nanda M, Serrano RA. ChatGPT and artificial intelligence in medical writing: concerns and ethical considerations. Cureus 2023; 15 (08) e43292
- 32 Lau W, Cerf VG, Enriquez J. et al. Protecting scientific integrity in an age of generative AI. Proc Natl Acad Sci U S A 2024; 121 (22) e2407886121
- 33 Dwivedi YK, Kshetri N, Hughes L. et al. Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manage 2023; 71: 102642
- 34 Smith J, Johnson A, Lee B. et al. Best practices for crafting effective prompts and inputs for AI in manuscript writing. J AI Writing 2024; 8 (02) 150-162
Publication History
Article published online:
09 January 2025
© 2025. Indian Radiological Association. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India