
DOI: 10.1055/s-0045-1805062
Beyond the Monotonous Discussion of ChatGPT Use in Academic Writing: Expectations for Sleep Science Researchers
The paper by Cavalcante-Silva et al.[1] is highly significant. They have addressed the merits and demerits of generative artificial intelligence (AI), such as ChatGPT (OpenAI Inc., San Francisco, CA, USA), in academic writing. Since ChatGPT's introduction, I have consistently reviewed papers focused on this topic.
While researchers who emphasize the merits of ChatGPT claim that its use in writing saves time and can lower the language barrier for non-native English authors, those who focus on its demerits point to issues such as ChatGPT's inaccuracies and the risk of unintentional plagiarism.[2] Studies of ChatGPT in writing have tended to focus on specific specialties: gynecologists, for example, have examined ChatGPT-written texts on gynecologic conditions, and psychiatrists have covered psychiatric conditions. However, there are only a few truly specialty-specific issues in ChatGPT writing, so similar contexts and findings recur across different fields. Discussions about detecting ChatGPT-generated manuscripts and the ethical implications of its use further complicate the debate. The experiments conducted so far are monotonous, amounting to “input this and then output that”, and the resulting discussions are repetitive.
Cavalcante-Silva et al. have moved beyond such monotonous discussions. By stepping outside the conventional framework, they made important statements that may be fully appreciated only by those with deep knowledge of the subject. I would like to highlight these points and offer some additional thoughts. First, they insightfully noted that “the ability to ask the right question and give the correct command may be the human skill that will be most valued in the AI era”. Second, they raised concerns that ChatGPT-assisted writing might impair our ability to read, comprehend information, and analyze it critically. Third, they argued that clear rules and objectives must be established for the use of AI, including ChatGPT.
First, as previously reported, ChatGPT can generate readable manuscripts, including letters[3] [4] and case reports.[5] The key is that better inputs lead to better outputs.[5] For instance, some researchers have tried to generate abstracts by inputting only the paper title.[6] However, neither humans nor ChatGPT can produce a “good” abstract from a title alone. Crafting a detailed and effective input for manuscript writing requires human effort, skill, and experience. Completing that task is, in practice, most of the work of completing the manuscript, so this use of ChatGPT can be seen as “human first, ChatGPT as a helper”. Skilled writers who can create complete prompts may prefer to write the text themselves, finding it simpler and more effective. In this context, ChatGPT's role resembles the transition from handwriting to word processing that we experienced 30 years ago. Given these points, I fully agree with Cavalcante-Silva et al.'s statement.
The second point is the most crucial: it concerns the future impact of ChatGPT on human writing, thinking, and cognitive abilities. Historically, the introduction of new technologies has often prompted similar concerns. For example, automobiles were thought to reduce human physical activity, and personal computers (PCs), by replacing handwriting, were expected to diminish our memory for spelling. Despite these worries, both automobiles and PCs have become indispensable without causing notable inconvenience. The question now is whether AI and ChatGPT will have a similar degree of impact. Each of us may consider this issue individually, but I believe that ordinary physicians cannot reach a definitive answer on their own. Therefore, I hope that brain scientists, including those in sleep science, will provide objective data, such as differences in brain activity between self-writing and ChatGPT-assisted writing. For example, as cited by Cavalcante-Silva et al.,[1] nighttime exposure to short-wavelength blue light from smartphone screens has been shown to alter circadian rhythms,[7] potentially affecting long-term mental conditions. Similarly, ChatGPT-assisted writing might affect mental or cognitive conditions over time. Accumulating such data will help us predict, scientifically, the effects of ChatGPT on human cognition.
The third point concerns the regulation of ChatGPT use in academic writing. This issue is closely tied to the answer to the second point. If the “safety” of ChatGPT use is scientifically validated, the current “soft” regulation, which merely requires the declaration of ChatGPT use, may remain reasonable. However, if any harmful effects are suggested, let alone proven, stricter regulations should be considered. We must lean towards caution, because the potential impact of ChatGPT on human writing and thinking abilities is far-reaching.
The latest smartphones are equipped with generative AI, indicating that this technology has already permeated everyone's daily life. Some authors may be tempted to rely heavily on ChatGPT for paper writing. However, let us consider a hypothetical statement: “The use of ChatGPT in academic writing is believed to deteriorate the writing and thinking abilities of future generations and should thus be avoided except as a linguistic checker”. This would serve as a warning analogous to those issued about global warming. I believe that doctors, who are concerned not only with current but also with future health and welfare, will adhere to such regulations once they are established. The detection of ChatGPT-generated manuscripts and the ethics of ChatGPT-assisted writing will naturally influence such regulations; I have not addressed these topics here in order to keep the argument simple.
If such a statement is proven incorrect and ChatGPT use in writing is deemed safe, then the aforementioned regulation should be abolished and new regulations established. We should always err on the side of caution. Otherwise, future generations might ask, “Why did earlier generations not regulate its use more strictly?”, and facing their criticism would be a “nightmare” for us. I hope that sleep science researchers will provide new insights into this issue.
Conflict of Interest
The author has no conflict of interest to declare.
References
- 1 Cavalcante-Silva V, D'Almeida V, Tufik S, Andersen ML. Artificial intelligence, the production of scientific texts, and the implications for sleep science: Exploring emerging paradigms and perspectives. Sleep Sci 2024; 17 (03) e322-e324
- 2 Altmäe S, Sola-Leyva A, Salumets A. Artificial intelligence in scientific writing: a friend or a foe?. Reprod Biomed Online 2023; 47 (01) 3-9
- 3 Matsubara S. Comparing letters written by humans and ChatGPT: A preliminary study. Int J Gynaecol Obstet 2025; 168 (01) 320-325
- 4 Matsubara S. Letters generated by ChatGPT: Author who?. J Obstet Gynaecol Res 2024; 50 (07) 1250-1252
- 5 Matsubara S. Humans-written versus ChatGPT-generated case reports. J Obstet Gynaecol Res 2024; 50 (10) 1995-1999
- 6 Stadler RD, Sudah SY, Moverman MA, et al. Identification of ChatGPT-generated abstracts within shoulder and elbow surgery poses a challenge for reviewers. Arthroscopy 2024; •••: S0749-8063(24)00495-X
- 7 Ostrin LA, Abbott KS, Queener HM. Attenuation of short wavelengths alters sleep and the ipRGC pupil response. Ophthalmic Physiol Opt 2017; 37 (04) 440-450
Publication History
Received: 17 September 2024
Accepted: 21 January 2025
Article published online: 01 April 2025
© 2025. Brazilian Sleep Academy. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0) License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed, or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Thieme Revinter Publicações Ltda.
Rua Rego Freitas, 175, loja 1, República, São Paulo, SP, CEP 01220-010, Brazil