DOI: 10.1055/a-1900-7351
Medical Text Prediction and Suggestion Using Generative Pretrained Transformer Models with Dental Medical Notes
Funding This study was supported by Award Number UL1TR002733 from the National Center for Advancing Translational Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Advancing Translational Sciences or the National Institutes of Health.
Abstract
Background Generative pretrained transformer (GPT) models are among the latest large pretrained natural language processing models; they enable model training with limited datasets and reduce dependency on large datasets, which are scarce and costly to establish and maintain. There is rising interest in exploring the use of GPT models in health care.
Objective We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.
Methods We fine-tune pretrained GPT-2 and GPT-Neo models for next word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model is trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next word prediction accuracy and loss. For comparison, we also fine-tune a non-GPT pretrained neural network model, XLNet (large), for next word prediction. Additionally, to analyze model performance across different types of tokens, we annotate each token in 100 randomly sampled notes by category (names, abbreviations, clinical terms, punctuation, etc.) and compare the performance of each model by token category.
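For illustration, the following is a minimal fine-tuning sketch in Python using the Hugging Face transformers and datasets libraries; the file names, column name, and hyperparameters are illustrative assumptions rather than the configuration used in this study.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical CSV files of de-identified note sections with a "text" column,
# pre-split 80/10/10 into train/validation/test sets.
notes = load_dataset("csv", data_files={"train": "notes_train.csv",
                                        "validation": "notes_val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = notes.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects causal (next word) language modeling targets.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="gpt2-dental-notes",  # illustrative name
                         num_train_epochs=3,
                         per_device_train_batch_size=4,
                         evaluation_strategy="epoch")
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["validation"]).train()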
Results The models present acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performs better in manual evaluation, especially for names, abbreviations, and punctuation. Both GPT models outperform XLNet in terms of accuracy.
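To make the reported metrics concrete, the sketch below computes top-1 next word (token) prediction accuracy and cross-entropy loss for a fine-tuned causal language model; the model directory and sample sentence are hypothetical placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-dental-notes")  # hypothetical path
model = AutoModelForCausalLM.from_pretrained("gpt2-dental-notes").eval()

def evaluate(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels=ids yields the shifted LM loss
    # The logit at position i predicts the token at position i + 1,
    # so compare predictions against the input shifted left by one.
    preds = out.logits[:, :-1, :].argmax(dim=-1)
    accuracy = (preds == ids[:, 1:]).float().mean().item()
    return accuracy, out.loss.item()

print(evaluate("Pt presents for routine prophylaxis; no new caries noted."))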
Conclusion The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presents one of the first implementations of GPT models with medical notes, and we share lessons learned, insights, and suggestions for future implementations.
Keywords
natural language processing - generative pretrained transformer - text prediction - electronic medical records
Ethical Approval
This study was approved by the Institutional Review Board (IRB) of Nationwide Children's Hospital (IRB No: 00000877).
Author Contributions
S. L. L. conceived the idea. All authors contributed to the design of the study. J. S. designed the experiments and conducted the analysis. D. C. supported retrieval of the dataset. S. L. L. and D. C. supervised all parts of the study. J. S. and E. S. drafted the manuscript. All authors contributed to the manuscript and approved its final version.
Data Availability
The datasets used in this study include private and sensitive information (e.g., medical records, personal health information) and cannot be shared publicly. Please contact the corresponding author with any inquiries.
Publication History
Received: 08 March 2022
Accepted: 11 July 2022
Accepted Manuscript online: 14 July 2022
Article published online: 15 November 2022
© 2022. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany