DOI: 10.1055/s-0041-1740565
Human Versus Machine: How Do We Know Who Is Winning? ROC Analysis for Comparing Human and Machine Performance under Varying Cost-Prevalence Assumptions
Abstract
Background Receiver operating characteristic (ROC) analysis is commonly used for comparing models and humans; however, the exact analytical techniques vary and some are flawed.
Objectives The aim of this study is to identify common flaws in ROC analyses that compare human and model performance, and to address them.
Methods We review current use and identify common errors. We also review the ROC analysis literature for more appropriate techniques.
Results We identify concerns in three techniques: (1) using mean human sensitivity and specificity; (2) assuming humans can be approximated by ROCs; and (3) matching sensitivity and specificity. We identify a technique from Provost et al using dominance tables and cost-prevalence gradients that can be adapted to address these concerns.
Conclusion Dominance tables and cost-prevalence gradients provide far greater detail when comparing performances of models and humans, and address common failings in other approaches. This should be the standard method for such analyses moving forward.
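The dominance-table approach of Provost et al can be sketched in code. The sketch below is our illustration, not the paper's implementation: it builds the ROC convex hull over a set of (false positive rate, true positive rate) operating points, then reports, for each hull vertex, the interval of iso-performance-line slopes m = (c_FP × p_neg) / (c_FN × p_pos) over which that operating point minimizes expected cost. All function names and the example operating points are ours.

```python
def roc_convex_hull(points):
    """Upper convex hull of ROC operating points, including (0, 0) and (1, 1).

    Points below the hull are dominated: some other point (or mixture of
    points) is at least as good for every cost-prevalence assumption.
    """
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop hull[-1] if it lies on or below the line from hull[-2] to p.
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull


def dominance_table(points):
    """For each hull vertex, the slope interval [lo, hi] of cost-prevalence
    gradients m = (c_FP * p_neg) / (c_FN * p_pos) for which it is optimal."""
    hull = roc_convex_hull(points)
    # Slope of each hull segment, decreasing from left to right.
    segs = [
        (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")
        for (x1, y1), (x2, y2) in zip(hull, hull[1:])
    ]
    table = []
    for i, pt in enumerate(hull):
        lo = segs[i] if i < len(segs) else 0.0          # slope of segment to the right
        hi = segs[i - 1] if i > 0 else float("inf")     # slope of segment to the left
        table.append((pt, lo, hi))
    return table
```

For example, with operating points (0.2, 0.7), (0.3, 0.6), and (0.5, 0.9), the hull drops the dominated point (0.3, 0.6), and the table shows that (0.2, 0.7) is the cost-minimizing choice for gradients between roughly 0.67 and 3.5 — exactly the kind of condition-dependent comparison a single summary statistic hides.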
Publication History
Received: 04 August 2021
Accepted: 19 October 2021
Article published online: 31 December 2021
© 2021. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
References
- 1 Andersson S, Heijl A, Bizios D, Bengtsson B. Comparison of clinicians and an artificial neural network regarding accuracy and certainty in performance of visual field assessment for the diagnosis of glaucoma. Acta Ophthalmol 2013; 91 (05) 413-417
- 2 Steiner DF, MacDonald R, Liu Y. et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol 2018; 42 (12) 1636-1646
- 3 Liu Y, Gadepalli K, Norouzi M. et al. Detecting cancer metastases on gigapixel pathology images. arXiv 2017; 1-13
- 4 Mueller M, Almeida JS, Stanislaus R, Wagner CL. Can machine learning methods predict extubation outcome in premature infants as well as clinicians? J Neonatal Biol 2013; 2: 1000118
- 5 Haenssle HA, Fink C, Schneiderbauer R. et al; Reader study level-I and level-II Groups. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol 2018; 29 (08) 1836-1842
- 6 Esteva A, Kuprel B, Novoa RA. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542 (7639): 115-118
- 7 Gulshan V, Peng L, Coram M. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316 (22) 2402-2410
- 8 Hinton G. Deep learning—a technology with the potential to transform health care. JAMA 2018; 320 (11) 1101-1102
- 9 Stead WW. Clinical implications and challenges of artificial intelligence and deep learning. JAMA 2018; 320 (11) 1107-1108
- 10 Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018; 2 (10) 719-731
- 11 de Berg M, van Kreveld M, Overmars M, Schwarzkopf O. Computational geometry. Berlin, Heidelberg: Springer Berlin Heidelberg; 1997: 1-17
- 12 Provost F, Fawcett T, Kohavi R. The Case Against Accuracy Estimation for Comparing Induction Algorithms. Paper presented at: Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), Madison, WI, 1998
- 13 Fawcett T. An introduction to ROC analysis. Pattern Recognit Lett 2006; 27: 861-874