Rofo 2024; 196(02): 154-162
DOI: 10.1055/a-2124-1958
Review

Closing the loop for AI-ready radiology

Die Zukunft der Radiologie: Vertikale Integration und KI im Einklang
Moritz Fuchs
1 Informatics, TU Darmstadt, Germany

Camila Gonzalez
1 Informatics, TU Darmstadt, Germany

Yannik Frisch
1 Informatics, TU Darmstadt, Germany

Paul Hahn
1 Informatics, TU Darmstadt, Germany

Philipp Matthies
2 AI, Smart Reporting GmbH, München, Germany

Maximilian Gruening
3 Interorganizational Information Systems, Georg-August-Universität Göttingen, Göttingen, Germany

4 Institute for Diagnostic and Interventional Radiology, Uniklinik Köln, Germany
5 Institute for Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt am Main, Germany

Thomas Dratsch
4 Institute for Diagnostic and Interventional Radiology, Uniklinik Köln, Germany

Moon Kim
6 Institute for Diagnostic and Interventional Radiology and Neuroradiology, Universitätsklinikum Essen, Germany
7 Institute for Artificial Intelligence in Medicine, Universitätsklinikum Essen, Germany

Felix Nensa
6 Institute for Diagnostic and Interventional Radiology and Neuroradiology, Universitätsklinikum Essen, Germany
7 Institute for Artificial Intelligence in Medicine, Universitätsklinikum Essen, Germany

Manuel Trenz
3 Interorganizational Information Systems, Georg-August-Universität Göttingen, Göttingen, Germany

1 Informatics, TU Darmstadt, Germany
Funded by: Bundesministerium für Gesundheit, EVA-KI [ZMVI1-2520DAT03A]

Abstract

Background In recent years, AI has made significant advances in medical diagnosis and prognosis. However, the incorporation of AI into clinical practice remains challenging and underappreciated. We aim to demonstrate a possible vertical integration approach to close the loop for AI-ready radiology.

Method This study highlights the importance of two-way communication for AI-assisted radiology. As a key part of the methodology, it demonstrates how AI systems can be integrated into clinical practice through structured reports and AI visualization, giving more insight into the AI system. By integrating cooperative lifelong learning into the AI system, we ensure its long-term effectiveness while keeping the radiologist in the loop.
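To make this two-way communication concrete, the following is a minimal, hypothetical Python sketch of one pass through the loop: the AI system pre-fills a structured report and links its visualization, the radiologist corrects the report in the viewer, and the confirmed fields flow back as labels for continued training. All field names and values are illustrative assumptions, not the reporting template or interface used in this study.

    # Hypothetical structured-report round trip; every field name and value
    # below is an illustrative assumption, not the template used in this study.

    # 1) The AI system pre-fills a structured report and links its visualization.
    ai_prefilled = {
        "study_uid": "<study-uid>",                    # placeholder identifier
        "finding": "pulmonary embolism",
        "present": True,
        "laterality": "right",
        "ai_confidence": 0.87,
        "visualization": "saliency_overlay_series_3",  # shown next to the images
    }

    # 2) The radiologist reviews the report and the visualization in the viewer
    #    and corrects what is wrong.
    final_report = {**ai_prefilled, "laterality": "bilateral", "reviewed_by": "radiologist"}

    # 3) The corrected report closes the loop: it becomes a new labeled example
    #    for lifelong learning of the AI system.
    training_label = {key: final_report[key] for key in ("present", "laterality")}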

Results We demonstrate the use of lifelong learning for AI systems by incorporating AI visualization and structured reports. We evaluate the Memory Aware Synapses and Rehearsal approaches and find that both work in practice. Furthermore, we see an advantage in lifelong learning algorithms, such as Memory Aware Synapses, that do not require storing or maintaining samples from previous datasets.
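Since the results name Memory Aware Synapses, a brief sketch may help place it: it is a regularization-based lifelong learning method that estimates from unlabeled data how important each model parameter is, and then penalizes changes to important parameters during continued training, so no samples from earlier datasets need to be stored. The PyTorch sketch below illustrates this idea under stated assumptions; the function names, hyperparameters, and training snippet are ours for illustration and are not the implementation evaluated in this study.

    import torch

    def estimate_importance(model, loader):
        """Per-parameter importance: mean |d ||f(x)||_2^2 / d theta| over samples (no labels needed)."""
        omega = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
        model.eval()
        num_batches = 0
        for batch in loader:
            images = batch[0]                       # assumes the loader yields (images, ...) tuples
            model.zero_grad()
            model(images).pow(2).sum().backward()   # squared L2 norm of the model output
            for name, p in model.named_parameters():
                if p.grad is not None:
                    omega[name] += p.grad.abs()
            num_batches += 1
        return {name: value / max(num_batches, 1) for name, value in omega.items()}

    def mas_penalty(model, old_params, omega, lam=1.0):
        """Penalize deviation from the old parameters, weighted by their importance."""
        penalty = 0.0
        for name, p in model.named_parameters():
            penalty = penalty + (omega[name] * (p - old_params[name]) ** 2).sum()
        return lam * penalty

    # Continued training on a new dataset would then look roughly like:
    #   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    #   omega = estimate_importance(model, previous_loader)
    #   for each new batch: loss = task_loss + mas_penalty(model, old_params, omega)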

Conclusion Incorporating AI into the clinical routine of radiology requires a two-way communication approach and seamless integration of the AI system, which we achieve with structured reports and visualization of the insights gained by the model. Closing the loop for radiology leads to successful integration and enables lifelong learning for the AI system, which is crucial for sustainable long-term performance.

Key Points:

  • AI systems can be integrated into the clinical routine using structured reports and AI visualization.

  • Two-way communication between AI and radiologists is necessary to enable AI that keeps the radiologist in the loop.

  • Closing the loop enables lifelong learning, which is crucial for long-term, high-performing AI in radiology.

Zusammenfassung

Hintergrund In den letzten Jahren hat die KI erhebliche Fortschritte bei der medizinischen Diagnose und Prognose erzielt. Jedoch bleibt die Integration von KI in die klinische Praxis eine Herausforderung und wird nicht ausreichend gewürdigt. Wir wollen einen möglichen vertikalen Integrationsansatz aufzeigen, um den Kreislauf für eine KI-kompatible Radiologie zu schließen.

Methode Diese Studie unterstreicht die Bedeutung der wechselseitigen Kommunikation für die KI-gestützte Radiologie. Darüber hinaus wird als wesentlicher Teil der Methodik die Integration des KI-Systems mit strukturierten Berichten und KI-Visualisierungen in die klinische Praxis demonstriert. Durch die Integration von lebenslangem Lernen stellen wir die langfristige Effektivität der KI sicher und halten gleichzeitig den Radiologen auf dem Laufenden.

Ergebnisse Wir demonstrieren den Einsatz von lifelong learning für KI-Systeme durch die Einbeziehung von KI-Visualisierungen und strukturierten Befunden. Wir evaluieren die Methoden Memory Aware Synapses und Rehearsal und zeigen in der Praxis, dass beide funktionieren. Wir sehen vor allem Vorteile von Algorithmen für lifelong learning, wie Memory Aware Synapses, wenn sie keine Muster aus früheren Datensätzen speichern oder verwalten müssen.

Schlussfolgerung Die Einbindung von KI in die klinische Routine von Radiologen erfordert einen zweiseitigen Kommunikationsansatz und eine nahtlose Integration des KI-Systems mit strukturierten Berichten und KI-Visualisierungen, die Erkenntnisse des KI-Models repräsentieren. Die erfolgreiche Integration führt zu einem Kreislaufsystem mit Radiologen, das lebenslanges Lernen für KI-Systeme ermöglicht, was für die langfristige und nachhaltige Leistungsfähigkeit entscheidend ist.

Kernaussagen:

  • Wir demonstrieren die Integration von KI-Systemen in klinische Routinen mit strukturierten Berichten und KI-Visualisierungen.

  • Eine bi-direktionale Kommunikation zwischen KI und Radiologen ist notwendig, um KI im radiologischen Alltag zu ermöglichen.

  • Der vorgestellte Kreislauf ermöglicht lebenslanges Lernen, was für eine langfristige, leistungsstarke KI in der Radiologie entscheidend ist.

How to cite

  • Fuchs M, Gonzalez C, Frisch Y et al. Closing the loop for AI-ready radiology. Fortschr Röntgenstr 2024; 196: 154 – 162



Publication History

Received: 27 February 2023

Accepted: 01 July 2023

Article published online: 15 August 2023

© 2023. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

 
  • 39 FDA. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions – Draft Guidance for Industry and Food and Drug Administration Staff 2023. Im Internet (Stand: 02.05.2023): https://www.fda.gov/media/166704/download