DOI: 10.1055/a-2271-0799
The radiologist as a physician – artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians – a narrative review
- Background
- Development of radiology
- Artificial intelligence
- Black box problems
- Explainable AI
- Explainable AI: Sources of error, risks, and subsequent adaptation
- Optimization of AI and minimization of potential weaknesses
- Radiology in communication
- Use of time gained as a result of AI
- Summary
- References
Abstract
Background Large volumes of data increasing over time lead to a shortage of radiologistsʼ time. The use of systems based on artificial intelligence (AI) offers opportunities to relieve the burden on radiologists. The AI systems are usually optimized for a radiological area. Radiologists must understand the basic features of its technical function in order to be able to assess the weaknesses and possible errors of the system and use the strengths of the system. This “explainability” creates trust in an AI system and shows its limits.
Method Based on an expanded Medline search for the key words “radiology, artificial intelligence, referring physician interaction, patient interaction, job satisfaction, communication of findings, expectations”, subjective additional relevant articles were considered for this narrative review.
Results The use of AI is well advanced, especially in radiology. The programmer should provide the radiologist with clear explanations as to how the system works. All systems on the market have strengths and weaknesses. Some of the optimizations are unintentionally specific, as they are often adapted too precisely to a certain environment that often does not exist in practice – this is known as “overfitting”. It should also be noted that there are specific weak points in the systems, so-called “adversarial examples”, which lead to fatal misdiagnoses by the AI even though these cannot be visually distinguished from an unremarkable finding by the radiologist. The user must know which diseases the system is trained for, which organ systems are recognized and taken into account by the AI, and, accordingly, which are not properly assessed. This means that the user can and must critically review the results and adjust the findings if necessary. Correctly applied AI can result in a time savings for the radiologist. If he knows how the system works, he only has to spend a short amount of time checking the results. The time saved can be used for communication with patients and referring physicians and thus contribute to higher job satisfaction.
Conclusion Radiology is a constantly evolving specialty with enormous responsibility, as radiologists often make the diagnosis to be treated. AI-supported systems should be used consistently to provide relief and support. Radiologists need to know the strengths, weaknesses, and areas of application of these AI systems in order to save time. The time gained can be used for communication with patients and referring physicians.
Key Points
- Explainable AI systems help to improve workflow and save time.
- The physician must critically review AI results, taking the limitations of the AI into account.
- The AI system will only provide useful results if it has been adapted to the data type and data origin.
- A radiologist who communicates with patients and takes an interest in them is important for the visibility of the discipline.
Citation Format
- Stueckle CA, Haage P. The radiologist as a physician – artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians – a narrative review. Fortschr Röntgenstr 2024; 196: 1115–1123
Keywords
diagnostic radiology - patient interaction - deep learning - artificial intelligence - doctor-patient relationship
Background
Radiology is an interface discipline. The main areas of responsibility include the analysis of images and imaging-guided treatment of certain diseases.
As a technology-based discipline, radiology is continuously evolving. As a result, the number of available images is increasing while scan times are decreasing. Many findings are reported in compliance with defined standards; depending on the type of disease, these are based on scans acquired in defined planes and locations. Radiological diagnosis and intervention are thus increasingly reproducible and less susceptible to error. At the same time, the continuously increasing number of images and the growing demand for interpretation mean a greater workload for radiologists.
Development of radiology
As a technology-based discipline, radiology has seen many developments since the discovery of X-rays by Wilhelm Conrad Röntgen in 1895. In particular, the introduction of computed tomography (CT) and magnetic resonance imaging (MRI) were major milestones that changed radiology. The first CT scanners at the start of the 1970s provided individual images with slice thicknesses of more than 4 cm. Since then, rotation times have become shorter, slice thicknesses smaller, and scanners faster. At the start of the CT era, gaps were left in the scan volume in order to ensure sufficient cooling of CT scanners and to save time [1]. With the introduction of spiral CT and subsequently multidetector spiral CT and volume CT, thin-slice 3D datasets have been increasingly acquired ([Fig. 1]). Instead of scan gaps, overlapping slices are acquired today, and numerous thin-slice reconstructions are therefore the standard in CT. They can be supplemented with specific reconstruction algorithms and thus be made available in the desired layout for viewing and interpretation. At the same time, the number of patients examined per unit of time is increasing. Consequently, both the number of patients to be examined and the number of images to be viewed and interpreted are continuously increasing ([Fig. 1]). This has resulted in a significant increase in the workload for radiologists. Moreover, examinations have increased not only in number but also in complexity: in addition to morphological images, functional and dynamic evaluations and diffusion maps are increasingly created. The amount of data that radiologists must process promptly, precisely, and in a targeted manner is thus increasing further. As a result of this growing workload, greater dissatisfaction, more cases of burnout, and early retirement have been observed among radiologists [2]. Modern radiology is therefore currently confronted with four major challenges: large amounts of image material to be interpreted (big data), high demand for reporting and communication, a shortage of personnel, and high patient volumes.
Artificial intelligence
The greater workload has resulted in alternative approaches regarding workflow and reporting. To allow more time for communication with patients and referring physicians, AI-supported expert systems have increasingly become a topic of interest.
Because its work is image-based, radiology offers ideal conditions for the use of AI in evaluation [1] [2] [3] [4]. Artificial intelligence has been incorporated into radiology in stages: in the form of the first expert systems in the 1980s, in the form of probabilistic systems in the 1990s, and as increasingly sophisticated deep learning models since the end of the 2000s [5]. The number of publications addressing AI-based reporting has increased accordingly [4].
The AI systems used in radiology, and in medicine in general, are based on two fundamental approaches: the system either learns from examples provided by a human being or extracts previously unknown information from the data itself [6] [7].
In radiology, artificial intelligence is primarily used in MRI (37 % of AI systems use MR datasets) followed by CT imaging (29 %), with the most common task being segmentation (39 %) [4]. Research in neuroradiology and chest radiology is currently a main topic of interest [3] [4].
Particularly in areas like oncological imaging, where comparison with previous images is essential and scan results typically have to be assigned to a specific evaluation system, it is helpful when the preliminary work is performed by a corresponding system [8]. AI has therefore been implemented for many applications in the diagnosis and segmentation of pulmonary nodules, and corresponding research is ongoing [4] [8] [9].
Black box problems
Successful use of AI has also been increasingly reported in other areas. A recent review on assessing the depth of myometrial invasion shows that various AI systems based on different AI techniques can help to evaluate this parameter. It also shows limitations, both of the AI systems and of the evaluating radiologists. In particular, it is often unclear how an AI system reaches its results [10].
For this reason, “explainable AI” is often promoted and demanded. This means that the AI system and its results should be explainable.
The term explainable AI refers to a series of processes and methods that allow human users to understand and trust results and output generated by machine learning algorithms. Explainable AI is used to describe an AI model, its expected effects, and potential inaccuracies. It helps to characterize model accuracy, correctness, transparency, and results during the AI-supported decision-making process. Radiologists who regularly use AI applications to optimize their workflow must understand how to achieve results that will save time. Explainable AI is extremely important for creating trust among physicians and patients when AI models are used to help make medical decisions.
The more advanced the AI system, the more difficult it is for human beings to understand how the algorithm arrived at a particular result. The entire calculation process becomes a black box that can no longer be interpreted. These black box models are created directly from the data. Not even the software engineers and data scientists who developed the algorithm can understand or explain exactly what is happening or how the AI algorithm arrived at a certain result.
Explainable AI
There are many advantages to the user being able to at least partially understand how an AI-supported system arrived at a certain result.
Image-processing AI systems often use data augmentation. This means that the image data are modified in many ways before being analyzed by the neural network. Classic augmentation steps are geometric transformations, scaling of the region of interest, intentional addition of Gaussian noise, contrast enhancement, power-law (gamma) transformations of the image intensities, Gaussian blurring, and mathematical pruning of the dataset. These operations take place before the actual analysis of the dataset in the neural network. These mathematical models, which are adapted in a complex manner to the relevant task, are often not understandable for the user [11].
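To make these steps more concrete, the following minimal Python sketch applies a few of the augmentation operations named above (rotation, Gaussian noise, contrast scaling, Gaussian blurring) to a single image array. It is an illustrative example with hypothetical parameter values, not the preprocessing pipeline of any specific AI product.

```python
# Minimal sketch of typical image augmentation steps, assuming a 2D grayscale
# image stored as a NumPy array with intensities in [0, 1]. Function names and
# parameter values are illustrative, not taken from any specific AI system.
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Geometric change: small random rotation around the image center
    angle = rng.uniform(-10, 10)
    out = ndimage.rotate(image, angle, reshape=False, mode="nearest")

    # Intentional addition of Gaussian noise
    out = out + rng.normal(0.0, 0.02, size=out.shape)

    # Simple contrast enhancement (linear stretch around the mean)
    out = (out - out.mean()) * 1.2 + out.mean()

    # Gaussian blurring
    out = ndimage.gaussian_filter(out, sigma=0.8)

    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
augmented = augment(np.random.rand(256, 256), rng)  # stand-in for a CT/MRI slice
```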
Explainability can help developers to ensure that the system functions as expected. It can be necessary to meet regulatory standards or it can be important to allow those affected by a decision to refute or change the result [12].
Explainable AI should follow basic principles to ensure trust between AI and human beings. The US Department of Commerce created an overview:
- Explanation: The AI system provides or contains accompanying documents or reasons for results and/or processes.
- Meaningful: The AI system provides explanations that are understandable for the intended consumer.
- Explanation accuracy: The explanation is adapted specifically to the displayed result. The explanation correctly reflects the reason for generating the output and/or accurately reflects the system’s process.
- Knowledge limits: A system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output [13].
Explainable AI: Sources of error, risks, and subsequent adaptation
A review from 2022 examined which explainability methods were used in radiology studies applying AI. It came to the conclusion that explainability was achieved in 49 % of studies by providing cases/examples. No explainability was offered in 28 % of studies, visualizations and saliency maps were offered as explanations in 18 %, and the results were discussed retrospectively in 5 % [4]. Explanation by example means that labeled image datasets, coded according to the disease, are reviewed on the basis of coded sample datasets or test image datasets. As a critical point, the review notes that some software uses image datasets from only one hospital and that only very small datasets were used in some cases. Using visualization tools, the AI software can show the developer which features in the image or dataset were decisive for the primary decision. Saliency maps highlight the corresponding foci that the software used for orientation [4].
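As an illustration of such visualization tools, the following sketch computes a simple gradient-based saliency map with PyTorch. The tiny network and random input are stand-ins; a real system would use its own trained model and an actual image.

```python
# Minimal sketch of a gradient-based saliency map, assuming a trained PyTorch
# classifier that takes a (1, 1, H, W) tensor. The network here is a tiny,
# untrained stand-in used only to show the mechanics.
import torch
import torch.nn as nn

model = nn.Sequential(  # hypothetical stand-in for a trained classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

x = torch.rand(1, 1, 128, 128, requires_grad=True)  # stand-in for an image
scores = model(x)
cls = scores.argmax(dim=1).item()

# Backpropagate the score of the predicted class to the input pixels
scores[0, cls].backward()

# The absolute input gradient highlights pixels that most influenced the decision
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # (128, 128) heat map, e.g., for overlay on the image
```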
Primary system testing and the corresponding adaptation of an AI system during the software training phase typically ensure that the software functions reasonably only within a narrow field of application, i. e., within the framework of the learned parameters [14] [15].
To achieve verifiable and highly reliable AI results, labeled datasets are often used to train AI systems. Only image data that have been checked by a human expert and yield a clear result should be used, and the AI is supplied with as much of such data as possible. Testing is then performed – also with a reviewed dataset – and the expected system results are validated with high probability. However, training that is too specific to a particular use case can result in overfitting of the neural network and thus in an overoptimistic expectation of the model. Overfitting occurs when an AI system learns to make predictions based on image features that are specific to the training dataset and do not generalize to new data.
The model can then fail on datasets from other hospitals or practices. One example of such overfitting is an uneven distribution of disease across scanners. If, for organizational reasons, the majority of severely ill patients are examined on one scanner, e. g., on “CT1”, while other CT units are not used for this special group of patients, the AI erroneously learns that an examination performed on this specific scanner (CT1) by itself increases the probability of serious disease. If the same software is used under other conditions, the factor included in the assessment (CT1) is missing, yielding completely different results [16].
One method to avoid this overfitting is cross-validation: a resampling procedure in which a dataset is repeatedly partitioned into independent cohorts for training and testing. The separation of training and test datasets ensures that performance measurements are not distorted by direct overfitting of the model to the data. During cross-validation, the dataset is divided multiple times, the model is trained and evaluated with one subgroup in each case, and the prediction error is averaged over the test runs. Cross-validation allows estimation of the generalization performance of an algorithm, selection of the most suitable algorithm from several candidates, and adjustment of model hyperparameters, i. e., fine-tuning of the settings used to configure and train the model [7] [10].
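A minimal sketch of this procedure, assuming tabular image features (e.g., radiomics features) and binary labels, could look as follows with scikit-learn; the data are synthetic placeholders.

```python
# Minimal sketch of k-fold cross-validation with scikit-learn; the feature
# matrix X and labels y are synthetic stand-ins for extracted image features.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))       # 200 cases, 30 extracted features
y = rng.integers(0, 2, size=200)     # binary labels (e.g., benign/malignant)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# Averaging over the folds estimates generalization performance rather than
# performance on the data the model was fitted to
print(scores.mean(), scores.std())
```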
A further method for the targeted and effective selection of important radiological features within an AI algorithm is the “Least Absolute Shrinkage and Selection Operator” (LASSO), which modifies standard regression methods by restricting the model to a subset of all available covariates [5] [17]. Predictor variables that could contribute to overfitting are thus removed. In addition, through manual, human-controlled segmentation of lesions in a training environment, the reproducibility of feature detection across different human operators can be determined, and non-reproducible features are rejected [5].
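The following sketch illustrates the principle of LASSO-based feature selection with scikit-learn; the feature matrix and target are synthetic, and the point is only that the L1 penalty drives uninformative coefficients to zero.

```python
# Minimal sketch of LASSO-based feature selection, assuming a matrix of
# candidate radiomics features X and a target y (both synthetic here).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 40))                          # 150 cases, 40 candidate features
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(0, 0.5, 150)   # only a few features matter

X_scaled = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_scaled, y)

# The L1 penalty drives uninformative coefficients to exactly zero,
# removing predictors that could otherwise contribute to overfitting
selected = np.flatnonzero(lasso.coef_)
print("features retained:", selected)
```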
Optimization of AI and minimization of potential weaknesses
A current study on the underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations shows the need for critical examination of AI in radiology. This result has been given significant attention and shows the limitations of the technique. The authors of the study [18] were able to show that classifiers created with the latest computer vision techniques consistently and selectively underdiagnosed underserved patient groups and that the rate of underdiagnosis in these groups, e. g., Hispanic patients, was also significantly higher without the use of AI-based systems. This means that a group that is already underserved in the current medical system is also underserved by AI-supported systems, thus showing how important it is to continuously review and examine the algorithms [18].
Software systems that test AI systems for errors and correctness of explanation models to prevent such errors are now being developed [19] [20].
AI-based reporting systems undoubtedly support reporting [2] [5]. Many AI-based systems are used in chest radiography; for example, they achieve impressive accuracy between 0.935 and 0.978 in the diagnosis of pneumothorax [21] [22] [23]. Many AI approaches with very high diagnostic accuracy have also been introduced for diagnosing COVID-19 [24] [25] [26].
In the ideal case, AI systems should be trained and validated for the entire spectrum of possible diseases in datasets of varying quality within a certain examination modality. However, this is not yet possible due to the high variability in real clinical situations [7] [27]. Therefore, AI systems are currently only designed for a specific application and are limited to this application.
In addition to deep learning networks, radiomics is currently of interest as a future technique that may provide additional advantages with respect to reporting. After segmentation of the corresponding morphological correlate, e. g., a pulmonary nodule, AI systems use an assessment cascade containing as many learned features as possible to evaluate the detected lesion [9].
AI studies primarily from basic research with promising results often show only limited potential for generalizability and immediate clinical implementation. There is a high risk of distortion, particularly due to the lack of external validation. Moreover, a clear and understandable explanation of how the system works and which limitations need to be taken into consideration is missing in most cases [4] [15].
In unfavorable situations, even minor changes to the input data, often invisible to the human eye, can result in dramatically different classifications [5]. This different and, in the worst case, incorrect classification results from the fact that complex neural networks can overemphasize certain features. Body types outside the norm, for example, are therefore a problem for AI. A study from 2023 examined the possibility of automated volumetric analysis of the abdominal wall musculature. In this study, the AI system failed to perform automated muscle volumetry in a patient with well-defined abdominal muscles, apparently because the system had been trained on a certain ratio of fat to muscle: the muscles that were clearly visible to the radiologist were not correctly detected by the AI system because the subcutaneous fat tissue that was apparently always present in the AI training data was barely visible [28].
In connection with incorrect classification, one study shows that an imperceptible, non-random perturbation of an image to be evaluated can arbitrarily change the prediction of the neural network in spite of sufficient training. The reason for this error is complex. A sufficiently trained neural network is robust with respect to minor random perturbations of the input image dataset; a minor perturbation should not change the object category of an image. However, there are specific directions within the detection matrix that produce a significant deviation in the network’s results. If the perturbation lies in such a region, a maximum prediction error occurs. In information technology, such inputs are referred to as “adversarial examples”.
These adversarial examples are relatively robust – even if the neural network was trained with different subsets of the training data. This means that the neural network is specifically susceptible to discrete cases in the data to be analyzed and that especially “deep layer” networks trained by means of backpropagation intrinsically have blind spots. Interestingly, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on another subset of the dataset, to classify the same input incorrectly [16] [29].
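For illustration, the following PyTorch sketch constructs such a perturbation using the fast gradient sign method, a standard technique from the adversarial-examples literature; the untrained toy network and random image are stand-ins, and only with a real trained classifier would such a perturbation reliably flip the prediction.

```python
# Minimal sketch of the fast gradient sign method (FGSM) for constructing an
# adversarial example; model, image, and label are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

x = torch.rand(1, 1, 64, 64, requires_grad=True)   # "image" with a known label
label = torch.tensor([0])

loss = F.cross_entropy(model(x), label)
loss.backward()

# A perturbation that is imperceptibly small per pixel but aligned with the
# sign of the loss gradient can change the network's prediction
epsilon = 0.01
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```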
The best use of AI in medicine is as a reliable assistant requiring supervision ([Fig. 2]). The AI system ideally indicates possible errors in the relevant analysis. One possible method is for the AI system to provide reliability intervals so that the medical expert can determine when a closer look is warranted [30].
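One simple way such a safeguard could be realized is sketched below: the AI output is routed depending on its confidence, and low-confidence results are flagged for closer review. The threshold and probability values are purely illustrative assumptions.

```python
# Minimal sketch of flagging low-confidence AI results for human review,
# assuming the system outputs class probabilities; the threshold is illustrative.
import numpy as np

def triage(probabilities: np.ndarray, threshold: float = 0.90) -> str:
    """Return a routing decision based on the maximum class probability."""
    confidence = probabilities.max()
    if confidence >= threshold:
        return f"report suggestion (confidence {confidence:.2f})"
    return f"flag for closer radiologist review (confidence {confidence:.2f})"

print(triage(np.array([0.97, 0.03])))   # confident finding
print(triage(np.array([0.55, 0.45])))   # uncertain finding -> closer look
```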
The use of AI for assisted reporting in radiology is clearly the future, particularly when radiologists successfully use the strengths of the technology and know and avoid its weaknesses. According to Curtis P. Langlotz: “‘Will AI replace radiologists?’ is the wrong question. The right answer is: Radiologists who use AI will replace radiologists who don’t.” [2].
The use of AI systems can already begin today in patient management and appointment management, can continue to be used in reporting, and can provide significant support in the scientific evaluation of acquired data in order to save time and resources ([Fig. 3]).
Thus, in the future, radiologists will ideally be able to use multiple AI-supported systems to ease their workload in various ways in the daily routine at the hospital/practice.
Radiology in communication
The systematic use of all AI-assisted reporting options will save radiologists a significant amount of time ([Fig. 4]). Radiologists can and should use this for patients and referring colleagues.
There are significant differences between radiologists working in a practice/health care center and those working at a hospital. In most hospitals and clinics, the radiology department provides findings in writing, possibly combined with a clinical discussion, and the treating physician communicates with the patient. This concept is already established and absolutely desired by clinical colleagues [31]. It must be stated that interdisciplinary case discussions in hospitals improve the assessment of a patient’s clinical picture as a result of interactive communication which benefits patients, clinicians, and radiologists. This shows how important it is to invest the time gained by the use of AI in communication [32] [33] [34].
If radiological care is provided in a practice or at a health care center, findings are often still communicated to the patient directly by the physician. Patients want this communication and demand relatively little to be satisfied with the doctor-patient interaction [35]. A lack of time and an excessive workload in recent years have made this direct communication of findings less common – resulting in dissatisfaction among both patients and physicians [35] [36] [37].
Radiologists complain that opportunities to speak adequately with patients are insufficient and that their professional visibility is also often insufficient [38]. A non-representative patient questionnaire performed as part of one of our studies showed that 71 % of 386 surveyed patients reported that they did not have an opportunity to discuss the examination with the radiologist. This trend has also been confirmed by the RSNA: in a large survey among its members, 73 % stated that they do not have enough time to speak with patients due to workload and work density [39].
A well-written case report by a well-known radiologist shows that radiology can indeed be part of the clinical concept in patient diagnostics: small but significant details in the patient history, which often allow a complex clinical picture to be diagnosed, can be elicited in conversation with the patient [40].
The situation is slightly different for oncology patients. Since oncological treatments are highly complex, it is often virtually impossible for radiologists to provide information about further treatment or, in the case of disease progression, about a change in treatment. It should ideally be clarified in advance with the patient and the referring oncologist that the oncologist will discuss the findings with the patient.
In other cases, e. g., after trauma and a corresponding diagnosis of exclusion, patients as well as treating colleagues appreciate a brief discussion with the radiologist about the disease and the treatment to be expected. In addition, immediate communication of findings significantly shortens the time to treatment; in the case of a diagnosis of exclusion, the patient can immediately resume usual activities [41].
A rarely considered secondary effect is that the physician can positively influence upcoming treatment through expectation effects. Such side-effect-free treatment effects should increasingly be considered and implemented in diagnostic and therapeutic radiology [42]. With respect to the communication of findings, radiologists can use expectation effects to create pretherapeutic expectations that have a positive effect on the upcoming treatment [43] [44].
An initial consultation with a radiologist can also relieve some of the burden on the health care system. For example, informing patients of the low-risk nature of their disease can prevent them from seeking care from another discipline. This requires time and background knowledge [45].
Referring physicians have clear demands of radiology: diagnostic reports should be understandable and address the specific medical issue, and findings should be communicated quickly. Radiologists, in turn, state that the requested examination method is sometimes chosen incorrectly and that the medical question is often not formulated precisely enough [46]. This shows that further intensive work on communication is needed on both sides. Personal contact should be established where appropriate, or work shadowing could even be arranged, in order to optimize collaboration for the benefit of patients. With the systematic use of AI as a reporting tool, time savings can be achieved, creating better opportunities for patient-oriented collaboration ([Fig. 4] and [Tab. 1]).
Use of time gained as a result of AI
In the ideal case, the use of AI-supported radiology systems saves time. This time can be used in different ways. There is the risk in our health care system that, for economic reasons, the time gained by using AI will not be invested in communication but rather will be seen as an opportunity to further increase the number of patients examined per time unit. A modification of compensation would be one possibility to make doctor-patient communication more attractive. However, there is still the risk here that the time will only partly be used for communication so that the rest of the time can be invested in further increasing the number of examinations. Every radiologist should ultimately decide for themselves how to use the time gained as a result of the use of AI. Communication with referring physicians and patients is certainly desirable but it is not the only possibility.
Summary
Considering the numerous examinations, patients, reports, and referring physicians, radiology must above all be reliable, safe, and communicative. The systematic use of AI-supported systems helps radiologists to save time. AI must be implemented correctly and in a targeted manner, and radiologists must be familiar with the strengths and weaknesses of the AI system being used in order for it to optimally lighten their workload. Since current AI systems are optimized for a narrow field of activity, multiple systems need to be used for the best results.
Under optimal conditions, the use of AI systems results in a time savings for radiologists. The additional time can be used in various ways. More patients can be examined, examinations can be more comprehensive, or the time can be used for interaction.
In my opinion, the time gained as a result of the use of AI should be used for targeted communication with patients and referring colleagues. A targeted exchange results in better treatment of patients and higher satisfaction among radiologists.
Explainable artificial intelligence is the future of radiology. It will require human supervision, will save time, and will improve diagnosis.
The popular narrative of “device medicine” could change to “talking medicine”, and the radiologist would become a serious clinical partner – or at least would have the chance to.
References
- 1 Thrall JH, Li X, Li Q. et al. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. Journal of the American College of Radiology: JACR 2018; 15: 504-508 DOI: 10.1016/j.jacr.2017.12.026.
- 2 Langlotz CP. Will Artificial Intelligence Replace Radiologists?. Radiol Artif Intell 2019; 1: e190058 DOI: 10.1148/ryai.2019190058.
- 3 Liu PR, Lu L, Zhang JY. et al. Application of Artificial Intelligence in Medicine: An Overview. Curr Med Sci 2021; 41: 1105-1115 DOI: 10.1007/s11596-021-2474-3.
- 4 Kelly BS, Judge C, Bollard SM. et al. Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE). European radiology 2022; 32: 7998-8007 DOI: 10.1007/s00330-022-08784-6.
- 5 Savadjiev P, Chong J, Dohan A. et al. Demystification of AI-driven medical image interpretation: past, present and future. European radiology 2019; 29: 1616-1624 DOI: 10.1007/s00330-018-5674-x.
- 6 Poplin R, Varadarajan AV, Blumer K. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng 2018; 2: 158-164
- 7 Chen P-HC, Liu Y, Peng L. How to develop machine learning models for healthcare. Nat Mater 2019; 18: 410-414
- 8 Feuerecker B, Heimer MM, Geyer T. et al. Artificial Intelligence in Oncological Hybrid Imaging. Fortschr Röntgenstr 2023; 195: 105-114 DOI: 10.1055/a-1909-7013.
- 9 Binczyk F, Prazuch W, Bozek P. et al. Radiomics and artificial intelligence in lung cancer screening. Transl Lung Cancer Res 2021; 10: 1186-1199 DOI: 10.21037/tlcr-20-708.
- 10 Petrila O, Stefan AE, Gafitanu D. et al. The Applicability of Artificial Intelligence in Predicting the Depth of Myometrial Invasion on MRI Studies – A Systematic Review. Diagnostics (Basel) 2023; 13 DOI: 10.3390/diagnostics13152592.
- 11 Hussain Z, Gimenez F, Yi D. et al. Differential Data Augmentation Techniques for Medical Imaging Classification Tasks. AMIA Annu Symp Proc 2017; 2017: 979-984
- 12 Bundy A, Crowcroft J, Ghahramani Z. et al. Explainable AI: the basics. London: The Royal Society; 2019: 29
- 13 Phillips PJ, Hahn CA, Fontana PC. et al. Four Principles of Explainable Artificial Intelligence. NIST Interagency/Internal Report (NISTIR) 8312; 2021.
- 14 Bradshaw TJ, Huemann Z, Hu J. et al. A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging. Radiol Artif Intell 2023; 5: e220232 DOI: 10.1148/ryai.220232.
- 15 Moassefi M, Rouzrokh P, Conte GM. et al. Reproducibility of Deep Learning Algorithms Developed for Medical Imaging Analysis: A Systematic Review. J Digit Imaging 2023; DOI: 10.1007/s10278-023-00870-5.
- 16 DeGrave AJ, Janizek JD, Lee SI. AI for radiographic COVID-19 detection selects shortcuts over signal. medRxiv 2020; DOI: 10.1101/2020.09.13.20193565.
- 17 Tibshirani R. Regression Shrinkage and Selection via the Lasso. JSTOR 1996; 58: 267-288
- 18 Seyyed-Kalantari L, Zhang H, McDermott MBA. et al. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med 2021; 27: 2176-2182 DOI: 10.1038/s41591-021-01595-0.
- 19 Kuhn R. Explainability, Verification, and Validation for Assured Autonomy and AI. In; 2022
- 20 Hedström A, Weber L, Bareeva D. et al. Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. Journal of Machine Learning Research 2023; 24: 1-11
- 21 Wang Q, Liu Q, Luo G. et al. Automated segmentation and diagnosis of pneumothorax on chest X-rays with fully convolutional multi-scale ScSE-DenseNet: a retrospective study. BMC Med Inform Decis Mak 2020; 20: 317 DOI: 10.1186/s12911-020-01325-5.
- 22 Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65: 498-517 DOI: 10.1111/1754-9485.13273.
- 23 Wang X, Yang S, Lan J. et al. Automatic Segmentation of Pneumothorax in Chest Radiographs Based on a Two-Stage Deep Learning Method. IEEE Transactions on Cognitive and Developmental Systems 2022; 14: 205-218
- 24 Baltazar LR, Manzanillo MG, Gaudillo J. et al. Artificial intelligence on COVID-19 pneumonia detection using chest xray images. PloS one 2021; 16: e0257884 DOI: 10.1371/journal.pone.0257884.
- 25 Dey S, Bhattacharya R, Malakar S. et al. Choquet fuzzy integral-based classifier ensemble technique for COVID-19 detection. Comput Biol Med 2021; 135: 104585 DOI: 10.1016/j.compbiomed.2021.104585.
- 26 Nasiri H, Alavi SA. A Novel Framework Based on Deep Learning and ANOVA Feature Selection Method for Diagnosis of COVID-19 Cases from Chest X-Ray Images. Comput Intell Neurosci 2022; 2022: 4694567 DOI: 10.1155/2022/4694567.
- 27 Mongan J, Kalpathy-Cramer J, Flanders A. et al. RSNA-MICCAI Panel Discussion: Machine Learning for Radiology from Challenges to Clinical Applications. Radiol Artif Intell 2021; 3: e210118 DOI: 10.1148/ryai.2021210118.
- 28 Pooler BD, Garrett JW, Southard AM. et al. Technical Adequacy of Fully Automated Artificial Intelligence Body Composition Tools: Assessment in a Heterogeneous Sample of External CT Examinations. Am J Roentgenol 2023; 221: 124-134 DOI: 10.2214/AJR.22.28745.
- 29 Szegedy C, Zaremba W, Sutskever I. et al. Intriguing properties of neural networks. arXiv:1312.6199; 2013
- 30 Li D, Hu L, Peng X. et al. A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty. iScience 2022; 25: 103961 DOI: 10.1016/j.isci.2022.103961.
- 31 Erdogan N, Imamoglu H, Gorkem SB. et al. Preferences of referring physicians regarding the role of radiologists as direct communicators of test results. Diagn Interv Radiol 2017; 23: 81-85 DOI: 10.5152/dir.2016.16325.
- 32 Dalla Palma L, Stacul F, Meduri S. et al. Relationships between radiologists and clinicians: results from three surveys. Clin Radiol 2000; 55: 602-605 DOI: 10.1053/crad.2000.0495.
- 33 Cabarrus M, Naeger DM, Rybkin A. et al. Patients Prefer Results From the Ordering Provider and Access to Their Radiology Reports. Journal of the American College of Radiology: JACR 2015; 12: 556-562 DOI: 10.1016/j.jacr.2014.12.009.
- 34 Dendl LM, Teufel A, Schleder S. et al. Analysis of Radiological Case Presentations and their Impact on Therapy and Treatment Concepts in Internal Medicine. Fortschr Röntgenstr 2017; 189: 239-246 DOI: 10.1055/s-0042-118884.
- 35 Stueckle CA, Talarczyk S, Hackert B. et al. [Patient satisfaction with radiologists in private practice]. Der Radiologe 2020; 60: 70-76 DOI: 10.1007/s00117-019-00609-w.
- 36 Reiner BI. Strategies for radiology reporting and communication part 3: patient communication and education. J Digit Imaging 2013; 26: 995-1000 DOI: 10.1007/s10278-013-9647-y.
- 37 Rosenkrantz AB, Pysarenko K. The Patient Experience in Radiology: Observations From Over 3,500 Patient Feedback Reports in a Single Institution. Journal of the American College of Radiology: JACR 2016; 13: 1371-1377 DOI: 10.1016/j.jacr.2016.04.034.
- 38 European Society of Radiology (ESR). The identity and role of the radiologist in 2020: a survey among ESR full radiologist members. Insights Imaging 2020; 11: 130 DOI: 10.1186/s13244-020-00945-9.
- 39 Kemp JL, Mahoney MC, Mathews VP. et al. Patient-centered Radiology: Where Are We, Where Do We Want to Be, and How Do We Get There?. Radiology 2017; 285: 601-608 DOI: 10.1148/radiol.2017162056.
- 40 Flemming DJ, Gunderman RB. Should We Think of Radiologists as Nonclinicians?. Journal of the American College of Radiology: JACR 2016; 13: 875-877 DOI: 10.1016/j.jacr.2016.02.026.
- 41 Hardy M, Snaith B, Scally A. The impact of immediate reporting on interpretive discrepancies and patient referral pathways within the emergency department: a randomised controlled trial. Br J Radiol 2013; 86: 20120112 DOI: 10.1259/bjr.20120112.
- 42 Stueckle CA, Hackert B, Talarczyk S. et al. The physician as a success determining factor in CT-guided pain therapy. BMC Med Imaging 2021; 21: 11 DOI: 10.1186/s12880-020-00544-6.
- 43 Bingel U, Wanigasekera V, Wiech K. et al. The effect of treatment expectation on drug efficacy: imaging the analgesic benefit of the opioid remifentanil. Sci Transl Med 2011; 3: 70ra14 DOI: 10.1126/scitranslmed.3001244.
- 44 Sinke C, Schmidt K, Forkmann K. et al. Expectation influences the interruptive function of pain: Behavioural and neural findings. European journal of pain 2017; 21: 343-356 DOI: 10.1002/ejp.928.
- 45 Berkefeld J. Vaskuläre Zufallsbefunde in der MRT des Schädels. Radiologie up2date 2022; 22: 301-317
- 46 Espeland A, Baerheim A. General practitioners’ views on radiology reports of plain radiography for back pain. Scand J Prim Health Care 2007; 25: 15-19 DOI: 10.1080/02813430600973459.
Publication History
Received: 26 July 2023
Accepted after revision: 27 January 2024
Article published online: 03 April 2024
© 2024. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany