DOI: 10.1055/a-1036-6114
Feedback from artificial intelligence improved the learning of junior endoscopists on histology prediction of gastric lesions
Publication History
submitted 25 June 2019
accepted after revision 09 October 2019
Publication Date: 22 January 2020 (online)
Abstract
Background and study aims Artificial intelligence (AI)-assisted image classification has been shown to have high accuracy in endoscopic diagnosis. We evaluated the potential effects of an AI-assisted image classifier on the training of junior endoscopists for histological prediction of gastric lesions.
Methods An AI image classifier was built on a convolutional neural network with five convolutional layers and three fully connected layers. Its ResNet backbone was trained with 2,000 non-magnified endoscopic gastric images. The independent validation set consisted of another 1,000 endoscopic images from 100 gastric lesions. The first part of the validation set was reviewed by six junior endoscopists; the AI predictions were then disclosed to three of them (Group A) while the remaining three (Group B) were not given this information. All endoscopists then reviewed the second part of the validation set independently.
Results The overall accuracy of AI was 91.0 % (95 % CI: 89.2–92.7 %), with 97.1 % sensitivity (95 % CI: 95.6–98.7 %), 85.9 % specificity (95 % CI: 83.0–88.4 %) and 0.91 area under the receiver operating characteristic curve (AUROC) (95 % CI: 0.89–0.93). AI was superior to all junior endoscopists in accuracy and AUROC in both validation sets. The performance of Group A endoscopists, but not Group B endoscopists, improved on the second validation set (accuracy 69.3 % to 74.7 %; P = 0.003).
Conclusion The trained AI image classifier can accurately predict the presence of a neoplastic component in gastric lesions. Feedback from the AI image classifier can also hasten the learning curve of junior endoscopists in predicting the histology of gastric lesions.
Introduction
Gastric cancer is the fifth most common cancer and accounts for more than 800,000 deaths worldwide each year [1]. Early detection and accurate characterization of gastric neoplastic lesions during endoscopy are of paramount importance because the prognosis of early gastric cancer is excellent [2] [3]. However, early gastric neoplastic lesions are usually subtle and easily missed [4]. Use of optical magnifying endoscopy in combination with chromoendoscopy or image-enhanced endoscopy such as narrow-band imaging (NBI) has been suggested to help differentiate and characterize early gastric lesions by enhancing the microsurface and microvascular pattern. In particular, an irregular microsurface and microvascular pattern under NBI examination was associated with the presence of intraepithelial neoplasia [5] [6] [7] [8] [9]. Nevertheless, this kind of endoscopic diagnostic skill requires a considerable amount of training and experience, which may not be readily available in most endoscopy units.
In the absence of reliable histological prediction of endoscopic gastric lesions, the gold standard for diagnosis usually requires multiple biopsies or even total en bloc resection, as a single biopsy may miss the most advanced pathology of a lesion. However, processing of multiple biopsies is costly and complete excision of large gastric lesions is technically challenging [10]. Sampling error can also produce false-negative results [11]. With the rapid development of artificial intelligence (AI) in endoscopy, a pilot study has shown the possibility of using AI for accurate detection of early gastric lesions [12]. A recent article also showed the potential of AI in predicting the depth of invasion of gastric lesions [13].
So far, however, there are no data on investigations specifically of the role of AI in training of junior endoscopists. In this study, we assessed the role of AI in training junior endoscopists in predicting histology of endoscopic gastric lesions.
Methods
Setting
The study was conducted in the Integrated Endoscopy Center of the Queen Mary Hospital of Hong Kong, which is a major regional hospital serving the Hong Kong West Cluster and a university teaching hospital. The study protocol was approved by the Institutional Review Board of the Hospital Authority Hong Kong West Cluster and the University of Hong Kong.
All baseline endoscopies were performed with a gastroscope without optical magnification (GIF-HQ290 gastroscope and CV-290 video system; Olympus, Tokyo, Japan).
In this study, we included only gastric lesions with Paris Classification type 0-IIa, IIb, IIc or Is. In addition to elevated lesions, subtle mucosal changes or ulcer scars with shapes similar to IIc lesions were also included. Still endoscopic images were retrieved from the electronic patient record system or the archived endoscopic video system of our endoscopy unit. Image resolution was at least 720 × 526 pixels and images were obtained under NBI. NBI was used because our previous study had demonstrated its superiority over white light for AI interpretation [14]. The gold standard was the final gastric pathology, which was based on multiple biopsies or total endoscopic resection of the lesion and classified according to the WHO classification [15]. Neoplastic lesions were defined pathologically as the presence of intraepithelial neoplasia (dysplasia) or adenocarcinoma in the most advanced histology of a lesion. Non-neoplastic lesions were defined as the absence of intraepithelial neoplasia (dysplasia) or adenocarcinoma in any part of a lesion.
Building the AI image classifier and training set
An AI image classifier was built on a convolutional neural network (CNN) with five convolutional layers and three fully connected layers, using endoscopic images of gastric lesions obtained between January 2013 and December 2016. The classifier was based on a pre-trained ResNet CNN backbone. All training images were pre-screened by an experienced endoscopist (TKLL), who had performed more than 4,000 image-enhanced upper endoscopies with NBI. Multiple images per lesion were generated by image augmentation (rotation, flipping, and reversing) to expand the training set. A region of interest (ROI) of 300 × 300 pixels within each endoscopic image was randomly selected. Images containing motion artefact, out-of-focus images, images with inappropriate brightness, and images covered with mucus were excluded. The final training set consisted of 2,000 ROI images (1,000 ROI images from 170 neoplastic lesions and 1,000 ROI images from 230 non-neoplastic lesions). Ten percent of the training images were randomly chosen as an internal validation set, on which internal accuracy was 99.5 %.
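As a rough illustration of this training pipeline, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the ResNet variant, hyperparameters, folder layout, and epoch count are all assumptions made for the example.

```python
# Minimal sketch (assumptions throughout): fine-tune a pre-trained ResNet on
# 300 x 300 ROI crops with rotation/flip augmentation, as described in the text.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation loosely mirrors the rotation/flipping/reversing expansion.
train_tf = transforms.Compose([
    transforms.RandomRotation(degrees=180),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: roi_train/neoplastic, roi_train/non_neoplastic
train_ds = datasets.ImageFolder("roi_train", transform=train_tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Pre-trained ResNet backbone with a 2-class head (neoplastic vs. non-neoplastic);
# ResNet-18 is an arbitrary choice, since the paper does not name the variant.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):          # number of epochs is an assumption
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```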
Validation set
The independent validation set consisted of another 1,000 ROI selected from endoscopic images of 100 gastric lesions obtained between January 2017 and January 2019. The ROI within the endoscopic images was selected as described for the training set. To minimize selection bias, 10 ROIs were randomly selected from a single endoscopic image of a lesion. The ROI images were then analyzed by the trained AI image classifier to predict presence of neoplastic lesion ([Fig. 1]).
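The random ROI selection step might look like the following minimal sketch; the file name, crop logic, and the reuse of the classifier from the previous sketch are assumptions for illustration only.

```python
# Minimal sketch (assumed file names and sizes): randomly sample ten
# 300 x 300 ROI crops from one endoscopic image, then score each crop
# with a trained classifier.
import random
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()

def sample_rois(image_path, n_rois=10, size=300, seed=0):
    """Return n_rois random square crops from a single endoscopic image."""
    rng = random.Random(seed)
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    crops = []
    for _ in range(n_rois):
        left = rng.randint(0, max(0, w - size))
        top = rng.randint(0, max(0, h - size))
        crops.append(img.crop((left, top, left + size, top + size)))
    return crops

# Hypothetical usage with the model sketched above:
# model.eval()
# rois = sample_rois("lesion_001.png")
# with torch.no_grad():
#     probs = [torch.softmax(model(to_tensor(r).unsqueeze(0)), dim=1)[0, 1].item()
#              for r in rois]
```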
The validation set was randomly divided into two parts with 500 ROIs in each part. Six junior endoscopists (Endoscopists I to VI), each of whom had performed more than 1,000 upper endoscopies and had undergone special NBI training tutorials on characterizing gastric lesions, were asked to comment on whether the ROIs from the first part of the validation set showed neoplastic lesions. After the first half of the validation set had been reviewed, the AI prediction results were disclosed to three of them (Group A: Endoscopists I, II, III), while the remaining three (Group B: Endoscopists IV, V, VI) were not given this information. All six endoscopists then reviewed the second part of the validation set ([Fig. 2]). As a further control, a senior endoscopist who had performed more than 4,000 upper endoscopies and had received special NBI training on characterizing gastric lesions also reviewed the validation set.
Statistical analysis
We assumed that AI was superior to an endoscopist and that the accuracy of the AI image classifier was 90 %. Assuming a difference of 20 % in accuracy, with a statistical power of 80 % and a two-sided significance level of 0.05, 50 ROIs were needed in each study arm. Categorical data were compared by the χ2 test or Fisher's exact test, as appropriate. Numerical data were analyzed by Student's t-test. Statistical significance was taken as a two-sided P < 0.05. For multiple comparisons, P values were adjusted by Bonferroni correction. A two-by-two table was constructed from the predicted and actual outcomes to calculate sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. Confidence intervals (CIs) for sensitivity, specificity, and accuracy were Clopper-Pearson CIs; CIs for the predictive values were standard logit CIs. All statistical analyses were performed with SPSS Statistics software (version 19.0, SPSS, Chicago, Illinois, United States).
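As an illustration of the diagnostic-test calculations described above, a minimal Python sketch follows. The study itself used SPSS, so this is only a stand-in, and the counts at the bottom are purely hypothetical.

```python
# Minimal sketch (not the authors' SPSS analysis): diagnostic metrics from a
# 2x2 table, with Clopper-Pearson CIs for sensitivity/specificity/accuracy
# and standard logit CIs for the predictive values, as stated in the text.
import math
from scipy.stats import beta, norm

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def logit_ci(k, n, alpha=0.05):
    """Standard logit CI for a proportion k/n (used here for PPV/NPV)."""
    p = k / n
    z = norm.ppf(1 - alpha / 2)
    se = math.sqrt(1 / k + 1 / (n - k))          # SE on the logit scale
    expit = lambda x: 1 / (1 + math.exp(-x))
    return expit(math.log(p / (1 - p)) - z * se), expit(math.log(p / (1 - p)) + z * se)

def diagnostic_summary(tp, fp, fn, tn):
    """Return point estimates and 95 % CIs for the usual 2x2-table metrics."""
    n = tp + fp + fn + tn
    return {
        "sensitivity": (tp / (tp + fn), clopper_pearson(tp, tp + fn)),
        "specificity": (tn / (tn + fp), clopper_pearson(tn, tn + fp)),
        "ppv": (tp / (tp + fp), logit_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), logit_ci(tn, tn + fn)),
        "accuracy": ((tp + tn) / n, clopper_pearson(tp + tn, n)),
    }

# Hypothetical counts, for illustration only:
print(diagnostic_summary(tp=90, fp=15, fn=5, tn=90))
```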
Results
Clinicopathological characteristics of the gastric lesions in the validation set are summarized in [Table 1]. Mean lesion size was 14.9 mm (range: 5 to 40 mm) and 71 lesions were located in the antrum. The majority of lesions were Paris type 0-IIa (55.0 %, n = 55), followed by 0-IIb (22.0 %, n = 22), 0-Is (12.0 %, n = 12) and 0-IIc (11.0 %, n = 11). Forty-eight were neoplastic lesions, including 13 adenocarcinomas, five high-grade dysplasias and 30 low-grade dysplasias.
Performance of trained AI on validation set
Overall accuracy of AI for prediction of neoplasia was 91.0 % (95 % CI: 89.1–92.7 %), with 97.3 % sensitivity (95 % CI: 95.4–98.5 %), 85.1 % specificity (95 % CI: 81.7–88.1 %), 85.9 % PPV (95 % CI: 82.7–88.7 %), 97.1 % NPV (95 % CI: 95.1–98.4 %) and 0.92 AUROC (95 % CI: 0.89–0.93). The AUROC for AI prediction in the body was significantly better than in the antrum (0.95 vs 0.90, P = 0.01), and the corresponding accuracy of AI in the body was also better than in the antrum (0.95 vs 0.90, P = 0.01). In terms of morphology, AI had significantly higher accuracy (98.2 % vs 91.4 % and 83.6 %, P < 0.05) and AUROC (0.99 vs 0.92 and 0.91, P < 0.05) in analyzing IIc lesions than IIa and IIb lesions ([Table 2]). Overall, AI was more confident in prediction of non-neoplastic than neoplastic lesions (84.5 % vs 81.8 %, P < 0.01).
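The subgroup comparisons above (body vs. antrum, lesion morphology) could be illustrated with a simple resampling approach such as the sketch below. The original analysis was done in SPSS, so the bootstrap here is only a hypothetical stand-in, and all variable names are assumptions.

```python
# Minimal sketch (assumed data): per-ROI predicted probabilities and labels
# grouped by lesion location, with a crude bootstrap comparison of two AUROCs.
# This illustrates the kind of subgroup analysis reported, not the authors' method.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auroc_diff(y_a, p_a, y_b, p_b, n_boot=2000):
    """Observed AUROC(A) - AUROC(B) and a two-sided bootstrap p-value."""
    observed = roc_auc_score(y_a, p_a) - roc_auc_score(y_b, p_b)
    diffs = []
    for _ in range(n_boot):
        ia = rng.integers(0, len(y_a), len(y_a))
        ib = rng.integers(0, len(y_b), len(y_b))
        try:
            diffs.append(roc_auc_score(y_a[ia], p_a[ia]) -
                         roc_auc_score(y_b[ib], p_b[ib]))
        except ValueError:      # a resample contained only one class
            continue
    diffs = np.asarray(diffs)
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, p

# Hypothetical usage with numpy arrays of labels (0/1) and probabilities:
# obs, p = bootstrap_auroc_diff(labels_body, probs_body, labels_antrum, probs_antrum)
```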
Validation set results
Performance of AI and the six junior endoscopists on the first part of the validation set is summarized in [Table 3]. AI was better than all six endoscopists in accuracy (all P < 0.01) and AUROC (all P < 0.01). AI was also superior to individual endoscopists in sensitivity (AI vs II, III and IV; all P < 0.01), specificity (AI vs I, III, V and VI; all P < 0.01), PPV (AI vs I and VI; all P < 0.01) and NPV (AI vs II, III, IV, VI; all P < 0.01).
Table 3 Performance of AI, the senior endoscopist, and junior endoscopists I–VI on the first part of the validation set.

| | AI | Senior | I | II | III | IV | V | VI |
|---|---|---|---|---|---|---|---|---|
| Sensitivity | 96.0 % | 88.1 % | 96.0 % | 42.3 % | 77.1 % | 52.5 % | 87.9 % | 85.2 % |
| Specificity | 88.1 % | 79.8 % | 48.0 % | 94.2 % | 58.8 % | 82.7 % | 61.7 % | 40.4 % |
| PPV | 86.6 % | 84.4 % | 59.8 % | 85.4 % | 60.1 % | 70.9 % | 64.9 % | 53.5 % |
| NPV | 96.4 % | 84.4 % | 93.7 % | 67.0 % | 76.1 % | 68.4 % | 86.4 % | 77.2 % |
| Accuracy[1] | 91.6 % | 84.4 % | 69.4 % | 71.1 % | 67.0 % | 69.2 % | 73.4 % | 60.4 % |
| AUROC[1] | 0.92 | 0.84 | 0.72 | 0.68 | 0.68 | 0.68 | 0.75 | 0.63 |
| Mean confidence | 84.0 % | 94.6 % | 92.5 % | 75.4 % | 75.0 % | 85.6 % | 87.1 % | 75.5 % |

PPV, positive predictive value; NPV, negative predictive value; AUROC, area under the receiver operating characteristics curve.
1 AI was superior to all junior endoscopists in accuracy and AUROC (all P < 0.01).
After the AI prediction results from the first part of the validation set were revealed to the Group A endoscopists, performance on the second part was as summarized in [Table 4]. In the second part, AI remained superior to all six endoscopists in accuracy (all P < 0.01) and AUROC (all P < 0.01). Specifically, AI was superior to individual endoscopists in sensitivity (AI vs II, III, IV, V and VI; all P < 0.01), specificity (AI vs I, V and VI; all P < 0.01), PPV (AI vs I, V and VI; all P < 0.01), and NPV (AI vs II, III, IV, V and VI; all P < 0.01).
Table 4 Performance of AI, the senior endoscopist, and junior endoscopists I–VI on the second part of the validation set.

| | AI | Senior | I | II | III | IV | V | VI |
|---|---|---|---|---|---|---|---|---|
| Sensitivity | 98.4 % | 87.6 % | 99.6 % | 60.8 % | 73.9 % | 39.1 % | 80.8 % | 73.9 % |
| Specificity | 82.4 % | 73.4 % | 51.5 % | 85.2 % | 81.5 % | 96.3 % | 55.6 % | 59.3 % |
| PPV | 84.8 % | 81.5 % | 63.6 % | 77.8 % | 77.3 % | 90.0 % | 39.2 % | 60.7 % |
| NPV | 98.1 % | 99.4 % | 99.3 % | 71.9 % | 78.6 % | 65.0 % | 77.3 % | 72.7 % |
| Accuracy[1] | 90.4 % | 87.6 % | 73.6 % | 74.0 % | 78.0 % | 70.0 % | 67.2 % | 66.0 % |
| AUROC[1] | 0.91 | 0.90 | 0.75 | 0.73 | 0.78 | 0.68 | 0.68 | 0.67 |
| Mean confidence | 82.3 % | 94.9 % | 90.4 % | 75.6 % | 75.2 % | 78.1 % | 87.5 % | 75.3 % |

PPV, positive predictive value; NPV, negative predictive value; AUROC, area under the receiver operating characteristics curve.
1 AI was superior to all junior endoscopists in accuracy and AUROC (all P < 0.01).
The Group A endoscopists, to whom the AI prediction results from the first part of the validation set had been revealed, significantly improved on the second part of the validation set in accuracy (69.3 % to 74.7 %, P = 0.003), AUROC (0.69 to 0.75, P = 0.018), sensitivity (72.0 % to 82.7 %, P = 0.049) and NPV (74.7 % to 82.5 %, P = 0.003). In contrast, Group B endoscopists, who were unaware of the AI findings, significantly improved only in specificity (61.6 % to 70.4 %, P < 0.001) but worsened in sensitivity (75.1 % to 64.6 %, P < 0.001) ([Table 5]). AI was better than the senior endoscopist in accuracy (91.6 % vs 84.4 %, P < 0.01) and AUROC (0.92 vs 0.84, P < 0.01) in the first part of the validation set, but not in the second part.
Discussion
We have developed an AI image classifier for characterization of gastric neoplastic lesions based on non-magnified endoscopic images obtained with NBI. The trained AI achieved accuracy above 90 % and sensitivity above 97 % in predicting the presence of neoplastic lesions, which was superior to all six junior endoscopists. With feedback of the AI prediction results, the junior endoscopists showed significant improvement in predicting the presence of neoplasia in gastric lesions in the second part of the validation study. In contrast, those who did not receive feedback from AI showed no improvement in accuracy of prediction and even worsened in sensitivity, further suggesting that AI feedback may shorten the learning curve for prediction of histology. The experienced endoscopist, meanwhile, appeared to catch up quickly in the second part of the validation set, achieving performance comparable to the AI prediction.
Unlike endoscopy centers in Japan, most centers elsewhere have limited experience in characterizing gastric neoplastic lesions. With the availability of trained AI, instant prediction of gastric lesion histology may become possible. More importantly, AI could also help to shorten the learning curve of less experienced endoscopists by providing immediate feedback, like a virtual supervisor. Although there were initial concerns that dependence on AI technology could lead to deterioration of learned skills [16] [17], our findings may suggest the opposite.
Traditionally, the presence of a neoplastic lesion can be predicted by magnifying endoscopy through the presence of a demarcation line together with irregular microvascular (MV) and microsurface (MS) patterns [4] [18]. With the increasing use of high-definition endoscopic imaging, high-quality images can also be achieved with non-magnifying endoscopes by changing the depth of field of observation (e.g., a near-focus function), which can mimic the traditional optical magnifying image [19]. Use of NBI endoscopic images also allows AI to characterize endoscopic lesions better than white-light endoscopy [14]. An AI image classifier has a distinct advantage in analyzing such images with high accuracy, and it is not surprising that a trained AI can differentiate the histology of gastric lesions better than trainee endoscopists. In fact, previous studies showed that the performance of AI was comparable to that of experts but did not exceed it [20] [21].
Another important observation was that the AI was more confident in prediction of non-neoplastic lesions than neoplastic lesions. For non-neoplastic lesions, the MS and MV patterns are usually regular and show minimal variation compared with neoplastic lesions [18], which likely explains the higher confidence of AI in predicting non-neoplastic lesions.
Our trained AI, which is based on still endoscopic images, will be very useful for further development of real-time AI diagnosis of gastric lesions. Given the high NPV (> 97 %), a negative response from AI would favor simple biopsy rather than complete resection of a lesion. Moreover, AI could also be very useful in selecting the biopsy site within a lesion: traditionally, multiple biopsies have to be taken from a lesion to minimize sampling error, whereas AI could identify the exact biopsy site with the best diagnostic yield. Because our AI image classifier is based on images from a readily available non-magnifying endoscopy system, it can be incorporated easily into an existing system without the need for major equipment changes.
This study has limitations. First, it is retrospective and the lesions were not a consecutive series, so it could suffer from selection bias, particularly in the selection of training and validation endoscopic images. Our AI image classifier analyzed static images, which were usually taken by endoscopists experienced in image-enhanced endoscopy. Second, inexperienced endoscopists may introduce a sampling issue by not choosing the correct region of interest of a lesion for AI interpretation, which may result in lower accuracy. Hence, a prospective real-time study involving endoscopists with variable experience is needed to validate our findings. Third, the current study focused on characterization rather than detection of gastric lesions. Because early gastric lesions can be very subtle, an endoscopist still needs to identify the lesion before applying AI. However, applying AI to suspected lesions would take less time than obtaining multiple biopsies and may potentially increase detection of subtle lesions that might otherwise not be biopsied.
Conclusion
We have developed an accurate AI image classifier for prediction of histology of gastric lesions based on non-magnified endoscopic images. The trained AI is better than junior endoscopists at histological prediction, and it can also help to shorten the learning curve of junior endoscopists in histological characterization of gastric lesions.
Competing interests
None
References
- 1 Fitzmaurice C, Allen C, Barber RM. et al. Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 32 cancer groups, 1990 to 2015: a systematic analysis for the global burden of disease study global burden. JAMA Oncol 2017; 3: 524-548
- 2 Yokota T, Ishiyama S, Saito T. et al. Lymph node metastasis as a significant prognostic factor in gastric cancer: A multiple logistic regression analysis. Scand J Gastroenterol 2004; 39: 380-384
- 3 Zheng Z, Liu Y, Bu Z. et al. Prognostic role of lymph node metastasis in early gastric cancer. Chinese J Cancer Res 2014; 26: 192-199
- 4 Kaise M. Advanced endoscopic imaging for early gastric cancer. Green J. (ed.) Best Pract Res Clin Gastroenterol 2015; 29: 575-587
- 5 Wang L, Huang W, Du J. et al. Diagnostic yield of the light blue crest sign in gastric intestinal metaplasia: A meta-analysis. PLoS One 2014; 9: e92874
- 6 Dinis-Ribeiro M, DaCosta-Pereira A, Lopes C. et al. Magnification chromoendoscopy for the diagnosis of gastric intestinal metaplasia and dysplasia. Gastrointest Endosc 2003; 57: 498-504
- 7 Morales TG, Bhattacharyya A, Camargo E. et al. Methylene blue staining for intestinal metaplasia of the gastric cardia with follow-up for dysplasia. Gastrointest Endosc 1998; 48: 26-32
- 8 Yao K, Anagnostopoulos GK, Ragunath K. Magnifying endoscopy for diagnosing and delineating early gastric cancer. Endoscopy 2009; 41: 462-467
- 9 Chai N-L, Ling-Hu E-Q, Morita Y. et al. Magnifying endoscopy in upper gastroenterology for assessing lesions before completing endoscopic removal. World J Gastroenterol 2012; 18: 1295-1307
- 10 Gotoda T, Ho K-Y, Soetikno R. et al. Gastric ESD: current status and future directions of devices and training. Gastrointest Endosc Clin N Am 2014; 24: 213-233
- 11 Maekawa A, Kato M, Nakamura T. et al. Incidence of gastric adenocarcinoma among lesions diagnosed as low‐grade adenoma/dysplasia on endoscopic biopsy: A multicenter, prospective, observational study. Dig Endosc 2018; 30: 228-235
- 12 Hirasawa T, Aoyama K, Tanimoto T. et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018; 21: 653-660
- 13 Zhu Y, Wang Q-C, Xu M-D. et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc 2019; 89: 806-815.e1
- 14 Lui T, Wong K, Mak L. et al. Endoscopic prediction of deeply submucosal invasive carcinoma with use of artificial intelligence. Endosc Int Open 2019; 07: E514-E520
- 15 Brambilla E, Travis WD, Colby TV. et al. The new World Health Organization classification of lung tumours. Eur Respir J 2001; 18: 1059-1068
- 16 Rahwan I, Cebrian M, Obradovich N. et al. Machine behaviour. Nature 2019; 568: 477-486
- 17 Coiera E, Kocaballi B, Halamaka J. et al. The price of artificial intelligence. IMIA Yearb Med Informatics 2019; 1: 1-2
- 18 Yao K, Anagnostopoulos GK, Ragunath K. Magnifying endoscopy for diagnosing and delineating early gastric cancer. Endoscopy 2009; 41: 462-467
- 19 Goda K, Dobashi A, Yoshimura N. et al. Dual-focus versus conventional magnification endoscopy for the diagnosis of superficial squamous neoplasms in the pharynx and esophagus: A randomized trial. Endoscopy 2016; 48: 321-329
- 20 Byrne MF, Chapados N, Soudan F. et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019; 68: 94-100
- 21 Mori Y, Kudo SE, Misawa M. et al. Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy: a prospective study. Ann Intern Med 2018; 169: 357-366