Endoscopy 2021; 53(09): 884-885
DOI: 10.1055/a-1352-4811
Editorial

As far as the AI can see

Referring to Ebigbo A et al. p. 878–883
Cadman L. Leggett
Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, United States

Staging of early esophageal adenocarcinoma (EAC) involves a complex process associated with profound clinical implications. Central to this process is the histologic evaluation of depth of invasion. Cancer infiltration limited to the mucosa (T1a) is associated with a lower risk of lymph node metastasis and overall improved survival compared with infiltration that extends into the submucosa (T1b) [1]. Consequently, endoscopic therapy can be considered curative in a subset of patients with T1a EAC but plays a limited role in patients with T1b EAC, for whom esophagectomy is associated with improved overall survival [2]. Ideally, staging of T1 EAC would be performed without the need for resection, avoiding an unnecessary intervention in patients with T1b EAC and thus preventing the associated complications of esophageal perforation, bleeding, and stricture formation [3] [4] [5]. Unfortunately, imaging modalities, including endoscopic ultrasound, have demonstrated suboptimal performance in differentiating T1a from T1b EAC [6]. As a result, the endoscopist performing the staging procedure plays a fundamental role in determining whether an early-stage cancer can be safely resected and the appropriate technique for doing so. In this process, the morphology, size, and adequate lift of the lesion away from the muscularis propria are taken into consideration. These factors are of critical importance not only to ensure adequate resection of the cancer but also to mitigate the risk of complications. Most often, however, the only way to truly know whether a lesion can be resected is to attempt its resection.

Artificial intelligence (AI) is a broad term that is commonly used in reference to deep-learning algorithms capable of extracting meaningful patterns from complex data. The number of AI applications in endoscopy has increased exponentially over recent years. Computer-aided detection and diagnostic algorithms are being applied to various gastrointestinal disorders, including Barrett’s esophagus and squamous cell esophageal cancer [7] [8]. In this issue of Endoscopy, an internationally prominent group of Barrett’s investigators with established expertise in the application of AI describe a deep-learning algorithm to differentiate T1a from T1b EAC [9]. The algorithm was trained and validated using single-frame endoscopy images of T1 EAC lesions, with histology from endoscopic resection specimens as the gold standard for diagnosis. A moderate and roughly equal number of training images of T1a and T1b cancers was used, and reasonably accurate results were achieved. The accuracy of the AI algorithm was 0.71, with a sensitivity of 0.77 and a specificity of 0.64. This was similar to the performance of five expert endoscopists, who demonstrated an accuracy of 0.63, but notably with only moderate interobserver agreement.
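
For readers less familiar with these performance measures, the sketch below illustrates how accuracy, sensitivity, and specificity are derived from a binary confusion matrix. The counts, and the assumption that T1b (submucosal invasion) is treated as the "positive" class, are purely illustrative; they are not the study’s data and merely reproduce metrics in the same range.

```python
# Illustrative only: how accuracy, sensitivity, and specificity relate to a
# binary confusion matrix. The counts below are hypothetical and do NOT come
# from the Ebigbo et al. study.

# Convention assumed here: "positive" = T1b (submucosal invasion).
tp = 77   # T1b lesions correctly called T1b
fn = 23   # T1b lesions missed (called T1a)
tn = 64   # T1a lesions correctly called T1a
fp = 36   # T1a lesions over-called as T1b

sensitivity = tp / (tp + fn)                     # proportion of T1b correctly identified
specificity = tn / (tn + fp)                     # proportion of T1a correctly identified
accuracy = (tp + tn) / (tp + fn + tn + fp)       # proportion of all lesions called correctly

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.3f}")
# -> sensitivity=0.77, specificity=0.64, accuracy=0.705 with this hypothetical balanced 100/100 split
```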

“The performance of this algorithm is quite impressive if we take into account that its input data consist of unselected single-frame images.”

The performance of this algorithm is quite impressive if we take into account that its input data consist of unselected single-frame images. The authors plan to improve the diagnostic capability of the system by implementing it in a real-life endoscopy setting. Using this approach, the algorithm may be capable of providing a more accurate classification of a lesion based on the analysis of several images extracted from video data. However, in doing so, the algorithm may also reach contradictory results if images are discrepant – an important consideration given the heterogeneous morphology associated with early EAC. A potential solution to this problem is to provide the algorithm with images carefully annotated for features that potentially distinguish T1a from T1b EAC. But what are these features? This study describes an interobserver agreement value that is far lower than the diagnostic accuracy among experts, suggesting a lack of consensus on the endoscopic characteristics that distinguish T1 cancers. However, the performance of the algorithm – and of the experts – would suggest that these characteristics do in fact exist, prompting the question of whether we can learn to identify them by using the algorithm, a concept often referred to as “explainability.” In this process, however, we must be careful not to introduce our own clinical bias. For example, in this study the training set consisted of images considered to be the most representative of the cancer lesion by the endoscopist performing the procedure; the AI algorithm, however, may not choose the same image! Ultimately, the true diagnostic value of this approach lies in the combined performance of the endoscopist and the algorithm, by comparing and contrasting our own interpretation with that of the machine.
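
To make the aggregation problem raised above concrete, the sketch below shows one simple way per-frame predictions extracted from video could be pooled into a single lesion-level call, with a flag when frames contradict one another. This is a hypothetical scheme for illustration only, not the authors’ planned implementation; the function name, thresholds, and probabilities are invented.

```python
import numpy as np

def aggregate_frame_predictions(frame_probs, threshold=0.5, disagreement_margin=0.25):
    """Pool per-frame probabilities of submucosal (T1b) invasion into one lesion-level call.

    Hypothetical scheme for illustration only.
    frame_probs: per-frame P(T1b) values produced by an image classifier.
    Returns the pooled probability, the binary call, and a flag indicating that the
    frames disagree widely enough that a human reviewer should adjudicate.
    """
    probs = np.asarray(frame_probs, dtype=float)
    pooled = probs.mean()                               # simple average across frames
    call = "T1b" if pooled >= threshold else "T1a"
    # Flag lesions whose frames straddle the decision threshold widely:
    contradictory = probs.max() - probs.min() > 2 * disagreement_margin
    return pooled, call, contradictory

# Example with made-up frame-level outputs for one lesion:
pooled, call, flag = aggregate_frame_predictions([0.35, 0.62, 0.71, 0.44, 0.58])
print(f"pooled={pooled:.2f}, call={call}, needs_review={flag}")
```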

When it comes to AI algorithms, it is also important not to lose sight of the gold standard. If histology is our current gold standard for staging T1 EAC, can an endoscopy-based AI algorithm truly diagnose depth of invasion even when it is present only focally within a lesion? In this context, it is possible that the performance of this algorithm may reach a diagnostic threshold that is limited by its own data input: this is as far as our AI can “see.” However, deep-learning algorithms are notoriously good at pattern recognition, particularly when provided with different data sources. It is not unimaginable that a deep-learning algorithm that jointly uses endoscopy and endoscopic ultrasound data will provide the diagnostic performance necessary to confidently stage T1 EAC without the need for resection. We may even be capable of training such an algorithm to predict other histologic variables, including grade of differentiation and lymphovascular invasion, which are also associated with clinical outcomes in this patient population [10]. Ultimately, the endoscopist performing the procedure has to decide whether resection is indicated, how to approach the lesion for resection, and what technique to use. To do so, this algorithm will have to be equipped with AI tools including lesion margin delineation and a heatmap of depth of invasion that outlines the area of the lesion most likely to extend into the submucosa.
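
Jointly using endoscopy and endoscopic ultrasound data could take many forms; one of the simplest is "late fusion," in which each modality has its own model and a small combiner learns how much weight to give each output. The sketch below is a hypothetical illustration with synthetic data, assuming two modality-specific probability outputs already exist; nothing here is described in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for modality-specific model outputs on 200 lesions:
# p_endo = P(T1b) from an endoscopic-image model, p_eus = P(T1b) from an EUS model.
y = rng.integers(0, 2, size=200)                        # 0 = T1a, 1 = T1b (simulated truth)
p_endo = np.clip(0.5 * y + rng.normal(0.25, 0.15, 200), 0, 1)
p_eus = np.clip(0.4 * y + rng.normal(0.30, 0.20, 200), 0, 1)

# Late fusion: a logistic regression learns how to weigh the two modalities.
X = np.column_stack([p_endo, p_eus])
fusion = LogisticRegression().fit(X, y)

new_lesion = np.array([[0.62, 0.71]])                   # hypothetical new case
print("fused P(T1b):", round(float(fusion.predict_proba(new_lesion)[0, 1]), 2))
```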

Some may view the application of AI interpretation to endoscopic imaging as a substitute for the human eye. This excellent preliminary study goes beyond this concept to remind us of the human element needed to train the program, the dependency of initial AI success on the images obtained and selected by humans, and the great potential to use our eyes and AI as complementary means of diagnosis. Most exciting to witness is how the authors of this study are pushing the boundaries of endoscopic diagnosis using AI. Staging of T1 EAC is challenging not because we lack the right tools but because it is a complex and meticulous process. This is where AI solutions thrive – by simplifying this process and identifying new patterns that increase performance. This preliminary work will serve as the framework on which to build a comprehensive platform for staging T1 EAC, guiding the endoscopist every step of the way.



Publication History

Article published online:
26 August 2021

© 2021. Thieme. All rights reserved.


 
References

1. Dunbar KB, Spechler SJ. The risk of lymph-node metastases in patients with high-grade dysplasia or intramucosal carcinoma in Barrett’s esophagus: a systematic review. Am J Gastroenterol 2012; 107: 850-862
2. Otaki F, Ma GK, Krigel A, et al. Outcomes of patients with submucosal (T1b) esophageal adenocarcinoma: a multicenter cohort study. Gastrointest Endosc 2020; 92: 31-39
3. Genere JR, Priyan H, Sawas T, et al. Safety and histologic outcomes of endoscopic submucosal dissection with a novel articulating knife for esophageal neoplasia. Gastrointest Endosc 2020; 91: 797-805
4. Omae M, Konradsson M, Baldaque-Silva F. Delayed perforation after endoscopic submucosal dissection treated successfully by temporary stent placement. Clin J Gastroenterol 2018; 11: 118-122
5. Yang D, Zou F, Xiong S, et al. Endoscopic submucosal dissection for early Barrett’s neoplasia: a meta-analysis. Gastrointest Endosc 2018; 87: 1383-1393
6. Bergeron EJ, Lin J, Chang AC, et al. Endoscopic ultrasound is inadequate to determine which T1/T2 esophageal tumors are candidates for endoluminal therapies. J Thorac Cardiovasc Surg 2014; 147: 765-771, discussion 771-773
7. de Groof AJ, Struyvenberg MR, van der Putten J, et al. Deep-learning system detects neoplasia in patients with Barrett’s esophagus with higher accuracy than endoscopists in a multistep training and validation study with benchmarking. Gastroenterology 2020; 158: 915-929
8. Tokai Y, Yoshio T, Aoyama K, et al. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus 2020; 17: 250-256
9. Ebigbo A, Mendel R, Rückert T, et al. Endoscopic prediction of submucosal invasion in Barrett’s cancer with the use of artificial intelligence: a pilot study. Endoscopy 2021; 53: 878-883
10. Leggett CL, Lewis JT, Wu TT, et al. Clinical and histologic determinants of mortality for patients with Barrett’s esophagus-related T1 esophageal adenocarcinoma. Clin Gastroenterol Hepatol 2015; 13: 658-664