DOI: 10.1055/a-1346-7455
Artificial intelligence identifies and quantifies colonoscopy blind spots
Some mucosal surfaces on folds and within haustra may not be visualized at all during colonoscopy, so precancerous lesions in these areas are missed. We describe here preliminary results of an artificial intelligence (AI) technology that identifies and quantifies these “blind spots,” with the potential aim of directing endoscopists to them in real time.
Anonymized colonoscopy videos were acquired from a single endoscopist with a high (47 %) adenoma detection rate (ADR), using Olympus CF and PCF devices (Olympus USA, Valley Forge, Pennsylvania, USA) and high definition recording (Epiphan, Palo Alto, California, USA). The main AI modalities applied were a recurrent neural network (an AI type particularly suited to sequential data) that had been trained to compute depth [1] [2], combined with visual simultaneous localization and mapping (SLAM) [3].
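As an illustration of how per-frame depth estimates and SLAM camera poses can be combined, the sketch below back-projects a pixel with a predicted depth into camera coordinates and transforms it into world space with a camera-to-world pose. This is not the study software; the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the example pose are assumed values for demonstration only.

```python
# Illustrative sketch (not the authors' code): fusing a per-frame depth
# estimate with a SLAM camera pose to place a surface point in world space.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at the given depth
    into camera coordinates (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def to_world(point_cam, pose):
    """Apply a 3x4 camera-to-world pose [R|t] to a camera-space point."""
    x, y, z = point_cam
    return tuple(
        pose[r][0] * x + pose[r][1] * y + pose[r][2] * z + pose[r][3]
        for r in range(3)
    )

# One frame: identity rotation, camera translated 1 unit along z.
pose = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 1]]
p_cam = backproject(320, 240, 2.0, fx=500, fy=500, cx=320, cy=240)
p_world = to_world(p_cam, pose)
print(p_world)  # centre pixel maps straight ahead: (0.0, 0.0, 3.0)
```

Repeating this over all pixels of all frames, with poses tracked by SLAM, yields the fused point cloud from which a surface can be reconstructed.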
The AI software was applied to 76 colonoscopy video sequences from 18 patients, showing colon segments of 4–25 cm in length. This created three-dimensional (3D) reconstructions of the colon segments and then identified blind spots, which appear as holes or gaps in the reconstructions, and quantified these nonvisualized areas. The study endoscopist reviewed the reconstructions and validated that the blind spots had not been seen in the colonoscopy video. The last 12 3D reconstructions were generated in real time with the video sequences; these sequences were 2–6 seconds in duration, corresponding to colon lengths of 7–25 cm (median 10 cm).
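Once a segment is reconstructed as a surface mesh, the blind-spot percentage can be expressed as the area of faces never covered by any camera view divided by the total surface area. The sketch below shows this computation on a toy triangle mesh; it is an illustrative assumption about the quantification step, not the study's implementation.

```python
# Illustrative sketch: blind-spot area as a percentage of total
# reconstructed surface area, on a triangle mesh with per-face
# visibility flags (True = seen in at least one video frame).

def triangle_area(a, b, c):
    """Area of a 3D triangle via the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
    return 0.5 * sum(x * x for x in cross) ** 0.5

def missed_percentage(faces, seen):
    """faces: list of vertex triples; seen: parallel list of bools."""
    total = sum(triangle_area(*f) for f in faces)
    missed = sum(triangle_area(*f) for f, s in zip(faces, seen) if not s)
    return 100.0 * missed / total

# Two triangles of equal area; one was never visualized.
faces = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
         ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
print(missed_percentage(faces, [True, False]))  # → 50.0
```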
[Video 1] shows a colonoscopy video alongside the real-time reconstruction highlighting a blind spot. [Fig. 1] is an example of a reconstructed segment of colon with no blind spot, while [Fig. 2] shows a sequence of video images where the left wall of the colon was not visualized because of the camera angle, and the reconstructions with corresponding gaps.
Video 1 Artificial-intelligence identification and quantification of colonoscopy blind spots. The video shows the colonoscopy video (left) and the real-time reconstruction (center and right), which highlights the blind spot in the colon wall.
Our system calculated that, among the 76 reconstructed segments, blind spots ranged from 1 % to 50 % of the total surface area (interquartile range 8.7 %–27 %), with a median of 19 % of the surface area missed despite the operator's high ADR. [Fig. 3] shows the distribution of missed-area percentages among the 76 reconstructed colonic segments.
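The per-segment percentages can be summarized with a median and interquartile range as reported above. A minimal sketch using Python's standard library, on illustrative values rather than the study's data:

```python
# Sketch: summarizing per-segment missed-area percentages with a
# median and interquartile range (illustrative data, not the study's).
import statistics

missed = [5.0, 9.0, 12.0, 19.0, 23.0, 27.0, 40.0]  # % per segment
q1, med, q3 = statistics.quantiles(missed, n=4, method="inclusive")
print(f"median {med} %, IQR {q1} %-{q3} %")  # median 19.0 %, IQR 10.5 %-25.0 %
```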
Endoscopy_UCTN_Code_TTT_1AQ_2AB
Endoscopy E-Videos is a free access online section, reporting on interesting cases and new techniques in gastroenterological endoscopy. All papers include a high quality video and all contributions are freely accessible online. This section has its own submission website at https://mc.manuscriptcentral.com/e-videos
Competing interests
All authors are patent holders. Sarah K. McGill and Julian Rosenman have received research funding from Olympus.
References
- 1 Wang R, Pizer SM, Frahm J-M. Recurrent neural network for (un-)supervised learning of monocular video visual odometry and depth. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2019: 5555-5564. Available at: https://openaccess.thecvf.com/content_CVPR_2019/html/Wang_Recurrent_Neural_Network_for_Un-Supervised_Learning_of_Monocular_Video_Visual_CVPR_2019_paper.html
- 2 Ma R, Wang R, Pizer S, Rosenman J, McGill SK, Frahm J. Real-time 3D reconstruction of colonoscopic surfaces for determining missing regions. In: Shen D, et al., eds. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol 11768. Cham: Springer; 2019
- 3 Engel J, Koltun V, Cremers D. Direct sparse odometry. IEEE Trans Pattern Anal Mach Intell 2018; 40: 611-625
Publication History
Article published online:
04 February 2021
© 2021. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany