DOI: 10.1055/s-0044-1782894
Automatic detection of pancreatic ductal adenocarcinoma in endoscopic ultrasound-guided fine-needle biopsy samples based on whole slide imaging using deep learning segmentation architectures
Aims Endoscopic ultrasound (EUS)-guided fine-needle biopsy (FNB) is the procedure of choice for diagnosing pancreatic ductal adenocarcinoma (PDAC). The samples obtained are small and require pathological expertise, and the diagnosis is difficult given the scarcity of malignant cells and the pronounced desmoplastic reaction of these tumors. Moreover, the limited availability of publicly accessible pancreatic histopathology datasets has left automated PDAC detection, particularly from whole slide imaging (WSI), under-researched. In this study, three U-Net architecture variants were compared on two datasets of EUS-guided FNB samples from two medical centers (Craiova and Bucharest), acquired with different parameters and tools. The resulting WSIs are multi-gigabyte images with typical resolutions of 40,000×40,000 pixels, used to train and evaluate the segmentation models.
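Because multi-gigabyte WSIs are far too large to feed to a network directly, they are tiled into small patches (256×256 pixels in this study) before training. The sketch below illustrates the general idea with NumPy on an in-memory array; it is not the authors' pipeline, which the abstract does not describe, and a real WSI would be read region by region with a dedicated library (e.g. OpenSlide) rather than loaded whole.

```python
import numpy as np

PATCH = 256  # patch side length used in the study


def extract_patches(wsi, patch=PATCH):
    """Tile an (H, W, C) image array into non-overlapping patch×patch tiles.

    Border regions smaller than a full patch are discarded here for
    simplicity; how borders were actually handled is not stated in the
    abstract.
    """
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(wsi[y:y + patch, x:x + patch])
    return np.stack(patches)


# toy example: a 512×768 "slide" yields a 2×3 grid of 6 patches
demo = np.zeros((512, 768, 3), dtype=np.uint8)
print(extract_patches(demo).shape)  # (6, 256, 256, 3)
```

In practice the extracted patches would then be filtered (e.g. by tissue content) and labelled positive or negative, as in the patch counts reported below.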
Methods The Craiova dataset contains 31 PDAC WSIs, from which we extracted 4,040 patches (2,473 positive, 1,567 negative) of 256×256 pixels. 2,940 patches extracted from 16 WSIs were used for training (2,228 patches) and validation (712 patches); 1,100 patches extracted from 5 WSIs were used for testing. The Bucharest dataset contains 33 PDAC WSIs, from which we extracted 4,294 patches (2,909 positive, 1,385 negative) of 256×256 pixels. 3,094 patches extracted from 16 WSIs were used for training (2,294 patches) and validation (800 patches); 1,200 patches extracted from 7 WSIs were used for testing. The three U-Net variants evaluated in this paper are Inception U-Net, Vanilla U-Net, and Dense U-Net. Performance is evaluated using accuracy, the mean Dice coefficient, and the mean intersection over union (IoU), while mean epoch training time, mean evaluation time, and the number of parameters indicate by comparison the computational complexity of each segmentation model.
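The Dice coefficient and IoU named above have standard definitions for binary segmentation masks, sketched below in NumPy. The small eps smoothing term is an assumption added here to handle empty masks gracefully, not something the abstract specifies.

```python
import numpy as np


def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def iou(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |P ∩ T| / |P ∪ T| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)


# example: 2 overlapping pixels, 3 predicted, 2 true
p = np.array([[1, 1], [1, 0]])
t = np.array([[1, 1], [0, 0]])
# dice_coefficient(p, t) → 0.8; iou(p, t) → 2/3
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), but Dice weights the overlap more heavily, which is why both are commonly reported side by side.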
Results The results suggest that the Inception U-Net model, with its increased complexity, performed best on both datasets, with an accuracy of 97.82% and a mean IoU of 0.87 on the Craiova dataset, and an accuracy of 95.70% and a mean IoU of 0.79 on the Bucharest dataset. The Vanilla U-Net, the fastest to train and evaluate, also performed well in terms of accuracy and IoU on both datasets. Performance varies between the two datasets for all three segmentation models, owing to the high complexity, subjectivity, and variable quality of histological images. The trade-off between accuracy, complexity, and speed must be considered for histological samples of very large dimensions, which demand substantial time and hardware.
Conclusions In this study, we compared three U-Net architectures for the automatic detection of PDAC in EUS-FNB samples based on WSI scans. The tested U-Net architectures provide excellent results for PDAC histological image segmentation, and the mean Dice coefficient and mean IoU used as evaluation metrics reveal measurable performance differences between the U-Net models.
Conflicts of interest
The authors do not have any conflicts of interest to disclose.
Publication History
Article published online:
15 April 2024
© 2024. European Society of Gastrointestinal Endoscopy. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany