Detailed Information


AI-powered hierarchical classification of ampullary neoplasms: a deep learning approach using white-light and narrow-band imaging (open access)

Authors
Yoon, Dan; Chang, Sung Hoon; Paik, Woo Hyun; Kim, Chang Hyun; Kim, Byeong Soo; Kim, Young Gyun; Chung, Hyunsoo; Ryu, Ji Kon; Lee, Sang Hyub; Cho, In Rae; Choi, Seong Ji; Kim, Joo Seong; Kim, Sungwan; Choi, Jin Ho
Issue Date
Jan-2026
Publisher
SPRINGER
Keywords
Ampulla of Vater neoplasm; Endoscopic images; Narrow-band imaging; Hierarchical classification; Deep learning
Citation
Surgical Endoscopy And Other Interventional Techniques
Indexed
SCIE
SCOPUS
Journal Title
Surgical Endoscopy And Other Interventional Techniques
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/63546
DOI
10.1007/s00464-025-12534-2
ISSN
0930-2794
1432-2218
Abstract
Background: Endoscopic diagnosis of Ampulla of Vater (AoV) lesions remains challenging owing to complex morphology and limited representative images, particularly for high-risk dysplastic lesions. This study aimed to develop a hierarchical deep learning framework for the stepwise classification of ampullary lesions using white-light (WL) and narrow-band imaging (NBI) endoscopic images.

Methods: The framework employs three sequential binary classifications: (1) normal vs. abnormal, (2) adenoma vs. cancer, and (3) high-grade dysplasia (HGD) vs. low-grade dysplasia (LGD) within adenomas. Each stage uses EfficientNet-B4 classifiers trained independently on WL and NBI images, and predictions are integrated using confidence-based voting. To overcome data scarcity and class imbalance, StyleGAN2-ADA was used to generate synthetic images of HGD and cancer. The hierarchical model was developed using 4244 endoscopic images from 464 patients collected at Seoul National University Hospital (2693/833/718 for train/validation/test).

Results: The hierarchical model achieved stage-specific accuracies of 95.6% (normal vs. abnormal), 94.4% (adenoma vs. cancer), and 92.7% (LGD vs. HGD), resulting in an overall diagnostic accuracy of 92.2%. The model demonstrated sensitivities of 83.3% for HGD and 87.5% for cancer, with specificities exceeding 98%. The confidence-based dual-modality approach (AUROC: 0.921) significantly outperformed single-modality approaches using WL alone (AUROC: 0.866) or NBI alone (AUROC: 0.895) by integrating their complementary diagnostic strengths. Generative adversarial network-based augmentation substantially improved sensitivity for cancer (from 87.5% to 91.7%) and HGD (from 83.3% to 86.5%), while overall accuracy increased from 94.5% to 95.1%.

Conclusions: A hierarchical deep learning approach integrating dual-modality imaging and synthetic data augmentation significantly improves diagnostic performance for ampullary lesions.
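The three-stage cascade with confidence-based dual-modality voting can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stage probabilities would come from the paper's EfficientNet-B4 classifiers, and the voting rule (keeping the prediction from the modality more confident at each stage, i.e., farther from the 0.5 decision boundary) and threshold are assumptions for the sketch.

```python
STAGES = ["normal_vs_abnormal", "adenoma_vs_cancer", "lgd_vs_hgd"]


def vote(p_wl: float, p_nbi: float) -> float:
    """Confidence-based voting (assumed rule): keep the probability from the
    modality whose prediction is farther from the 0.5 decision boundary."""
    return p_wl if abs(p_wl - 0.5) >= abs(p_nbi - 0.5) else p_nbi


def classify(probs_wl: dict, probs_nbi: dict, threshold: float = 0.5) -> str:
    """Run the hierarchical cascade for one lesion.

    probs_wl / probs_nbi map each stage name to that modality's predicted
    probability of the second-listed class (abnormal, cancer, HGD)."""
    # Stage 1: normal vs. abnormal
    p = vote(probs_wl["normal_vs_abnormal"], probs_nbi["normal_vs_abnormal"])
    if p < threshold:
        return "normal"
    # Stage 2: adenoma vs. cancer (only abnormal lesions reach this stage)
    p = vote(probs_wl["adenoma_vs_cancer"], probs_nbi["adenoma_vs_cancer"])
    if p >= threshold:
        return "cancer"
    # Stage 3: LGD vs. HGD within adenomas
    p = vote(probs_wl["lgd_vs_hgd"], probs_nbi["lgd_vs_hgd"])
    return "hgd" if p >= threshold else "lgd"


if __name__ == "__main__":
    # Hypothetical per-stage probabilities for one WL/NBI image pair.
    wl = {"normal_vs_abnormal": 0.9, "adenoma_vs_cancer": 0.3, "lgd_vs_hgd": 0.8}
    nbi = {"normal_vs_abnormal": 0.95, "adenoma_vs_cancer": 0.4, "lgd_vs_hgd": 0.6}
    print(classify(wl, nbi))  # abnormal -> adenoma -> hgd
```

Because classification is sequential, an error at an early stage propagates downstream, which is why the abstract reports both stage-specific and overall accuracy.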
Files in This Item
There are no files associated with this item.
Appears in
Collections
Graduate School > Department of Medicine > 1. Journal Articles


