Systematic Integration of Attention Modules into CNNs for Accurate and Generalizable Medical Image Classification
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ullah, Zahid | - |
| dc.contributor.author | Hong, Minki | - |
| dc.contributor.author | Mahmood, Tahir | - |
| dc.contributor.author | Kim, Jihie | - |
| dc.date.accessioned | 2025-12-10T03:01:17Z | - |
| dc.date.available | 2025-12-10T03:01:17Z | - |
| dc.date.issued | 2025-11 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/62284 | - |
| dc.description.abstract | Deep learning has demonstrated significant promise in medical image analysis; however, standard CNNs frequently encounter challenges in detecting subtle and intricate features vital for accurate diagnosis. To address this limitation, we systematically integrated attention mechanisms into five commonly used CNN backbones: VGG16, ResNet18, InceptionV3, DenseNet121, and EfficientNetB5. Each network was modified using either a Squeeze-and-Excitation block or a hybrid Convolutional Block Attention Module, allowing for more effective recalibration of channel and spatial features. We evaluated these attention-augmented models on two distinct datasets: (1) a Products of Conception histopathological dataset containing four tissue categories, and (2) a brain tumor MRI dataset that includes multiple tumor subtypes. Across both datasets, networks enhanced with attention mechanisms consistently outperformed their baseline counterparts on all measured evaluation criteria. Importantly, EfficientNetB5 with hybrid attention achieved superior overall results, with notable enhancements in both accuracy and generalizability. In addition to improved classification outcomes, the inclusion of attention mechanisms also advanced feature localization, thereby increasing robustness across a range of imaging modalities. Our study established a comprehensive framework for incorporating attention modules into diverse CNN architectures and delineated their impact on medical image classification. These results provide important insights for the development of interpretable and clinically robust deep learning-driven diagnostic systems. | - |
| dc.format.extent | 27 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Systematic Integration of Attention Modules into CNNs for Accurate and Generalizable Medical Image Classification | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math13223728 | - |
| dc.identifier.scopusid | 2-s2.0-105023208656 | - |
| dc.identifier.wosid | 001624131300001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.13, no.22, pp 1 - 27 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 13 | - |
| dc.citation.number | 22 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 27 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | squeeze and excitation | - |
| dc.subject.keywordAuthor | attention mechanism | - |
| dc.subject.keywordAuthor | convolutional neural networks | - |
| dc.subject.keywordAuthor | medical image classification | - |
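The abstract describes recalibrating channel features with a Squeeze-and-Excitation block. As a minimal NumPy sketch (not the authors' code; the weight matrices `w1`/`w2` and reduction ratio are illustrative assumptions), the SE operation pools each channel to a scalar, passes the result through a small bottleneck MLP, and rescales the feature map by the resulting per-channel gates:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation recalibration of a (N, C, H, W) feature map.
    w1: (C, C//r) and w2: (C//r, C) are illustrative bottleneck MLP weights."""
    s = x.mean(axis=(2, 3))                # squeeze: global average pool -> (N, C)
    e = np.maximum(s @ w1, 0.0)            # excitation: bottleneck + ReLU
    g = 1.0 / (1.0 + np.exp(-(e @ w2)))    # sigmoid gates in (0, 1), one per channel
    return x * g[:, :, None, None]         # channel-wise rescaling

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16, 4, 4))     # toy feature map: batch 2, 16 channels
w1 = rng.standard_normal((16, 4)) * 0.1    # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((4, 16)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (2, 16, 4, 4)
```

The hybrid CBAM variant mentioned in the abstract additionally applies an analogous spatial gate after the channel gate; this sketch covers only the channel half.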
