Automated Detection and Grading of Renal Cell Carcinoma in Histopathological Images via Efficient Attention Transformer Network
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hissa Al-kuwari | - |
| dc.contributor.author | Belqes Alshami | - |
| dc.contributor.author | Aisha Al-Khinji | - |
| dc.contributor.author | Haider, Adnan | - |
| dc.contributor.author | Arsalan, Muhammad | - |
| dc.date.accessioned | 2025-12-10T03:00:48Z | - |
| dc.date.available | 2025-12-10T03:00:48Z | - |
| dc.date.issued | 2025-11 | - |
| dc.identifier.issn | 2076-3271 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/62254 | - |
| dc.description.abstract | Background: Renal Cell Carcinoma (RCC) is the most common type of kidney cancer and requires accurate histopathological grading for effective prognosis and treatment planning. However, manual grading is time-consuming, subjective, and susceptible to inter-observer variability. Objective: This study proposes EAT-Net (Efficient Attention Transformer Network), a dual-stream deep learning model designed to automate and enhance RCC grade classification from histopathological images. Method: EAT-Net integrates an EfficientNetB0 stream for local feature extraction and a Vision Transformer (ViT) stream for capturing global contextual dependencies. The architecture incorporates Squeeze-and-Excitation (SE) modules to recalibrate feature maps, improving focus on informative regions. The model was trained and evaluated on two publicly available datasets, KMC-RENAL and RCCG-Net. Standard preprocessing was applied, and the model's performance was assessed using accuracy, precision, recall, and F1-score. Results: EAT-Net achieved superior results compared to state-of-the-art models, with an accuracy of 92.25%, precision of 92.15%, recall of 92.12%, and F1-score of 92.25%. Ablation studies demonstrated the complementary value of the EfficientNet and ViT streams. Additionally, Grad-CAM visualizations confirmed that the model focuses on diagnostically relevant areas, supporting its interpretability and clinical relevance. Conclusion: EAT-Net offers an accurate and explainable framework for RCC grading. Its lightweight architecture and high performance make it well-suited for clinical deployment in digital pathology workflows. | - |
| dc.format.extent | 16 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Automated Detection and Grading of Renal Cell Carcinoma in Histopathological Images via Efficient Attention Transformer Network | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/medsci13040257 | - |
| dc.identifier.scopusid | 2-s2.0-105022808621 | - |
| dc.identifier.wosid | 001647054800001 | - |
| dc.identifier.bibliographicCitation | Medical Sciences, v.13, no.4, pp 1 - 16 | - |
| dc.citation.title | Medical Sciences | - |
| dc.citation.volume | 13 | - |
| dc.citation.number | 4 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 16 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | esci | - |
| dc.relation.journalResearchArea | General & Internal Medicine | - |
| dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | efficientNet | - |
| dc.subject.keywordAuthor | histopathology | - |
| dc.subject.keywordAuthor | medical image classification | - |
| dc.subject.keywordAuthor | renal cell carcinoma | - |
| dc.subject.keywordAuthor | vision transformer | - |
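The Squeeze-and-Excitation recalibration mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general SE mechanism, not the authors' implementation; the weight matrices `w1` and `w2` are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Recalibrate channel feature maps with an SE-style gate.

    feature_map: (C, H, W) array of channel activations.
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the bottleneck reduction ratio.
    Returns the feature map with each channel scaled by a gate in (0, 1).
    """
    # Squeeze: global average pooling collapses each channel to a scalar.
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP (ReLU) followed by a per-channel sigmoid.
    s = np.maximum(w1 @ z, 0.0)                  # shape (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # shape (C,)
    # Scale: reweight each channel by its gate.
    return feature_map * gate[:, None, None]
```

In EAT-Net, per the abstract, such modules sit inside the network to emphasize informative regions before the EfficientNetB0 and ViT streams' features are combined for grading.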
