Cited 21 times
Prompt Deep Light-Weight Vessel Segmentation Network (PLVS-Net)
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Arsalan, Muhammad | - |
| dc.contributor.author | Khan, Tariq M. | - |
| dc.contributor.author | Naqvi, Syed Saud | - |
| dc.contributor.author | Nawaz, Mehmood | - |
| dc.contributor.author | Razzak, Imran | - |
| dc.date.accessioned | 2024-08-08T12:32:11Z | - |
| dc.date.available | 2024-08-08T12:32:11Z | - |
| dc.date.issued | 2023-03 | - |
| dc.identifier.issn | 1545-5963 | - |
| dc.identifier.issn | 1557-9964 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/22270 | - |
| dc.description.abstract | Achieving accurate retinal vessel segmentation is critical in the progression and diagnosis of vision-threatening diseases such as diabetic retinopathy and age-related macular degeneration. Existing vessel segmentation methods are based on encoder-decoder architectures, which frequently fail to take into account the context of the retinal vessel structure in their analysis. As a result, such methods have difficulty bridging the semantic gap between encoder and decoder features. This paper proposes a Prompt Deep Light-weight Vessel Segmentation Network (PLVS-Net) to address these issues by using prompt blocks. Each prompt block uses a combination of asymmetric kernel convolutions, depth-wise separable convolutions, and ordinary convolutions to extract useful features. This novel strategy improves the performance of the segmentation network while simultaneously decreasing the number of trainable parameters. Our method outperformed competing approaches in the literature on three benchmark datasets: DRIVE, STARE, and CHASE. | - |
| dc.format.extent | 9 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Prompt Deep Light-Weight Vessel Segmentation Network (PLVS-Net) | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/TCBB.2022.3211936 | - |
| dc.identifier.scopusid | 2-s2.0-85139874735 | - |
| dc.identifier.wosid | 000965674700053 | - |
| dc.identifier.bibliographicCitation | IEEE/ACM Transactions on Computational Biology and Bioinformatics, v.20, no.2, pp 1363 - 1371 | - |
| dc.citation.title | IEEE/ACM Transactions on Computational Biology and Bioinformatics | - |
| dc.citation.volume | 20 | - |
| dc.citation.number | 2 | - |
| dc.citation.startPage | 1363 | - |
| dc.citation.endPage | 1371 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Biochemistry & Molecular Biology | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Biochemical Research Methods | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.relation.journalWebOfScienceCategory | Mathematics, Interdisciplinary Applications | - |
| dc.relation.journalWebOfScienceCategory | Statistics & Probability | - |
| dc.subject.keywordPlus | NEURAL-NETWORK | - |
| dc.subject.keywordPlus | IMAGES | - |
| dc.subject.keywordAuthor | Image segmentation | - |
| dc.subject.keywordAuthor | Feature extraction | - |
| dc.subject.keywordAuthor | Diabetes | - |
| dc.subject.keywordAuthor | Training | - |
| dc.subject.keywordAuthor | Retinopathy | - |
| dc.subject.keywordAuthor | Retinal vessels | - |
| dc.subject.keywordAuthor | Kernel | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | light-weight deep network | - |
| dc.subject.keywordAuthor | retinal vessel segmentation | - |
| dc.subject.keywordAuthor | convolutional neural networks | - |
| dc.subject.keywordAuthor | diabetic retinopathy | - |
| dc.subject.keywordAuthor | medical image segmentation | - |
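The abstract's claim that mixing asymmetric kernel convolutions and depth-wise separable convolutions with ordinary convolutions reduces trainable parameters can be illustrated with a simple parameter count. The channel and kernel sizes below are assumed for illustration only, not taken from the paper's actual configuration:

```python
def conv_params(k_h, k_w, c_in, c_out, bias=True):
    """Parameter count of a standard 2-D convolution layer."""
    return k_h * k_w * c_in * c_out + (c_out if bias else 0)

def depthwise_separable_params(k, c_in, c_out, bias=True):
    """Depth-wise k x k conv (one filter per input channel),
    followed by a 1 x 1 point-wise conv that mixes channels."""
    depthwise = k * k * c_in + (c_in if bias else 0)
    pointwise = c_in * c_out + (c_out if bias else 0)
    return depthwise + pointwise

def asymmetric_params(k, c_in, c_out, bias=True):
    """A k x k conv factored into a 1 x k conv followed by a k x 1 conv."""
    return (conv_params(1, k, c_in, c_out, bias)
            + conv_params(k, 1, c_out, c_out, bias))

# Example: 64 -> 64 channels with a 3 x 3 receptive field.
c_in, c_out, k = 64, 64, 3
print(conv_params(k, k, c_in, c_out))               # standard:   36928
print(asymmetric_params(k, c_in, c_out))            # asymmetric: 24704
print(depthwise_separable_params(k, c_in, c_out))   # separable:   4800
```

Under these assumed sizes, the factored variants carry roughly 67% and 13% of the standard convolution's parameters, which is consistent with the light-weight design the abstract describes.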