Semantic Segmentation of Aerial Imagery Using U-Net with Self-Attention and Separable Convolutions
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Khan, Bakht Alam | - |
| dc.contributor.author | Jung, Jin-Woo | - |
| dc.date.accessioned | 2024-08-08T12:01:01Z | - |
| dc.date.available | 2024-08-08T12:01:01Z | - |
| dc.date.issued | 2024-05 | - |
| dc.identifier.issn | 2076-3417 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21965 | - |
| dc.description.abstract | This research addresses the task of improving accuracy in the semantic segmentation of aerial imagery, which is essential for applications such as urban planning and environmental monitoring. The study uses the Intersection over Union (IoU) score as its evaluation metric and augments the dataset with the Patchify library, tiling images into 256 × 256 patches before splitting them into training and testing sets. The core of the investigation is a novel architecture that combines a U-Net framework with self-attention mechanisms and separable convolutions: self-attention improves the model's grasp of image context, while separable convolutions speed up training and improve overall efficiency. The proposed model achieves 91% accuracy, surpassing the previous state-of-the-art Dense Plus U-Net at 86%. Visual comparisons of original patches, ground-truth masks, and predicted masks illustrate the model's segmentation quality, underscoring the value of these architectural elements for accurate and efficient aerial image analysis. (An illustrative code sketch of this pipeline appears after the metadata table below.) | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Semantic Segmentation of Aerial Imagery Using U-Net with Self-Attention and Separable Convolutions | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/app14093712 | - |
| dc.identifier.scopusid | 2-s2.0-85192786309 | - |
| dc.identifier.wosid | 001219846300001 | - |
| dc.identifier.bibliographicCitation | Applied Sciences, v.14, no.9, pp 1 - 15 | - |
| dc.citation.title | Applied Sciences | - |
| dc.citation.volume | 14 | - |
| dc.citation.number | 9 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 15 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Materials Science | - |
| dc.relation.journalResearchArea | Physics | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
| dc.subject.keywordPlus | RESOLUTION | - |
| dc.subject.keywordPlus | SATELLITE | - |
| dc.subject.keywordPlus | NETWORK | - |
| dc.subject.keywordAuthor | semantic segmentation | - |
| dc.subject.keywordAuthor | U-Net | - |
| dc.subject.keywordAuthor | self-attention | - |
| dc.subject.keywordAuthor | separable convolutions | - |
| dc.subject.keywordAuthor | aerial imagery | - |
| dc.subject.keywordAuthor | remote sensing | - |
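
Below is a minimal Python/TensorFlow sketch of the pipeline the abstract describes: Patchify tiling with a 256-pixel patch size, a U-Net-style encoder/decoder built from separable convolutions with a self-attention stage at the bottleneck, and a mean IoU metric. The filter widths, head count, class count, and the placement of the attention stage are illustrative assumptions, not the authors' published configuration.

```python
# Sketch, not the paper's exact model: layer sizes and attention placement
# are assumptions made for illustration.
import numpy as np
import tensorflow as tf
from patchify import patchify

PATCH = 256        # patch size reported in the abstract
NUM_CLASSES = 6    # assumption: a six-class aerial segmentation task

def make_patches(image: np.ndarray) -> np.ndarray:
    """Tile an H x W x 3 image into non-overlapping 256 x 256 patches."""
    tiles = patchify(image, (PATCH, PATCH, 3), step=PATCH)
    return tiles.reshape(-1, PATCH, PATCH, 3)

def sep_block(x, filters):
    """Two depthwise-separable convolutions (cheaper than standard Conv2D)."""
    x = tf.keras.layers.SeparableConv2D(filters, 3, padding="same",
                                        activation="relu")(x)
    x = tf.keras.layers.SeparableConv2D(filters, 3, padding="same",
                                        activation="relu")(x)
    return x

def self_attention(x, heads=4):
    """Multi-head self-attention over flattened spatial positions."""
    _, h, w, c = x.shape                    # static dims: fixed input size
    seq = tf.keras.layers.Reshape((h * w, c))(x)
    att = tf.keras.layers.MultiHeadAttention(num_heads=heads,
                                             key_dim=c // heads)(seq, seq)
    return tf.keras.layers.Reshape((h, w, c))(seq + att)  # residual link

def build_model():
    inp = tf.keras.Input((PATCH, PATCH, 3))
    # Encoder: separable-conv blocks with max pooling
    e1 = sep_block(inp, 32); p1 = tf.keras.layers.MaxPooling2D()(e1)
    e2 = sep_block(p1, 64);  p2 = tf.keras.layers.MaxPooling2D()(e2)
    e3 = sep_block(p2, 128); p3 = tf.keras.layers.MaxPooling2D()(e3)
    # Bottleneck with self-attention over the 32 x 32 feature grid
    b = self_attention(sep_block(p3, 256))
    # Decoder with U-Net skip connections
    up, cat = tf.keras.layers.UpSampling2D, tf.keras.layers.Concatenate
    d3 = sep_block(cat()([up()(b), e3]), 128)
    d2 = sep_block(cat()([up()(d3), e2]), 64)
    d1 = sep_block(cat()([up()(d2), e1]), 32)
    out = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(d1)
    return tf.keras.Model(inp, out)

def mean_iou(y_true, y_pred, num_classes=NUM_CLASSES):
    """Mean per-class IoU for integer label maps, the abstract's metric."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The separable-conv blocks reflect the abstract's efficiency claim: a depthwise-separable convolution factorizes a standard convolution into a per-channel spatial filter and a 1 × 1 pointwise mix, cutting parameters and compute, while the bottleneck attention gives each spatial position a view of the whole patch.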
