Cited 6 times
ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of Feature Maps
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Dong Seop | - |
| dc.contributor.author | Arsalan, Muhammad | - |
| dc.contributor.author | Owais, Muhammad | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-08-08T06:01:27Z | - |
| dc.date.available | 2024-08-08T06:01:27Z | - |
| dc.date.issued | 2020 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/18742 | - |
| dc.description.abstract | Semantic segmentation performs pixel-level classification of multiple classes in the input image. Previous studies on semantic segmentation have used various methods such as multi-scale images, encoder-decoder architectures, attention, spatial pyramid pooling, conditional random fields, and generative models. However, contexts of various sizes and types in diverse environments limit their performance in robustly detecting and classifying objects. To address this problem, we propose an enhanced semantic segmentation network (ESSN) that is robust to various objects, contexts, and environments. The ESSN extracts multi-scale information well by concatenating the residual feature maps with various receptive fields extracted from sequential convolution blocks, and it improves semantic segmentation performance without additional modules such as losses or attention during the training process. We performed experiments with two open databases, the Stanford background dataset (SBD) and the Cambridge-driving labeled video database (CamVid). Experimental results demonstrated a pixel accuracy of 92.74%, class accuracy of 79.66%, and mIoU of 71.67% with CamVid, and a pixel accuracy of 87.46%, class accuracy of 81.51%, and mIoU of 71.56% with SBD, which are higher than those of existing state-of-the-art methods. In addition, the average processing times were 31.12 ms and 92.46 ms on a desktop computer and the Jetson TX2 embedded system, respectively, confirming that ESSN is applicable both to desktop computers and to the Jetson TX2 embedded system, which is widely used in autonomous vehicles. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of Feature Maps | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2020.2969442 | - |
| dc.identifier.scopusid | 2-s2.0-85079331754 | - |
| dc.identifier.wosid | 000525391900061 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.8, pp 21363 - 21379 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 8 | - |
| dc.citation.startPage | 21363 | - |
| dc.citation.endPage | 21379 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | RECOGNITION | - |
| dc.subject.keywordPlus | MODEL | - |
| dc.subject.keywordAuthor | Semantic segmentation | - |
| dc.subject.keywordAuthor | pixel-level classification | - |
| dc.subject.keywordAuthor | residual concatenation of feature maps | - |
| dc.subject.keywordAuthor | sequential convolution blocks | - |
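The abstract reports pixel accuracy, class accuracy, and mean intersection-over-union (mIoU), which are standard semantic-segmentation metrics computed from a confusion matrix over all pixels. As a minimal sketch (not code from the paper; the function name and argument layout are illustrative), these three scores can be derived from flattened prediction and ground-truth label arrays like so:

```python
def segmentation_metrics(pred, gt, num_classes):
    """Compute (pixel accuracy, mean class accuracy, mIoU)
    from flattened per-pixel label lists."""
    # Confusion matrix: rows = ground-truth class, cols = predicted class.
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, g in zip(pred, gt):
        cm[g][p] += 1

    total = sum(sum(row) for row in cm)
    correct = sum(cm[c][c] for c in range(num_classes))
    pixel_acc = correct / total

    class_accs, ious = [], []
    for c in range(num_classes):
        tp = cm[c][c]                                   # true positives for class c
        gt_c = sum(cm[c])                               # pixels labeled c in ground truth
        pred_c = sum(cm[r][c] for r in range(num_classes))  # pixels predicted as c
        if gt_c:
            class_accs.append(tp / gt_c)                # per-class recall
        union = gt_c + pred_c - tp
        if union:
            ious.append(tp / union)                     # intersection over union

    return pixel_acc, sum(class_accs) / len(class_accs), sum(ious) / len(ious)
```

Class accuracy averages the per-class recall so that rare classes count as much as frequent ones, while mIoU additionally penalizes false positives, which is why it is typically the lowest of the three figures, as in the results quoted above.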