Knowledge distillation for super-resolution reconstruction and segmentation in forward-facing camera images
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Yong Ho | - |
| dc.contributor.author | Ryu, Kyung Bong | - |
| dc.contributor.author | Jeong, Min Su | - |
| dc.contributor.author | Jeong, Seong In | - |
| dc.contributor.author | Song, Hyun Woo | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2026-03-04T02:30:19Z | - |
| dc.date.available | 2026-03-04T02:30:19Z | - |
| dc.date.issued | 2026-05 | - |
| dc.identifier.issn | 1568-4946 | - |
| dc.identifier.issn | 1872-9681 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63859 | - |
| dc.description.abstract | Semantic segmentation in vehicular vision faces a critical multi-objective challenge: maintaining high accuracy, especially for distant and small objects, while meeting the stringent low computational cost requirements of on-vehicle systems. We focused on improving perception accuracy in simulated adverse conditions where low-resolution (LR) images lead to poor recognition of small or far-distance objects. To address this, we propose knowledge distillation for super-resolution reconstruction and semantic segmentation (KD4SRSS), a novel end-to-end framework combining super-resolution reconstruction (SR) and semantic segmentation. KD4SRSS utilizes the proposed lightweight spatial boundary-aware network (SBANet), which requires only 147,312 parameters, and introduces the boundary-aware knowledge distillation (BAKD) method. BAKD efficiently transfers semantic and crucial boundary knowledge from a robust Teacher network to the Student SBANet, enabling boundary-centric SR at minimal computational expense. Experiments on the Cambridge-driving labeled video database (CamVid) and the Cityscapes mini-database (MiniCity) datasets confirm KD4SRSS's superior performance: it achieved mean intersection over union (mIoU) scores of 64.42 % and 35.79 %, respectively, representing a significant improvement of 3.02 % (CamVid) and 4.74 % (MiniCity) over the state-of-the-art (SOTA) baseline. This performance validates KD4SRSS as an optimal, robust solution for real-time applications in resource-constrained intelligent vehicle systems. © 2026 Elsevier B.V. | - |
| dc.format.extent | 31 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | Knowledge distillation for super-resolution reconstruction and segmentation in forward-facing camera images | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.asoc.2026.114860 | - |
| dc.identifier.scopusid | 2-s2.0-105030440659 | - |
| dc.identifier.wosid | 001696184800001 | - |
| dc.identifier.bibliographicCitation | Applied Soft Computing, v.193, pp 1 - 31 | - |
| dc.citation.title | Applied Soft Computing | - |
| dc.citation.volume | 193 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 31 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.subject.keywordAuthor | Forward-facing camera images | - |
| dc.subject.keywordAuthor | Knowledge distillation | - |
| dc.subject.keywordAuthor | Semantic segmentation | - |
| dc.subject.keywordAuthor | Spatial boundary-aware network | - |
| dc.subject.keywordAuthor | Super-resolution reconstruction | - |
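The abstract reports results as mean intersection over union (mIoU). As context for that metric, the sketch below computes mIoU over flattened per-pixel label sequences; the class count, the use of flat lists, and the convention of averaging only over classes present in the union are illustrative assumptions, not the paper's exact evaluation protocol.

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over flattened label sequences.

    Illustrative sketch only: averages IoU over classes with a
    non-empty union; not the paper's exact evaluation protocol.
    """
    ious = []
    for c in range(num_classes):
        # Per-class intersection and union of predicted vs. ground-truth pixels.
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

For example, `mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)` averages a class-0 IoU of 1/2 and a class-1 IoU of 2/3.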
