Semantic-Guided Spatial and Temporal Fusion Framework for Enhancing Monocular Video Depth Estimation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Hyunsu | - |
| dc.contributor.author | Lee, Yeongseop | - |
| dc.contributor.author | Ko, Hyunseong | - |
| dc.contributor.author | Jeong, Junho | - |
| dc.contributor.author | Son, Yunsik | - |
| dc.date.accessioned | 2026-01-20T01:30:16Z | - |
| dc.date.available | 2026-01-20T01:30:16Z | - |
| dc.date.issued | 2026-01 | - |
| dc.identifier.issn | 2076-3417 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63469 | - |
| dc.description.abstract | Despite advancements in deep learning-based Monocular Depth Estimation (MDE), applying these models to video sequences remains challenging due to geometric ambiguities in texture-less regions and temporal instability caused by independent per-frame inference. To address these limitations, we propose STF-Depth, a novel post-processing framework that enhances depth quality by logically fusing heterogeneous information (geometric, semantic, and panoptic) without requiring additional retraining. Our approach introduces a robust RANSAC-based Vanishing Point Estimation to guide Dynamic Depth Gradient Correction for background separation, alongside Adaptive Instance Re-ordering to clarify occlusion relationships. Experimental results on the KITTI, NYU Depth V2, and TartanAir datasets demonstrate that STF-Depth functions as a universal plug-and-play module. Notably, it achieved a 25.7% reduction in Absolute Relative error (AbsRel) and significantly enhanced temporal consistency compared to state-of-the-art backbone models. These findings confirm the framework's practicality for real-world applications requiring geometric precision and video stability, such as autonomous driving, robotics, and augmented reality (AR). | - |
| dc.format.extent | 26 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Semantic-Guided Spatial and Temporal Fusion Framework for Enhancing Monocular Video Depth Estimation | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/app16010212 | - |
| dc.identifier.scopusid | 2-s2.0-105027319326 | - |
| dc.identifier.wosid | 001657163400001 | - |
| dc.identifier.bibliographicCitation | Applied Sciences, v.16, no.1, pp 1 - 26 | - |
| dc.citation.title | Applied Sciences | - |
| dc.citation.volume | 16 | - |
| dc.citation.number | 1 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 26 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Materials Science | - |
| dc.relation.journalResearchArea | Physics | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
| dc.subject.keywordAuthor | monocular video depth estimation | - |
| dc.subject.keywordAuthor | heterogeneous information fusion | - |
| dc.subject.keywordAuthor | temporal consistency | - |
| dc.subject.keywordAuthor | semantic and panoptic segmentation | - |
| dc.subject.keywordAuthor | vanishing point estimation | - |
