Semantic-Guided Spatial and Temporal Fusion Framework for Enhancing Monocular Video Depth Estimation (Open Access)
- Authors
- Kim, Hyunsu; Lee, Yeongseop; Ko, Hyunseong; Jeong, Junho; Son, Yunsik
- Issue Date
- Jan-2026
- Publisher
- MDPI
- Keywords
- monocular video depth estimation; heterogeneous information fusion; temporal consistency; semantic and panoptic segmentation; vanishing point estimation
- Citation
- Applied Sciences, v.16, no.1, pp. 1-26
- Pages
- 26
- Indexed
- SCIE; SCOPUS
- Journal Title
- Applied Sciences
- Volume
- 16
- Number
- 1
- Start Page
- 1
- End Page
- 26
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/63469
- DOI
- 10.3390/app16010212
- ISSN
- 2076-3417
- Abstract
- Despite advancements in deep learning-based Monocular Depth Estimation (MDE), applying these models to video sequences remains challenging due to geometric ambiguities in texture-less regions and temporal instability caused by independent per-frame inference. To address these limitations, we propose STF-Depth, a novel post-processing framework that enhances depth quality by logically fusing heterogeneous information (geometric, semantic, and panoptic) without requiring additional retraining. Our approach introduces a robust RANSAC-based Vanishing Point Estimation to guide Dynamic Depth Gradient Correction for background separation, alongside Adaptive Instance Re-ordering to clarify occlusion relationships. Experimental results on the KITTI, NYU Depth V2, and TartanAir datasets demonstrate that STF-Depth functions as a universal plug-and-play module. Notably, it achieved a 25.7% reduction in Absolute Relative error (AbsRel) and significantly enhanced temporal consistency compared to state-of-the-art backbone models. These findings confirm the framework's practicality for real-world applications requiring geometric precision and video stability, such as autonomous driving, robotics, and augmented reality (AR).
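The abstract's "robust RANSAC-based Vanishing Point Estimation" can be illustrated with a minimal sketch: sample two line segments, intersect their homogeneous lines to get a candidate vanishing point, and count segments that point toward it. This is an illustrative assumption about the general technique, not the authors' implementation; the function name, iteration count, and angular threshold are hypothetical.

```python
import numpy as np

def ransac_vanishing_point(segments, iters=500, thresh_deg=2.0, seed=0):
    """Estimate one vanishing point from 2D line segments via RANSAC.

    segments: iterable of ((x1, y1), (x2, y2)) endpoint pairs.
    Returns (vp_xy, inlier_count); vp_xy is None if no finite VP found.
    """
    rng = np.random.default_rng(seed)
    segs = np.asarray(segments, dtype=float)            # shape (N, 2, 2)
    dirs = segs[:, 1] - segs[:, 0]
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    mids = segs.mean(axis=1)                             # segment midpoints
    # Homogeneous line through each segment's endpoints: l = p1 x p2.
    p1 = np.hstack([segs[:, 0], np.ones((len(segs), 1))])
    p2 = np.hstack([segs[:, 1], np.ones((len(segs), 1))])
    lines = np.cross(p1, p2)
    cos_thresh = np.cos(np.deg2rad(thresh_deg))
    best_vp, best_count = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(segs), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])   # intersection of the two lines
        if abs(vp[2]) < 1e-9:
            continue                        # parallel pair: VP at infinity
        vp_xy = vp[:2] / vp[2]
        # Inlier test: segment direction must align with midpoint-to-VP ray.
        to_vp = vp_xy - mids
        to_vp /= np.maximum(np.linalg.norm(to_vp, axis=1), 1e-12)[:, None]
        cos = np.abs((to_vp * dirs).sum(axis=1))
        count = int((cos > cos_thresh).sum())
        if count > best_count:
            best_vp, best_count = vp_xy, count
    return best_vp, best_count
```

In a depth post-processing setting, the estimated point would then anchor a depth gradient prior (near the camera at the image bottom, far toward the vanishing point), but that correction step is specific to STF-Depth and not reproduced here.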
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - ETC > 1. Journal Articles
