Detailed Information


MVS-GS: High-Quality 3D Gaussian Splatting Mapping via Online Multi-View Stereo (open access)

Authors
Lee, Byeonggwon; Park, Junkyu; Giang, Khang Truong; Jo, Sungho; Song, Soohwan
Issue Date
2025
Publisher
IEEE
Keywords
Three-dimensional displays; Rendering (computer graphics); Neural radiance field; Simultaneous localization and mapping; Solid modeling; Real-time systems; Depth measurement; Accuracy; Image reconstruction; Computational modeling; Online multi-view stereo; 3D Gaussian splatting; neural rendering; dense SLAM; 3D modeling; depth estimation
Citation
IEEE Access, v.13, pp. 111441-111453
Pages
13
Indexed
SCIE
SCOPUS
Journal Title
IEEE Access
Volume
13
Start Page
111441
End Page
111453
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/58664
DOI
10.1109/ACCESS.2025.3583156
ISSN
2169-3536
Abstract
This study addresses the challenge of online 3D model generation for neural rendering using an RGB image stream. Previous research has tackled this issue by incorporating Neural Radiance Fields (NeRF) or 3D Gaussian Splatting (3DGS) as scene representations within dense SLAM methods. However, most studies focus primarily on estimating coarse 3D scenes rather than achieving detailed reconstructions. Moreover, depth estimation based solely on images is often ambiguous, resulting in low-quality 3D models that lead to inaccurate renderings. To overcome these limitations, we propose a novel framework for high-quality 3DGS modeling that leverages an online multi-view stereo (MVS) approach. Our method estimates MVS depth using sequential frames from a local time window and applies comprehensive depth refinement techniques to filter out outliers. The refinement method produces temporally consistent depths by checking sequential geometric consistency, enabling accurate initialization of Gaussians in 3DGS. Furthermore, we introduce a parallelized backend module that optimizes the 3DGS model efficiently, ensuring timely updates with each new keyframe. Experimental results demonstrate that our method outperforms state-of-the-art dense SLAM methods, achieving an average PSNR improvement of approximately 2 dB on indoor scenes. Moreover, our method reliably produces consistent 3D models in complex outdoor scenes, where existing methods often fail due to tracking errors and depth noise. It also reconstructs large-scale aerial scenes effectively, achieving an average PSNR gain of about 10.28 dB over existing methods.
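The sequential geometric-consistency check described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: function names, the nearest-neighbor depth lookup, and the voting threshold are assumptions. The idea is that a reference-frame depth is kept only if, when its 3D point is projected into neighboring frames of the local time window, those frames' own depth estimates agree with the projected depth.

```python
import numpy as np

def check_geometric_consistency(depth_ref, depth_src, K, T_src_ref, rel_tol=0.01):
    """Flag reference-depth pixels that are consistent with a source view.

    A pixel passes if, after back-projecting it to 3D and projecting into the
    source view, the source view's depth at that pixel agrees with the
    projected depth within a relative tolerance. (Hypothetical sketch.)
    """
    h, w = depth_ref.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project reference pixels to 3D points in the reference camera frame.
    z = depth_ref
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1)      # H x W x 4
    # Transform the points into the source camera frame (T_src_ref is 4x4).
    pts_src = pts @ T_src_ref.T
    z_src = pts_src[..., 2]
    z_safe = np.where(z_src > 0, z_src, 1.0)  # avoid division by zero
    # Project into the source image (pinhole model, nearest-neighbor lookup).
    ui = np.round(K[0, 0] * pts_src[..., 0] / z_safe + K[0, 2]).astype(int)
    vi = np.round(K[1, 1] * pts_src[..., 1] / z_safe + K[1, 2]).astype(int)
    in_bounds = (z_src > 0) & (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
    mask = np.zeros((h, w), dtype=bool)
    d_obs = depth_src[vi[in_bounds], ui[in_bounds]]  # depth the source view measured
    d_proj = z_src[in_bounds]                        # depth the projection predicts
    mask[in_bounds] = np.abs(d_obs - d_proj) < rel_tol * d_proj
    return mask

def filter_depth(depth_ref, neighbors, K, min_consistent=2):
    """Invalidate depths not confirmed by enough neighboring frames.

    `neighbors` is a list of (depth_src, T_src_ref) pairs from the local
    time window; pixels with too few consistency votes are zeroed out.
    """
    votes = sum(check_geometric_consistency(depth_ref, d, K, T)
                for d, T in neighbors)
    out = depth_ref.copy()
    out[votes < min_consistent] = 0.0  # 0 marks an invalid/filtered depth
    return out
```

Filtering out depths that fail this cross-view vote is what yields the temporally consistent depth maps the abstract credits with enabling accurate Gaussian initialization; the actual system applies additional refinement steps beyond this single check.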
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Song, Soo Hwan
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)