Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction (Open Access)
- Authors
- Kim, Hyunsu; Son, Yunsik
- Issue Date
- Sep-2025
- Publisher
- MDPI
- Keywords
- multi-view data synthesis; human action recognition; human mesh recovery; 3D scene reconstruction; temporal consistency; data augmentation
- Citation
- Applied Sciences, v.15, no.19, pp. 1-22
- Pages
- 22
- Indexed
- SCIE; SCOPUS
- Journal Title
- Applied Sciences
- Volume
- 15
- Number
- 19
- Start Page
- 1
- End Page
- 22
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/61903
- DOI
- 10.3390/app151910372
- ISSN
- 2076-3417
- Abstract
- Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models.
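The fidelity metrics reported in the abstract, Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE), can be sketched as below. This is a minimal NumPy illustration of the standard definitions, not the authors' implementation; the function names and the (frames, joints, dims) array layout are assumptions.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: Euclidean distance between
    predicted and ground-truth joint positions, averaged over all
    joints and frames. Arrays have shape (frames, joints, dims),
    where dims is 2 for image space or 3 for world space."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def rmse(pred, gt):
    """Root Mean Square Error over all coordinate components."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))
```

Under these definitions, MPJPE penalizes per-joint displacement directly, while RMSE aggregates squared error over individual coordinates, which is why the two reported values (e.g., 2D RMSE 0.172 vs. MPJPE 0.202) differ for the same predictions.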
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.