Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction

Full metadata record
DC Field: Value
dc.contributor.author: Kim, Hyunsu
dc.contributor.author: Son, Yunsik
dc.date.accessioned: 2025-10-28T06:30:15Z
dc.date.available: 2025-10-28T06:30:15Z
dc.date.issued: 2025-09
dc.identifier.issn: 2076-3417
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/61903
dc.description.abstract: Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models.
dc.format.extent: 22
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/app151910372
dc.identifier.scopusid: 2-s2.0-105031673458
dc.identifier.wosid: 001593466900001
dc.identifier.bibliographicCitation: Applied Sciences, v.15, no.19, pp. 1-22
dc.citation.title: Applied Sciences
dc.citation.volume: 15
dc.citation.number: 19
dc.citation.startPage: 1
dc.citation.endPage: 22
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Chemistry
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Materials Science
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Chemistry, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Engineering, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Materials Science, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.subject.keywordAuthor: multi-view data synthesis
dc.subject.keywordAuthor: human action recognition
dc.subject.keywordAuthor: human mesh recovery
dc.subject.keywordAuthor: 3D scene reconstruction
dc.subject.keywordAuthor: temporal consistency
dc.subject.keywordAuthor: data augmentation
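The abstract describes per-frame HMR estimates smoothed with Kalman filtering, then evaluated with RMSE and MPJPE. Below is a minimal illustrative sketch of those two pieces, assuming a 1-D constant-velocity Kalman filter applied independently per joint coordinate and the standard metric definitions; the function names and noise parameters are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def kalman_smooth_1d(z, q=1e-3, r=1e-2):
    """Causal constant-velocity Kalman filter for one joint coordinate
    over time, reducing frame-to-frame jitter.
    z: (frames,) noisy measurements; q, r: process/measurement noise
    (illustrative defaults, not the paper's values)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position + velocity
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([float(z[0]), 0.0])         # initial state from first frame
    P = np.eye(2)
    out = np.empty(len(z), dtype=float)
    for t, zt in enumerate(z):
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with the new measurement
        y = zt - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out[t] = x[0]
    return out

def mpjpe(pred, ref):
    """Mean Per Joint Position Error: Euclidean distance between
    predicted and reference joints, averaged over frames and joints.
    pred, ref: (frames, joints, dims)."""
    return float(np.mean(np.linalg.norm(pred - ref, axis=-1)))

def rmse(pred, ref):
    """Root Mean Square Error over every coordinate."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))
```

In this sketch the filter would be run once per joint and per axis of the HMR output before the meshes are placed in the reconstructed scene; a single spike in an otherwise steady trajectory is attenuated rather than reproduced, which is the "jitter-free" behavior the abstract refers to.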
Files in This Item: There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Son, Yun Sik
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
