Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Hyunsu | - |
| dc.contributor.author | Son, Yunsik | - |
| dc.date.accessioned | 2025-10-28T06:30:15Z | - |
| dc.date.available | 2025-10-28T06:30:15Z | - |
| dc.date.issued | 2025-09 | - |
| dc.identifier.issn | 2076-3417 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/61903 | - |
| dc.description.abstract | Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models. | - |
| dc.format.extent | 22 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/app151910372 | - |
| dc.identifier.scopusid | 2-s2.0-105031673458 | - |
| dc.identifier.wosid | 001593466900001 | - |
| dc.identifier.bibliographicCitation | Applied Sciences, v.15, no.19, pp. 1-22 | - |
| dc.citation.title | Applied Sciences | - |
| dc.citation.volume | 15 | - |
| dc.citation.number | 19 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 22 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Materials Science | - |
| dc.relation.journalResearchArea | Physics | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
| dc.subject.keywordAuthor | multi-view data synthesis | - |
| dc.subject.keywordAuthor | human action recognition | - |
| dc.subject.keywordAuthor | human mesh recovery | - |
| dc.subject.keywordAuthor | 3D scene reconstruction | - |
| dc.subject.keywordAuthor | temporal consistency | - |
| dc.subject.keywordAuthor | data augmentation | - |
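
The abstract above describes Kalman filtering for temporal smoothing of per-frame HMR predictions and reports fidelity with RMSE and MPJPE. The Python sketch below illustrates one plausible reading of those two pieces; it is not the authors' released code. The constant-velocity state model, the noise parameters `q` and `r`, the 24-joint layout, and the function names are assumptions made for illustration.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joints.
    pred, gt: float arrays of shape (frames, joints, dims), dims = 2 or 3."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def rmse(pred, gt):
    """Root Mean Square Error over all joint coordinates."""
    return np.sqrt(np.mean((pred - gt) ** 2))

def kalman_smooth_1d(z, q=1e-3, r=1e-2):
    """Smooth one scalar joint trajectory with a constant-velocity
    Kalman filter. q (process noise) and r (measurement noise) are
    hypothetical values chosen only for this sketch."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position += velocity
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([z[0], 0.0])                # start at first measurement, zero velocity
    P = np.eye(2)
    out = np.empty_like(z)
    for t, meas in enumerate(z):
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with the frame-t measurement
        y = meas - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out[t] = x[0]
    return out

# Usage: smooth each coordinate of a (frames, joints, 3) track, then
# score the smoothed track against the raw one with the paper's metrics.
track = np.random.rand(120, 24, 3)           # stand-in for per-frame HMR joints
smoothed = np.apply_along_axis(kalman_smooth_1d, 0, track)
print("MPJPE:", mpjpe(smoothed, track), "RMSE:", rmse(smoothed, track))
```

In the paper these metrics compare rendered novel-view poses against the original motion in both 2D and 3D space; the random array here merely stands in for real joint tracks.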
