DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Jiho | - |
| dc.contributor.author | Park, Kwangryeol | - |
| dc.contributor.author | Kim, Dongho | - |
| dc.date.accessioned | 2024-09-26T16:00:49Z | - |
| dc.date.available | 2024-09-26T16:00:49Z | - |
| dc.date.issued | 2023-12 | - |
| dc.identifier.issn | 2079-9292 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/25729 | - |
| dc.description.abstract | Constructing diverse and complex multi-modal datasets is crucial for advancing human action analysis research: such datasets provide ground-truth annotations for training deep learning networks and enable the development of models that remain robust across real-world scenarios. Generating natural and contextually appropriate nonverbal gestures is essential for immersive and effective human-computer interaction in applications such as video games, embodied virtual assistants, and conversations within a metaverse. However, existing speech-related human datasets focus on style transfer and are therefore unsuitable for 3D human action analysis tasks such as human action recognition and generation. We therefore introduce DGU-HAU, a novel multi-modal dataset of 3D human actions on utterances that commonly occur in daily life, and validate it using Action2Motion (A2M), a state-of-the-art 3D human action generation model. | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/electronics12234793 | - |
| dc.identifier.scopusid | 2-s2.0-85179327882 | - |
| dc.identifier.wosid | 001116054700001 | - |
| dc.identifier.bibliographicCitation | Electronics, v.12, no.23, pp. 1-15 | - |
| dc.citation.title | Electronics | - |
| dc.citation.volume | 12 | - |
| dc.citation.number | 23 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 15 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Physics | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
| dc.subject.keywordAuthor | 3D human action analysis | - |
| dc.subject.keywordAuthor | human activity understanding | - |
| dc.subject.keywordAuthor | motion capture | - |
| dc.subject.keywordAuthor | multi-modal dataset | - |
| dc.subject.keywordAuthor | utterance dataset | - |