Detailed Information

Cited 0 times in Web of Science; cited 1 time in Scopus

DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances

Full metadata record
DC Field: Value
dc.contributor.author: Park, Jiho
dc.contributor.author: Park, Kwangryeol
dc.contributor.author: Kim, Dongho
dc.date.accessioned: 2024-09-26T16:00:49Z
dc.date.available: 2024-09-26T16:00:49Z
dc.date.issued: 2023-12
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/25729
dc.description.abstract: Constructing diverse and complex multi-modal datasets is crucial for advancing human action analysis research: such datasets provide ground truth annotations for training deep learning networks and enable the development of models that are robust across real-world scenarios. Generating natural and contextually appropriate nonverbal gestures is essential for immersive and effective human-computer interaction in applications such as video games, embodied virtual assistants, and conversations within a metaverse. However, existing speech-related human datasets focus on style transfer, which makes them unsuitable for 3D human action analysis tasks such as human action recognition and generation. Therefore, we introduce DGU-HAU, a novel multi-modal dataset of 3D human actions on utterances that commonly occur in daily life. We validate the dataset using Action2Motion (A2M), a state-of-the-art 3D human action generation model.
dc.format.extent: 15
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/electronics12234793
dc.identifier.scopusid: 2-s2.0-85179327882
dc.identifier.wosid: 001116054700001
dc.identifier.bibliographicCitation: Electronics, v.12, no.23, pp. 1-15
dc.citation.title: Electronics
dc.citation.volume: 12
dc.citation.number: 23
dc.citation.startPage: 1
dc.citation.endPage: 15
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.subject.keywordAuthor: 3D human action analysis
dc.subject.keywordAuthor: human activity understanding
dc.subject.keywordAuthor: motion capture
dc.subject.keywordAuthor: multi-modal dataset
dc.subject.keywordAuthor: utterance dataset
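The Dublin Core fields above can be mapped to a BibTeX entry for use in a reference manager. A minimal sketch, assuming a hand-written field mapping (the `dc` dictionary and citation key below are illustrative, not part of the record):

```python
# Dublin Core fields copied from the record above (illustrative subset).
dc = {
    "dc.contributor.author": ["Park, Jiho", "Park, Kwangryeol", "Kim, Dongho"],
    "dc.title": "DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances",
    "dc.citation.title": "Electronics",
    "dc.citation.volume": "12",
    "dc.citation.number": "23",
    "dc.citation.startPage": "1",
    "dc.citation.endPage": "15",
    "dc.date.issued": "2023-12",
    "dc.identifier.doi": "10.3390/electronics12234793",
}

def to_bibtex(dc, key="park2023dguhau"):
    """Render the DC fields as a BibTeX @article entry."""
    authors = " and ".join(dc["dc.contributor.author"])
    return (
        f"@article{{{key},\n"
        f"  author  = {{{authors}}},\n"
        f"  title   = {{{dc['dc.title']}}},\n"
        f"  journal = {{{dc['dc.citation.title']}}},\n"
        f"  volume  = {{{dc['dc.citation.volume']}}},\n"
        f"  number  = {{{dc['dc.citation.number']}}},\n"
        f"  pages   = {{{dc['dc.citation.startPage']}--{dc['dc.citation.endPage']}}},\n"
        f"  year    = {{{dc['dc.date.issued'][:4]}}},\n"  # keep year only
        f"  doi     = {{{dc['dc.identifier.doi']}}},\n"
        f"}}"
    )

print(to_bibtex(dc))
```

The same mapping works for any record on this repository that exposes the standard `dc.citation.*` fields.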
Files in This Item
There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Dong Ho
Software Education Institute
