Detailed Information

Cited 0 times in Web of Science · Cited 2 times in Scopus

DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis

Full metadata record
DC Field: Value
dc.contributor.author: Park, Jiho
dc.contributor.author: Kim, Junghye
dc.contributor.author: Gil, Yujung
dc.contributor.author: Kim, Dongho
dc.date.accessioned: 2024-09-26T21:01:37Z
dc.date.available: 2024-09-26T21:01:37Z
dc.date.issued: 2024-01
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/26284
dc.description.abstract: The availability of high-quality datasets is critical to 3D human action analysis research. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects), a novel multi-modality 3D human action dataset encompassing four data modalities with accompanying annotation data: motion capture, RGB video, image, and 3D object modeling data. It features 63 action classes involving interactions with 60 common items of furniture and electronic devices. Each action class comprises approximately 1,000 motion capture samples representing 3D skeleton data, with corresponding RGB video and 3D object modeling data, for a total of 67,505 motion capture samples. The dataset offers comprehensive 3D structural information about the human body, RGB images and videos, and point cloud data for the 60 objects, collected from 126 subjects to ensure inclusivity and account for diverse human body types. To validate the dataset, we leveraged MMNet, a 3D human action recognition model, achieving Top-1 accuracy of 91.51% and 92.29% using the skeleton joint and bone methods, respectively. Beyond human action recognition, this versatile dataset is valuable for a wide range of 3D human action analysis research.
dc.format.extent: 11
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/ACCESS.2024.3351888
dc.identifier.scopusid: 2-s2.0-85182381719
dc.identifier.wosid: 001145666600001
dc.identifier.bibliographicCitation: IEEE Access, v.12, pp. 8780-8790
dc.citation.title: IEEE Access
dc.citation.volume: 12
dc.citation.startPage: 8780
dc.citation.endPage: 8790
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.subject.keywordAuthor: 3D human action analysis
dc.subject.keywordAuthor: human action recognition
dc.subject.keywordAuthor: human activity understanding
dc.subject.keywordAuthor: motion capture
dc.subject.keywordAuthor: multi-modal dataset
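The abstract reports Top-1 accuracy for MMNet on the skeleton joint and bone streams. As a minimal sketch of how Top-1 accuracy is computed from per-class prediction scores (the function name and toy data below are illustrative, not part of the DGU-HAO release or the MMNet codebase):

```python
def top1_accuracy(scores, labels):
    """Fraction of samples whose highest-scoring class matches the true label.

    scores: list of per-class score lists, one per sample.
    labels: list of ground-truth class indices, one per sample.
    """
    correct = 0
    for per_class_scores, true_label in zip(scores, labels):
        # Index of the highest-scoring class for this sample.
        predicted = max(range(len(per_class_scores)),
                        key=per_class_scores.__getitem__)
        if predicted == true_label:
            correct += 1
    return correct / len(labels)

# Toy example: 4 samples, 3 classes; 3 of 4 predictions are correct.
scores = [
    [0.7, 0.2, 0.1],  # predicts class 0, true class 0 -> correct
    [0.1, 0.8, 0.1],  # predicts class 1, true class 1 -> correct
    [0.3, 0.3, 0.4],  # predicts class 2, true class 1 -> wrong
    [0.2, 0.1, 0.7],  # predicts class 2, true class 2 -> correct
]
labels = [0, 1, 1, 2]
print(top1_accuracy(scores, labels))  # 0.75
```

In a benchmark like the one reported here, the same computation would be run over all test samples, once per input stream (joint or bone).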
Files in This Item
There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Dong Ho
Software Education Institute
