DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Jiho | - |
| dc.contributor.author | Kim, Junghye | - |
| dc.contributor.author | Gil, Yujung | - |
| dc.contributor.author | Kim, Dongho | - |
| dc.date.accessioned | 2024-09-26T21:01:37Z | - |
| dc.date.available | 2024-09-26T21:01:37Z | - |
| dc.date.issued | 2024-01 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/26284 | - |
| dc.description.abstract | The availability of high-quality datasets is essential to 3D human action analysis research. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects), a novel 3D human action multi-modality dataset encompassing four data modalities with accompanying annotation data: motion capture, RGB video, image, and 3D object modeling data. It features 63 action classes involving interactions with 60 common furniture items and electronic devices. Each action class comprises approximately 1,000 motion capture samples representing 3D skeleton data, together with corresponding RGB video and 3D object modeling data, for 67,505 motion capture samples in total. The dataset provides comprehensive 3D structural information on the human body, RGB images and videos, and point cloud data for the 60 objects, collected from 126 subjects to ensure inclusivity and account for diverse body types. To validate the dataset, we leveraged MMNet, a 3D human action recognition model, achieving Top-1 accuracies of 91.51% and 92.29% using the skeleton joint and bone methods, respectively. Beyond human action recognition, this versatile dataset is valuable for a broad range of 3D human action analysis research. | - |
| dc.format.extent | 11 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2024.3351888 | - |
| dc.identifier.scopusid | 2-s2.0-85182381719 | - |
| dc.identifier.wosid | 001145666600001 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.12, pp. 8780-8790 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 12 | - |
| dc.citation.startPage | 8780 | - |
| dc.citation.endPage | 8790 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | 3D human action analysis | - |
| dc.subject.keywordAuthor | human action recognition | - |
| dc.subject.keywordAuthor | human activity understanding | - |
| dc.subject.keywordAuthor | motion capture | - |
| dc.subject.keywordAuthor | multi-modal dataset | - |