Parametric Shape Estimation of Human Body Under Wide Clothing
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lu, Yucheng | - |
| dc.contributor.author | Cha, Jin-Hyuck | - |
| dc.contributor.author | Youm, Se-Kyoung | - |
| dc.contributor.author | Jung, Seung-Won | - |
| dc.date.accessioned | 2024-08-08T07:02:14Z | - |
| dc.date.available | 2024-08-08T07:02:14Z | - |
| dc.date.issued | 2021 | - |
| dc.identifier.issn | 1520-9210 | - |
| dc.identifier.issn | 1941-0077 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/19451 | - |
| dc.description.abstract | The shape of the human body plays an important role in many applications, such as those involving personal healthcare and virtual clothing try-ons. However, accurate body shape measurements typically require the user to be wearing a minimal amount of clothing, which is not practical in many situations. To resolve this issue using deep learning techniques, we need a paired dataset of ground-truth naked human body shapes and their corresponding color images with clothes. As it is practically impossible to collect enough of this kind of data from real-world environments to train a deep neural network, in this paper, we present the Synthetic dataset of Human Avatars under wiDE gaRment (SHADER). The SHADER dataset consists of 300,000 paired ground-truth naked and dressed images of 1,500 synthetic humans with different body shapes, poses, garments, skin tones, and backgrounds. To take full advantage of SHADER, we propose a novel silhouette confidence measure and show that our silhouette confidence prediction network can help improve the performance of state-of-the-art shape estimation networks for human bodies under clothing. The experimental results demonstrate the effectiveness of the proposed approach. The code and dataset are available at https://github.com/YCL92/SHADER. | - |
| dc.format.extent | 13 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Parametric Shape Estimation of Human Body Under Wide Clothing | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/TMM.2020.3029941 | - |
| dc.identifier.scopusid | 2-s2.0-85118188258 | - |
| dc.identifier.wosid | 000709093100018 | - |
| dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON MULTIMEDIA, v.23, pp 3657 - 3669 | - |
| dc.citation.title | IEEE TRANSACTIONS ON MULTIMEDIA | - |
| dc.citation.volume | 23 | - |
| dc.citation.startPage | 3657 | - |
| dc.citation.endPage | 3669 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | HIP RATIO | - |
| dc.subject.keywordPlus | POSE | - |
| dc.subject.keywordAuthor | Shape | - |
| dc.subject.keywordAuthor | Clothing | - |
| dc.subject.keywordAuthor | Three-dimensional displays | - |
| dc.subject.keywordAuthor | Two dimensional displays | - |
| dc.subject.keywordAuthor | Biological system modeling | - |
| dc.subject.keywordAuthor | Pose estimation | - |
| dc.subject.keywordAuthor | Silhouette confidence | - |
| dc.subject.keywordAuthor | convolutional neural network | - |
| dc.subject.keywordAuthor | human shape estimation | - |
| dc.subject.keywordAuthor | synthetic dataset | - |
