ROM-Pose: restoring occluded mask image for 2D human pose estimation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Yunju | - |
| dc.contributor.author | Kim, Jihie | - |
| dc.date.accessioned | 2025-06-12T05:41:50Z | - |
| dc.date.available | 2025-06-12T05:41:50Z | - |
| dc.date.issued | 2025-05 | - |
| dc.identifier.issn | 2376-5992 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/58430 | - |
| dc.description.abstract | Human pose estimation (HPE) is a field focused on estimating human poses by detecting key points in images. HPE methods follow either a top-down or a bottom-up approach. The top-down approach uses a two-stage process, first localizing each person with a bounding box and then detecting key points within it, whereas the bottom-up approach directly detects individual key points and groups them to estimate each pose. In this article, we address bounding box detection inaccuracies that arise in certain situations with the top-down method. The detected bounding boxes, which serve as input to the model, directly affect the accuracy of pose estimation. Occlusions, in which part of the target's body is obscured by another person or an object, hinder the detector's ability to produce complete bounding boxes. Consequently, the resulting boxes omit occluded parts, which are therefore excluded from the input to the HPE model. To mitigate this issue, we introduce Restoring Occluded Mask Image for 2D Human Pose Estimation (ROM-Pose), comprising a restoration model and an HPE model. The restoration model is designed to delineate the boundary between the target's grayscale mask (the occludee image) and the blocker's grayscale mask (the occluder image) using the specially created Whole Common Objects in Context (COCO) dataset. After identifying the boundary, the restoration model restores the occluded region, and the restored image is overlaid onto the RGB image used as input to the HPE model. Because the input now contains information about the occluded parts, the detected bounding box covers these areas, improving the HPE model's ability to recognize them. ROM-Pose achieved a 1.6% improvement in average precision (AP) over the baseline. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | PEERJ INC | - |
| dc.title | ROM-Pose: restoring occluded mask image for 2D human pose estimation | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.doi | 10.7717/peerj-cs.2843 | - |
| dc.identifier.scopusid | 2-s2.0-105005184174 | - |
| dc.identifier.wosid | 001488650500001 | - |
| dc.identifier.bibliographicCitation | PeerJ Computer Science, v.11 | - |
| dc.citation.title | PeerJ Computer Science | - |
| dc.citation.volume | 11 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.subject.keywordAuthor | Human pose estimation | - |
| dc.subject.keywordAuthor | Estimation | - |
| dc.subject.keywordAuthor | Segmentation | - |
| dc.subject.keywordAuthor | Restoration | - |
| dc.subject.keywordAuthor | Amodal instance segmentation | - |
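The abstract describes overlaying the restored grayscale mask onto the RGB image before person detection, so that occluded body regions are visible to the detector. A minimal NumPy sketch of that overlay step is shown below; the function name, the alpha-blending scheme, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def overlay_restored_mask(rgb, restored_mask, alpha=0.5):
    """Blend a restored grayscale occlusion mask onto an RGB image.

    rgb: (H, W, 3) uint8 image; restored_mask: (H, W) uint8 mask whose
    nonzero pixels mark restored (previously occluded) body regions.
    The blended result is intended as input to a downstream person
    detector / HPE model. (Hypothetical sketch, not the paper's code.)
    """
    out = rgb.astype(np.float32)
    # Boolean mask broadcast across the three color channels.
    region = np.repeat((restored_mask > 0)[..., None], 3, axis=2)
    # Grayscale mask replicated to 3 channels for blending.
    gray3 = np.repeat(restored_mask[..., None], 3, axis=2).astype(np.float32)
    # Alpha-blend only inside the restored region; leave the rest untouched.
    out = np.where(region, (1.0 - alpha) * out + alpha * gray3, out)
    return out.astype(np.uint8)


# Toy example: uniform gray image, mask covering the top-left 2x2 block.
rgb = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 200
blended = overlay_restored_mask(rgb, mask, alpha=0.5)
print(blended[0, 0])  # inside mask: 0.5*100 + 0.5*200 = [150 150 150]
print(blended[3, 3])  # outside mask: unchanged [100 100 100]
```

In a full pipeline the blended image, rather than the raw RGB frame, would be passed to the bounding-box detector, so boxes extend over the restored occluded areas.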
