Cited 12 times
Depth completion for kinect v2 sensor
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, Wanbin | - |
| dc.contributor.author | Anh Vu Le | - |
| dc.contributor.author | Yun, Seokmin | - |
| dc.contributor.author | Jung, Seung-Won | - |
| dc.contributor.author | Won, Chee Sun | - |
| dc.date.accessioned | 2024-09-25T02:30:59Z | - |
| dc.date.available | 2024-09-25T02:30:59Z | - |
| dc.date.issued | 2017-02 | - |
| dc.identifier.issn | 1380-7501 | - |
| dc.identifier.issn | 1573-7721 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/23294 | - |
| dc.description.abstract | Kinect v2 adopts a time-of-flight (ToF) depth sensing mechanism, which causes different types of depth artifacts compared to the original Kinect v1. The goal of this paper is to propose a depth completion method designed especially for the Kinect v2 depth artifacts. Observing the specific types of depth errors in the Kinect v2, such as thin hole-lines along object boundaries and a new type of hole in the image corners, we exploit the position information of the color edges extracted from the Kinect v2 sensor to guide accurate hole-filling around the object boundaries. Since our approach requires a precise registration between color and depth images, we also introduce a transformation matrix that yields point-to-point correspondence with pixel accuracy. Experimental results demonstrate the effectiveness of the proposed depth image completion algorithm for the Kinect v2 in terms of completion accuracy and execution time. | - |
| dc.format.extent | 24 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | SPRINGER | - |
| dc.title | Depth completion for kinect v2 sensor | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1007/s11042-016-3523-y | - |
| dc.identifier.scopusid | 2-s2.0-84963647210 | - |
| dc.identifier.wosid | 000396051200057 | - |
| dc.identifier.bibliographicCitation | MULTIMEDIA TOOLS AND APPLICATIONS, v.76, no.3, pp 4357 - 4380 | - |
| dc.citation.title | MULTIMEDIA TOOLS AND APPLICATIONS | - |
| dc.citation.volume | 76 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 4357 | - |
| dc.citation.endPage | 4380 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordAuthor | Kinect v2 | - |
| dc.subject.keywordAuthor | Hole-filling | - |
| dc.subject.keywordAuthor | Depth completion | - |
| dc.subject.keywordAuthor | Depth and color fusion | - |
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
