Real-time 3D scene modeling using dynamic billboard for remote robot control systems
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Phuong Minh Chu | - |
| dc.contributor.author | Cho, Seoungjae | - |
| dc.contributor.author | Hieu Trong Nguyen | - |
| dc.contributor.author | Sim, Sungdae | - |
| dc.contributor.author | Kwak, Kiho | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2024-09-26T13:30:44Z | - |
| dc.date.available | 2024-09-26T13:30:44Z | - |
| dc.date.issued | 2017-12-07 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/25181 | - |
| dc.description.abstract | In this paper, we present a combined two-step approach: a method for modeling three-dimensional scenes from a LiDAR point cloud, and a billboard calibration approach for remote mobile robot control applications. First, by projecting a local three-dimensional point cloud onto a two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply a proposed ground segmentation algorithm to separate ground and non-ground areas. For the ground part, a dynamic triangular mesh is created using a height map and the vehicle position. The non-ground part is divided into small groups, and a local voxel map is then used to model each group; as a result, all inner surfaces are eliminated. Second, for billboard calibration, we perform three stages in each frame. In the first stage, an average ground point is estimated at the billboard location. In the second stage, the distortion angle is calculated. In the final stage, the billboard is updated to correspond to the terrain gradient. | - |
| dc.format.extent | 5 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Real-time 3D scene modeling using dynamic billboard for remote robot control systems | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/MFI.2017.8170454 | - |
| dc.identifier.scopusid | 2-s2.0-85042368554 | - |
| dc.identifier.wosid | 000426937700056 | - |
| dc.identifier.bibliographicCitation | 2017 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), v.2017-November, pp 354 - 358 | - |
| dc.citation.title | 2017 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI) | - |
| dc.citation.volume | 2017-November | - |
| dc.citation.startPage | 354 | - |
| dc.citation.endPage | 358 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
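The abstract's pipeline (height-map ground segmentation, then a per-frame billboard distortion angle from the local terrain gradient) can be illustrated with a minimal sketch. This is not the paper's algorithm: the height-threshold segmentation, the plane-fit tilt estimate, and all function names and parameters (`cell`, `height_tol`) are illustrative assumptions.

```python
import numpy as np

def segment_ground(points, cell=0.5, height_tol=0.3):
    """Split an (N, 3) point cloud into ground / non-ground points.

    Each point is binned into a 2-D grid cell; points within `height_tol`
    of their cell's minimum height are labeled ground. A common height-map
    heuristic, standing in for the paper's segmentation algorithm.
    """
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    # Minimum z per occupied cell approximates the local terrain height.
    keys, inverse = np.unique(ij, axis=0, return_inverse=True)
    min_z = np.full(len(keys), np.inf)
    np.minimum.at(min_z, inverse, points[:, 2])
    return points[:, 2] - min_z[inverse] < height_tol

def billboard_tilt(ground_points):
    """Estimate a billboard distortion angle (radians) at a location.

    Fits a plane to nearby ground points via SVD and returns the angle
    between the plane normal and the vertical axis -- an illustrative
    stand-in for the per-frame calibration described in the abstract.
    """
    centered = ground_points - ground_points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # fitted plane's normal.
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    cos_a = abs(normal @ np.array([0.0, 0.0, 1.0])) / np.linalg.norm(normal)
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

On flat terrain the tilt is near zero, and on a plane z = 0.5x it equals arctan(0.5), so the billboard would be rotated to follow the slope each frame.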
