Cited 13 times.
A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, Wei | - |
| dc.contributor.author | Zou, Shuanghui | - |
| dc.contributor.author | Tian, Yifei | - |
| dc.contributor.author | Sun, Su | - |
| dc.contributor.author | Fong, Simon | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.contributor.author | Qiu, Lvyang | - |
| dc.date.accessioned | 2023-04-28T06:41:33Z | - |
| dc.date.available | 2023-04-28T06:41:33Z | - |
| dc.date.issued | 2018-12 | - |
| dc.identifier.issn | 1976-913X | - |
| dc.identifier.issn | 2092-805X | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8853 | - |
| dc.description.abstract | Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules, namely, multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Textured meshes and colored particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply the GPU parallel computation method to implement the applied computer graphics and image processing algorithms in parallel. | - |
| dc.format.extent | 12 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Korea Information Processing Society | - |
| dc.title | A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.3745/JIPS.02.0099 | - |
| dc.identifier.scopusid | 2-s2.0-85059381868 | - |
| dc.identifier.bibliographicCitation | JIPS(Journal of Information Processing Systems), v.14, no.6, pp 1445 - 1456 | - |
| dc.citation.title | JIPS(Journal of Information Processing Systems) | - |
| dc.citation.volume | 14 | - |
| dc.citation.number | 6 | - |
| dc.citation.startPage | 1445 | - |
| dc.citation.endPage | 1456 | - |
| dc.type.docType | Article | - |
| dc.identifier.kciid | ART002428244 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | esci | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.subject.keywordPlus | RECOGNITION | - |
| dc.subject.keywordAuthor | Driving Awareness | - |
| dc.subject.keywordAuthor | Environment Perception | - |
| dc.subject.keywordAuthor | Unmanned Ground Vehicle | - |
| dc.subject.keywordAuthor | 3D Reconstruction | - |
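The abstract's environment perception module clusters non-ground LiDAR points into individual objects using a connected component labeling algorithm. The paper's implementation is not reproduced here, but a minimal sketch of the general technique, assuming the points have been projected onto a 2D occupancy grid (a common simplification), looks like this:

```python
# Minimal sketch (not the authors' implementation): clustering occupied
# cells of a 2D occupancy grid into objects via 4-connected component
# labeling with a breadth-first search.
from collections import deque

def label_components(grid):
    """Label 4-connected occupied cells (value 1) with component ids.

    Returns (number_of_components, label_grid), where label_grid holds
    0 for free cells and a positive component id for occupied cells.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            # Start a new component at each unlabeled occupied cell.
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    # Visit the four axis-aligned neighbors.
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels
```

On a grid with two separated blobs of occupied cells, `label_components` assigns each blob a distinct id, so each id can then be treated as one obstacle. The paper accelerates this kind of per-cell work on the GPU; the sequential version above only illustrates the labeling logic itself.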
