Classifying 3D objects in LiDAR point clouds with a back-propagation neural network
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, Wei | - |
| dc.contributor.author | Zou, Shuanghui | - |
| dc.contributor.author | Tian, Yifei | - |
| dc.contributor.author | Fong, Simon | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2023-04-28T07:40:39Z | - |
| dc.date.available | 2023-04-28T07:40:39Z | - |
| dc.date.issued | 2018-10-12 | - |
| dc.identifier.issn | 2192-1962 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8987 | - |
| dc.description.abstract | Because object recognition accuracy is limited, unmanned ground vehicles (UGVs) must perceive their environments for local path planning and obstacle avoidance. To gather high-precision information about the UGV's surroundings, Light Detection and Ranging (LiDAR) is frequently used to collect large-scale point clouds. However, the complex spatial features of these clouds, such as being unstructured, diffuse, and disordered, make it difficult to segment and recognize individual objects. This paper therefore develops an object feature extraction and classification system that uses LiDAR point clouds to classify 3D objects in urban environments. After eliminating the ground points via a height threshold method, the system describes the 3D objects in terms of their geometrical features, namely their volume, density, and eigenvalues. A back-propagation neural network (BPNN) model is trained over many iterations to use these extracted features to classify objects into five types. During training, the parameters in each layer of the BPNN model are continually adjusted via back-propagation using a non-linear sigmoid function. In the system, the object segmentation process supports obstacle detection for autonomous driving, and the object recognition method provides an environment perception function for terrain modeling. Our experimental results indicate that the object recognition accuracy reaches 91.5% in outdoor environments. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | SPRINGEROPEN | - |
| dc.title | Classifying 3D objects in LiDAR point clouds with a back-propagation neural network | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.doi | 10.1186/s13673-018-0152-7 | - |
| dc.identifier.scopusid | 2-s2.0-85054606266 | - |
| dc.identifier.wosid | 000447286200001 | - |
| dc.identifier.bibliographicCitation | HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, v.8, no.1 | - |
| dc.citation.title | HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES | - |
| dc.citation.volume | 8 | - |
| dc.citation.number | 1 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.subject.keywordPlus | OBSTACLE DETECTION | - |
| dc.subject.keywordPlus | REGISTRATION | - |
| dc.subject.keywordPlus | EXTRACTION | - |
| dc.subject.keywordPlus | CAMERA | - |
| dc.subject.keywordPlus | RECOGNITION | - |
| dc.subject.keywordPlus | ALGORITHM | - |
| dc.subject.keywordAuthor | 3D object recognition | - |
| dc.subject.keywordAuthor | Back-propagation neural network | - |
| dc.subject.keywordAuthor | Feature extraction | - |
| dc.subject.keywordAuthor | LiDAR point cloud | - |
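The pipeline the abstract describes (height-threshold ground removal, geometric features built from bounding volume, point density, and covariance eigenvalues, and a sigmoid back-propagation network) can be sketched as below. This is a minimal illustrative reconstruction, not the authors' code: every function name, threshold, network size, and the toy data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def remove_ground(points, height_threshold=0.2):
    """Drop ground points: keep only points whose z exceeds a fixed threshold."""
    return points[points[:, 2] > height_threshold]

def extract_features(points):
    """Volume, density, and covariance eigenvalues, per the abstract."""
    extent = np.maximum(points.max(axis=0) - points.min(axis=0), 1e-6)
    volume = float(np.prod(extent))               # axis-aligned bounding-box volume
    density = len(points) / volume                # points per unit volume
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    return np.array([volume, density, *eigvals])  # 5-dimensional feature vector

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNN:
    """One hidden layer, sigmoid activations, trained by back-propagation."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.o = sigmoid(self.h @ self.W2 + self.b2)
        return self.o

    def backward(self, X, y):
        # Delta rule for squared error; sigmoid derivative is s * (1 - s).
        d_o = (self.o - y) * self.o * (1 - self.o)
        d_h = (d_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T @ d_o
        self.b2 -= self.lr * d_o.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_h
        self.b1 -= self.lr * d_h.sum(axis=0)

# Toy demo with two synthetic segmented "objects" of different shapes.
tall = rng.normal(0, [0.3, 0.3, 2.0], (200, 3)) + [0, 0, 3]  # pole-like
flat = rng.normal(0, [2.0, 2.0, 0.2], (200, 3)) + [0, 0, 1]  # wall-like

X = np.vstack([extract_features(remove_ground(p)) for p in (tall, flat)])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)            # normalize features
y = np.array([[1, 0], [0, 1]], dtype=float)                  # one-hot labels

net = BPNN(n_in=5, n_hidden=8, n_out=2)
for _ in range(2000):
    net.forward(X)
    net.backward(X, y)

pred = net.forward(X).argmax(axis=1)
print(pred)
```

The paper classifies into five object types on real urban scans; the two-class toy above only demonstrates the feature/back-propagation mechanics on easily separable shapes.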