Cited 12 times
2D&3DHNet for 3D Object Classification in LiDAR Point Cloud
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, Wei | - |
| dc.contributor.author | Li, Dechao | - |
| dc.contributor.author | Sun, Su | - |
| dc.contributor.author | Zhang, Lingfeng | - |
| dc.contributor.author | Xin, Yu | - |
| dc.contributor.author | Sung, Yunsick | - |
| dc.contributor.author | Choi, Ryong | - |
| dc.date.accessioned | 2023-04-27T10:40:50Z | - |
| dc.date.available | 2023-04-27T10:40:50Z | - |
| dc.date.issued | 2022-07 | - |
| dc.identifier.issn | 2072-4292 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/2890 | - |
| dc.description.abstract | Accurate semantic analysis of LiDAR point clouds enables interaction between intelligent vehicles and the real environment. This paper proposes a hybrid 2D and 3D Hough Net that combines 3D global Hough features and 2D local Hough features in a deep learning classification network. First, the 3D object point clouds are mapped into the 3D Hough space to extract global Hough features, which are input into a 3D convolutional neural network for training. Furthermore, a multi-scale critical point sampling method is designed to extract critical points in the 2D views projected from the point clouds, reducing the computation spent on redundant points. To extract local features, a grid-based dynamic nearest neighbors algorithm is designed that searches the neighbors of the critical points. Finally, the outputs of the two networks are concatenated and fed into fully connected layers for object classification. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | 2D&3DHNet for 3D Object Classification in LiDAR Point Cloud | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/rs14133146 | - |
| dc.identifier.scopusid | 2-s2.0-85133506687 | - |
| dc.identifier.wosid | 000824268600001 | - |
| dc.identifier.bibliographicCitation | Remote Sensing, v.14, no.13, pp 1 - 17 | - |
| dc.citation.title | Remote Sensing | - |
| dc.citation.volume | 14 | - |
| dc.citation.number | 13 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 17 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Environmental Sciences & Ecology | - |
| dc.relation.journalResearchArea | Geology | - |
| dc.relation.journalResearchArea | Remote Sensing | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Environmental Sciences | - |
| dc.relation.journalWebOfScienceCategory | Geosciences, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Remote Sensing | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
| dc.subject.keywordPlus | RECONSTRUCTION | - |
| dc.subject.keywordPlus | NETWORK | - |
| dc.subject.keywordPlus | 3-D | - |
| dc.subject.keywordAuthor | 3D object classification | - |
| dc.subject.keywordAuthor | deep neural network | - |
| dc.subject.keywordAuthor | Hough space | - |
| dc.subject.keywordAuthor | LiDAR | - |
| dc.subject.keywordAuthor | intelligent vehicle | - |
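The abstract's first step, mapping a point cloud into a 3D Hough space to obtain a fixed-size global feature, can be loosely sketched as a voting accumulator. This is an illustrative toy under assumed parameters (`bins`, `bounds` are hypothetical), not the authors' implementation.

```python
import numpy as np

def hough_vote_3d(points, bins=8, bounds=(-1.0, 1.0)):
    """Accumulate votes for a point cloud in a coarse 3D grid.

    A simplified stand-in for a 3D Hough-style global feature: each
    point votes for the cell it falls into, and the normalized
    accumulator serves as a fixed-size global descriptor that a 3D
    CNN could consume.
    """
    lo, hi = bounds
    # Map coordinates into cell indices [0, bins) and clip boundary points.
    idx = np.floor((points - lo) / (hi - lo) * bins).astype(int)
    idx = np.clip(idx, 0, bins - 1)
    acc = np.zeros((bins, bins, bins), dtype=np.float64)
    # Unbuffered accumulation so repeated cell indices all count.
    np.add.at(acc, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return acc / max(len(points), 1)  # normalize by point count

# Usage: 100 random points yield an 8x8x8 descriptor summing to 1.
pts = np.random.uniform(-1.0, 1.0, size=(100, 3))
feat = hough_vote_3d(pts)
```

The normalized accumulator is invariant to the number of input points, which is one reason voting-style descriptors suit variable-density LiDAR scans.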
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.
