Perspective Transformer and MobileNets-Based 3D Lane Detection from Single 2D Image
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Li, Mengyu | - |
| dc.contributor.author | Chu, Phuong Minh | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2023-04-27T09:40:31Z | - |
| dc.date.available | 2023-04-27T09:40:31Z | - |
| dc.date.issued | 2022-10 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/2475 | - |
| dc.description.abstract | Three-dimensional (3D) lane detection is widely used in image understanding, image analysis, 3D scene reconstruction, and autonomous driving. Recently, various methods for 3D lane detection from single two-dimensional (2D) images have been proposed to address inaccurate lane layouts in scenarios such as uphill, downhill, and bumpy roads. Many previous studies struggled with complex cases involving realistic datasets. In addition, these methods have low accuracy and high computational resource requirements. To solve these problems, we put forward a high-quality, cost-effective method to predict 3D lanes from a single 2D image captured by a conventional camera. The proposed method comprises three stages. First, a MobileNet model that requires low computational resources is employed to generate multiscale front-view features from a single RGB image. Then, a perspective transformer calculates bird's eye view (BEV) features from the front-view features. Finally, two convolutional neural networks predict the 2D and 3D coordinates and respective lane types. Experimental results verified that our method achieves fast convergence and provides high-quality 3D lanes from single 2D images. Moreover, the proposed method requires no exceptional computational resources, thereby reducing its implementation costs. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Perspective Transformer and MobileNets-Based 3D Lane Detection from Single 2D Image | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math10193697 | - |
| dc.identifier.scopusid | 2-s2.0-85139913061 | - |
| dc.identifier.wosid | 000867037100001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.10, no.19, pp 1 - 14 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 10 | - |
| dc.citation.number | 19 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 14 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | 3D lane detection software from 2D image | - |
| dc.subject.keywordAuthor | tool for autonomous driving | - |
| dc.subject.keywordAuthor | 3D scene reconstruction software | - |
| dc.subject.keywordAuthor | deep learning software | - |
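The abstract describes a three-stage pipeline: front-view feature extraction with MobileNet, a perspective transform from front view to bird's eye view (BEV), and coordinate prediction with two CNNs. The middle stage can be illustrated, in a simplified form, as a planar homography mapping front-view pixel coordinates onto the BEV ground plane. The following is a minimal NumPy sketch; the homography matrix values and function name are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical 3x3 homography mapping front-view pixels (u, v) to
# BEV ground-plane coordinates; the values below are illustrative only.
H = np.array([
    [1.0, 0.2,  -50.0],
    [0.0, 2.0, -100.0],
    [0.0, 0.01,   1.0],
])

def front_to_bev(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project Nx2 front-view pixel coordinates to Nx2 BEV coordinates."""
    n = points_uv.shape[0]
    homog = np.hstack([points_uv, np.ones((n, 1))])  # to homogeneous coords
    mapped = homog @ H.T                             # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]            # dehomogenize

# Two sample lane pixels on the image vertical centerline
pixels = np.array([[320.0, 400.0], [320.0, 300.0]])
bev = front_to_bev(pixels, H)
```

In the paper's method this mapping is learned by a perspective transformer over multiscale features rather than applied as a fixed matrix; the sketch only shows the geometric idea behind the front-view-to-BEV step.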
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.
