Cited 6 times
Point2Lane: Polyline-Based Reconstruction With Principal Points for Lane Detection
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chae, Yeon Jeong | - |
| dc.contributor.author | Park, So Jeong | - |
| dc.contributor.author | Kang, Eun Su | - |
| dc.contributor.author | Chae, Moon Ju | - |
| dc.contributor.author | Ngo, Ba Hung | - |
| dc.contributor.author | Cho, Sung In | - |
| dc.date.accessioned | 2024-08-08T10:00:49Z | - |
| dc.date.available | 2024-08-08T10:00:49Z | - |
| dc.date.issued | 2023-12 | - |
| dc.identifier.issn | 1524-9050 | - |
| dc.identifier.issn | 1558-0016 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21094 | - |
| dc.description.abstract | In this work, we observed that a nonlinear line can be expressed as a set of linear lines. Based on this observation, we propose a novel lane detection method with polyline-based reconstruction. We define the optimal principal points with a new metric, the principal score, to generate the polyline. According to the principal score, we select principal points that have a high influence on lane reconstruction and simply reproduce the target lane by connecting them. Additionally, conventional methods predict a fixed number of parameters to express each lane. However, this can limit the ability to represent lane curvature and cause inaccurate detection results. Therefore, we set the number of principal points to change dynamically with the lane curvature to solve this problem. This allows the model to make flexible detection results reflecting the characteristics of each lane. We also propose a training strategy with a new piecewise linear equation-based loss function. With this strategy, the model is fine-tuned to predict the principal points representing the curved parts of the lane well. Finally, we propose a spatial context-aware feature flip fusion module to exploit the symmetric property of road images. This module helps the model selectively utilize the spatial context in the flipped feature map based on the lane density. We effectively reduce the adverse effects, especially the false positives, of the existing feature flip fusion module, which is misaligned on asymmetrical images. The experiments show that the proposed method provides competitive lane detection results compared to state-of-the-art methods. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Point2Lane: Polyline-Based Reconstruction With Principal Points for Lane Detection | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/TITS.2023.3295807 | - |
| dc.identifier.scopusid | 2-s2.0-85166330614 | - |
| dc.identifier.wosid | 001040567100001 | - |
| dc.identifier.bibliographicCitation | IEEE Transactions on Intelligent Transportation Systems, v.24, no.12, pp 14813 - 14829 | - |
| dc.citation.title | IEEE Transactions on Intelligent Transportation Systems | - |
| dc.citation.volume | 24 | - |
| dc.citation.number | 12 | - |
| dc.citation.startPage | 14813 | - |
| dc.citation.endPage | 14829 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Transportation | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Civil | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Transportation Science & Technology | - |
| dc.subject.keywordAuthor | Autonomous driving | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | lane detection | - |
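The abstract's core idea, representing a nonlinear lane as a polyline whose point count grows with the lane's curvature, can be illustrated with a minimal sketch. Note this is a generic Douglas-Peucker-style point selection, not the paper's principal-score method; the function name and the `tol` threshold below are illustrative assumptions.

```python
import numpy as np

def polyline_points(xs, ys, tol=0.05):
    """Pick the indices of points needed so that the polyline through
    them stays within `tol` of the original curve (a Douglas-Peucker-style
    sketch, not the paper's principal-score selection). Curvier segments
    naturally receive more points."""
    def recurse(i, j):
        if j <= i + 1:
            return []
        # Perpendicular distance of each interior point to the chord (i, j).
        x0, y0, x1, y1 = xs[i], ys[i], xs[j], ys[j]
        dx, dy = x1 - x0, y1 - y0
        norm = np.hypot(dx, dy)
        d = np.abs(dy * (xs[i + 1:j] - x0) - dx * (ys[i + 1:j] - y0)) / norm
        if d.max() <= tol:
            return []  # chord already approximates this segment well
        k = i + 1 + int(np.argmax(d))  # split at the worst-fitting point
        return recurse(i, k) + [k] + recurse(k, j)

    return np.array([0] + recurse(0, len(xs) - 1) + [len(xs) - 1])

xs = np.linspace(0.0, 1.0, 101)
# A straight lane needs only its two endpoints ...
straight = polyline_points(xs, 0.5 * xs)
# ... while a curved lane gets extra points where it bends.
curved = polyline_points(xs, np.sin(3.0 * xs))
print(len(straight), len(curved))
```

As in the paper's motivation, the number of selected points is not fixed in advance: a straight lane collapses to its endpoints, while a curved lane accumulates points exactly where the bend is.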
