Cited 12 times
Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Jisun | - |
| dc.contributor.author | Wen, Mingyun | - |
| dc.contributor.author | Sung, Yunsick | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2023-04-28T02:40:40Z | - |
| dc.date.available | 2023-04-28T02:40:40Z | - |
| dc.date.issued | 2019-10-02 | - |
| dc.identifier.issn | 1424-8220 | - |
| dc.identifier.issn | 1424-3210 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/7530 | - |
| dc.description.abstract | Nowadays, deep learning methods based on a virtual environment are widely applied to research and technology development for an autonomous vehicle's smart sensors and devices. Learning various driving environments in advance is important for handling unexpected situations that can occur in the real world and for continuing to drive without accidents. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering various possible real-world situations. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle or through a scenario analysis process conducted by experts. However, these two approaches increase the period and the cost of scenario generation as more scenarios are created. This paper proposes a deep-learning-based scenario generation method that creates scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method uses deep learning to extract multiple events from a video taken on a real road and reproduces those events in a virtual simulator. First, a Faster region-based convolutional neural network (Faster R-CNN) extracts bounding boxes for each object in a driving video. Second, high-level event bounding boxes are calculated. Third, long-term recurrent convolutional networks (LRCN) classify the type of each extracted event. Finally, all event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments were conducted using real driving video data and a virtual simulator. The results show that the deep learning model achieves an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in a virtual simulator for the smart sensors and devices of an autonomous vehicle. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/s19204456 | - |
| dc.identifier.scopusid | 2-s2.0-85073463188 | - |
| dc.identifier.wosid | 000497864700104 | - |
| dc.identifier.bibliographicCitation | SENSORS, v.19, no.20 | - |
| dc.citation.title | SENSORS | - |
| dc.citation.volume | 19 | - |
| dc.citation.number | 20 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Instruments & Instrumentation | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
| dc.subject.keywordAuthor | scenario generation | - |
| dc.subject.keywordAuthor | autonomous vehicle | - |
| dc.subject.keywordAuthor | smart sensor and device | - |
| dc.subject.keywordAuthor | deep learning | - |
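The abstract outlines a four-step pipeline: per-object bounding boxes from Faster R-CNN, high-level event bounding boxes computed from them, LRCN classification of each event, and combination of the results into one scenario. A minimal sketch of the two non-network steps is below; the `(x1, y1, x2, y2)` box format, the union-based merge rule, and the scenario dictionary are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of two steps from the abstract's pipeline:
# (1) merging per-object detections into one high-level event bounding box,
# (2) combining per-event classification labels into a single scenario.
# The merge rule (bounding-box union) and scenario layout are assumptions.

def union_box(boxes):
    """Merge object boxes (x1, y1, x2, y2) into one enclosing event box."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

def combine_events(event_labels):
    """Combine per-event classification results into one ordered scenario."""
    return {"events": list(event_labels)}

# Example: two detected objects form one event region; two classified
# events (hypothetical labels) are combined into a scenario.
detections = [(10, 20, 50, 60), (40, 30, 90, 80)]
event_box = union_box(detections)            # → (10, 20, 90, 80)
scenario = combine_events(["cut-in", "pedestrian-crossing"])
```

In the paper's full pipeline, the event box would crop a spatio-temporal video region that is fed to the LRCN classifier; the simulator then replays the combined scenario.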
