Detailed Information

Cited 3 times in Web of Science · Cited 3 times in Scopus

Latency-Free Driving Scene Prediction for On-Road Teledriving With Future-Image-Generation

Authors
Lee, Kang-Won; Ko, Dae-Kwan; Kim, Yong-Jun; Ryu, Jee-Hwan; Lim, Soo-Chul
Issue Date
Nov-2024
Publisher
IEEE
Keywords
Generators; Generative adversarial networks; Vehicles; Streaming media; Optical flow; 5G mobile communication; Delays; Teleoperated driving; future video prediction; GAN; autonomous vehicles; remote driving; teleoperation
Citation
IEEE Transactions on Intelligent Transportation Systems, v.25, no.11, pp. 16676-16686
Pages
11
Indexed
SCIE
SCOPUS
Journal Title
IEEE Transactions on Intelligent Transportation Systems
Volume
25
Number
11
Start Page
16676
End Page
16686
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/22935
DOI
10.1109/TITS.2024.3435481
ISSN
1524-9050 (print)
1558-0016 (online)
Abstract
Teledriving could serve as a practical solution for handling unforeseen situations in autonomous driving. However, the latency of transmission networks remains a prominent concern. Despite advancements such as 5G networks, delays in remote driving scenes cannot be entirely eliminated, potentially leading to accidents. While a few attempts have been made to address this issue by predicting future driving scenes, these efforts have been limited in their ability to foresee clear and relevant driving scenarios. This study presents a method to predict a latency-free future driving scene. Unlike prior approaches, our method feeds the command signal of the remote driver into the prediction network, together with the past driving video frames and vehicle status. As a result, we can accurately predict relevant and clear latency-free future driving scenes. A deep neural network combining convolutional long short-term memory (ConvLSTM) and a generative adversarial network (GAN) was used to predict the future driving scene according to the measured latency. The dataset used to train the network was gathered from on-road teledriving experiments, with a maximum vehicle speed of 53 km/h and a driving route length of approximately 1.3 km. The proposed method can predict the future driving scene up to 0.5 s ahead, surpassing the performance of both baseline video prediction methods and a variant that does not use the input command of the driver.
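The paper does not publish code, but the key idea in the abstract — conditioning a ConvLSTM-based frame predictor on the driver's command signal in addition to past frames — can be sketched roughly. The sketch below is a minimal, hypothetical NumPy illustration under two simplifying assumptions: the command vector (e.g. steering, throttle) is tiled into constant image channels concatenated with each frame, and the ConvLSTM uses 1×1 (pointwise) kernels so the convolution reduces to channel mixing. The actual network uses full spatial convolutions plus a GAN discriminator, whose details are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

def command_to_channels(cmd, h, w):
    """Tile a per-frame command vector (e.g. steering, throttle)
    into constant image channels so it can be concatenated with a frame."""
    return np.broadcast_to(cmd[:, None, None], (cmd.shape[0], h, w))

def convlstm_step(x, h, c, W, U, b):
    """One ConvLSTM-style update with 1x1 kernels (pointwise channel mixing).
    x: (Cin,H,W) input; h, c: (Ch,H,W) hidden/cell state.
    W: (4*Ch,Cin) input weights; U: (4*Ch,Ch) recurrent weights; b: (4*Ch,) bias."""
    z = (np.einsum('oc,chw->ohw', W, x)
         + np.einsum('oc,chw->ohw', U, h)
         + b[:, None, None])
    i, f, o, g = np.split(z, 4, axis=0)          # input/forget/output/candidate gates
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)     # cell state update
    h_new = sig(o) * np.tanh(c_new)              # hidden state (spatial feature map)
    return h_new, c_new

# Tiny demo: 4 past grayscale frames + per-frame commands -> one predicted frame.
T, H, W_img = 4, 8, 8        # 4 past frames, 8x8 images
Ch, Ccmd = 6, 2              # hidden channels; command = (steering, throttle)
Cin = 1 + Ccmd               # frame channel + tiled command channels

frames = rng.random((T, 1, H, W_img))
cmds = rng.random((T, Ccmd))

W_in = rng.standard_normal((4 * Ch, Cin)) * 0.1
U_rec = rng.standard_normal((4 * Ch, Ch)) * 0.1
b = np.zeros(4 * Ch)
W_out = rng.standard_normal((1, Ch)) * 0.1      # pointwise readout to one image channel

h = np.zeros((Ch, H, W_img))
c = np.zeros((Ch, H, W_img))
for t in range(T):
    cmd_ch = command_to_channels(cmds[t], H, W_img)
    x = np.concatenate([frames[t], cmd_ch], axis=0)   # (Cin,H,W)
    h, c = convlstm_step(x, h, c, W_in, U_rec, b)

pred_frame = np.einsum('oc,chw->ohw', W_out, h)       # predicted next frame, (1,H,W)
```

In the full method, this predictor would be trained adversarially (the GAN part) and rolled forward by the measured network latency (up to 0.5 s) so the operator sees an estimated current scene rather than a delayed one.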
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > Department of Mechanical, Robotics and Energy Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lim, Soo Chul
College of Engineering (Department of Mechanical, Robotics and Energy Engineering)