Cited 9 times
Traffic Accident Detection Using Background Subtraction and CNN Encoder-Transformer Decoder in Video Frames
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhang, Yihang | - |
| dc.contributor.author | Sung, Yunsick | - |
| dc.date.accessioned | 2024-08-08T08:30:52Z | - |
| dc.date.available | 2024-08-08T08:30:52Z | - |
| dc.date.issued | 2023-07 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/20438 | - |
| dc.description.abstract | Artificial intelligence plays a significant role in traffic-accident detection. Traffic accidents involve a cascade of inadvertent events, making traditional detection approaches challenging. For instance, Convolutional Neural Network (CNN)-based approaches cannot analyze temporal relationships among objects, and Recurrent Neural Network (RNN)-based approaches suffer from low processing speeds and cannot detect traffic accidents simultaneously across multiple frames. Furthermore, these networks dismiss background interference in input video frames. This paper proposes a framework that begins by subtracting the background based on You Only Look Once (YOLOv5), which adaptively reduces background interference when detecting objects. Subsequently, the CNN encoder and Transformer decoder are combined into an end-to-end model to extract the spatial and temporal features between different time points, allowing for a parallel analysis between input video frames. The proposed framework was evaluated on the Car Crash Dataset through a series of comparison and ablation experiments. Our framework was benchmarked against three accident-detection models to evaluate its effectiveness, and the proposed framework demonstrated a superior accuracy of approximately 96%. The results of the ablation experiments indicate that when background subtraction was not incorporated into the proposed framework, the values of all evaluation indicators decreased by approximately 3%. | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Traffic Accident Detection Using Background Subtraction and CNN Encoder-Transformer Decoder in Video Frames | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math11132884 | - |
| dc.identifier.scopusid | 2-s2.0-85164728871 | - |
| dc.identifier.wosid | 001030986800001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.11, no.13, pp 1 - 15 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 11 | - |
| dc.citation.number | 13 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 15 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | artificial intelligence | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | traffic-accident detection | - |
| dc.subject.keywordAuthor | background subtraction | - |
| dc.subject.keywordAuthor | CNN encoder | - |
| dc.subject.keywordAuthor | Transformer decoder | - |
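The abstract describes a preprocessing step that suppresses background interference by keeping only the regions where YOLOv5 detects objects. A minimal sketch of that idea is below; it is not the paper's implementation. The bounding boxes are supplied directly as a stand-in for detector output (running YOLOv5 itself is out of scope here), and the masking strategy of zeroing non-object pixels is an assumption about how "background subtraction" is realized.

```python
import numpy as np

def subtract_background(frame, boxes):
    """Zero out pixels that fall outside the detected-object bounding boxes.

    frame: H x W x C image array.
    boxes: list of (x1, y1, x2, y2) pixel coordinates. In the paper's
    pipeline these would come from YOLOv5; here they are assumed inputs.
    """
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True   # mark object regions as foreground
    out = frame.copy()
    out[~mask] = 0                  # suppress everything else (background)
    return out

# Toy example: an 8x8 all-white frame with one "detected vehicle" box.
frame = np.full((8, 8, 3), 255, dtype=np.uint8)
masked = subtract_background(frame, [(2, 2, 6, 6)])
```

The masked frames would then be fed to the CNN encoder, so the temporal model attends to object regions rather than scene background.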