Cited 32 times
Driver Gaze Detection Based on Deep Residual Networks Using the Combined Single Image of Dual Near-Infrared Cameras
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yoon, Hyo Sik | - |
| dc.contributor.author | Baek, Na Rae | - |
| dc.contributor.author | Truong, Noi Quang | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2023-04-28T05:42:45Z | - |
| dc.date.available | 2023-04-28T05:42:45Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8663 | - |
| dc.description.abstract | Research into the prevention of driver inattention by detecting the driver gaze has become more vital, as traffic accidents due to driver inattention have increased. In a vehicle environment, the conventional gaze-detection methods include detecting the driver gaze using single or multiple cameras. When a single camera is used to detect the driver gaze, excessive rotation of the driver's head may prevent the eye region from being accurately detected, thereby reducing the gaze-detection accuracy. To address this issue, researchers previously attempted gaze detection using dual cameras. However, these methods selectively use the information obtained from each camera; thus, accuracy improvement is limited because the information is not simultaneously used. In addition, the processing complexity increases when images obtained from dual cameras are simultaneously processed. Accordingly, this paper proposes a method to detect the driver's gaze position in the vehicle. This is the first study to calculate the driver gaze via a deep convolutional neural network (CNN) that simultaneously uses image information acquired from the dual near-infrared light cameras. Previous research selectively used one of the images acquired from the dual cameras, and the existing CNN-based gaze-detection methods use multiple deep CNNs for the driver eyes and facial images. However, the proposed method uses one CNN model that integrates all information acquired from the dual cameras into one three-channel image and uses it as an input for the network, thereby increasing the recognition reliability and reducing the computational cost. We conducted experiments based on a self-built driver database that comprised the images from 26 participants (Dongguk dual-camera-based gaze database) and the Columbia gaze dataset, which is an open database. The results demonstrate that the proposed method shows better performance than the existing methods. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Driver Gaze Detection Based on Deep Residual Networks Using the Combined Single Image of Dual Near-Infrared Cameras | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2019.2928339 | - |
| dc.identifier.scopusid | 2-s2.0-85073890155 | - |
| dc.identifier.wosid | 000477866400017 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 93448 - 93461 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 7 | - |
| dc.citation.startPage | 93448 | - |
| dc.citation.endPage | 93461 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | SYSTEM | - |
| dc.subject.keywordPlus | ATTENTION | - |
| dc.subject.keywordPlus | TRACKING | - |
| dc.subject.keywordAuthor | Driver gaze detection | - |
| dc.subject.keywordAuthor | dual NIR cameras | - |
| dc.subject.keywordAuthor | deep residual network | - |
| dc.subject.keywordAuthor | combined single image of three channels | - |
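The abstract describes integrating the images from both NIR cameras into a single three-channel image that serves as the sole input to one CNN, rather than running separate networks per camera. A minimal sketch of that idea is shown below; the choice of the pixel-wise mean of the two views as the third channel, and the function name `combine_dual_nir`, are illustrative assumptions, not the paper's actual channel assignment.

```python
import numpy as np

def combine_dual_nir(img_cam1: np.ndarray, img_cam2: np.ndarray) -> np.ndarray:
    """Fuse two single-channel NIR images into one 3-channel image.

    Channels 0 and 1 hold the two camera views; channel 2 here is their
    pixel-wise mean (an assumption for illustration -- the paper's actual
    third-channel definition may differ).
    """
    assert img_cam1.shape == img_cam2.shape, "views must share resolution"
    mean = ((img_cam1.astype(np.float32) + img_cam2.astype(np.float32)) / 2.0)
    mean = mean.astype(img_cam1.dtype)
    # Stack along the last axis to form an (H, W, 3) image, which can then
    # be fed to a standard residual network expecting RGB-shaped input.
    return np.stack([img_cam1, img_cam2, mean], axis=-1)

# Example: two 224x224 NIR frames combined into one network input.
cam1 = np.zeros((224, 224), dtype=np.uint8)
cam2 = np.full((224, 224), 255, dtype=np.uint8)
combined = combine_dual_nir(cam1, cam2)
print(combined.shape)  # (224, 224, 3)
```

Feeding one fused image through a single network, instead of two eye/face images through multiple CNNs, is what the abstract credits for the reduced computational cost.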
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
