Cited 64 times
Pedestrian detection based on faster R-CNN in nighttime by fusing deep convolutional features of successive images
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Jong Hyun | - |
| dc.contributor.author | Batchuluun, Ganbayar | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-08-08T03:31:05Z | - |
| dc.date.available | 2024-08-08T03:31:05Z | - |
| dc.date.issued | 2018-12-30 | - |
| dc.identifier.issn | 0957-4174 | - |
| dc.identifier.issn | 1873-6793 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/17099 | - |
| dc.description.abstract | Existing studies using visible-light cameras have mainly focused on methods of pedestrian detection during daytime. However, these studies found it difficult to detect pedestrians during nighttime with low external light. A near-infrared (NIR) illuminator has limitations in terms of illumination angle and distance, and its power needs to be adjusted depending on whether an object is near or distant. Although thermal cameras have been used for nighttime pedestrian detection, they are currently expensive and thus difficult to install in many places. To solve these problems, attempts have been made to use visible-light cameras for nighttime pedestrian detection. However, most of these attempts considered an indoor environment where the distance to the object was short. This study proposes a method of pedestrian detection at nighttime using a visible-light camera and a faster region-based convolutional neural network (R-CNN). In addition, as pedestrians cannot be reliably detected from a single nighttime image, we combined deep convolutional features in successive frames. Using the Korea Advanced Institute of Science and Technology (KAIST) open database, we conducted experiments and observed that the proposed method performed better than the baseline methods at all times (day and night). In addition, through experiments with the National ICT Australia Ltd. (NICTA) open database, we confirmed that the proposed method is effective for pedestrian detection at all times. Finally, we present theoretical grounds for the proposed fusion. | - |
| dc.format.extent | 19 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | - |
| dc.title | Pedestrian detection based on faster R-CNN in nighttime by fusing deep convolutional features of successive images | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.doi | 10.1016/j.eswa.2018.07.020 | - |
| dc.identifier.scopusid | 2-s2.0-85050198155 | - |
| dc.identifier.wosid | 000446949300002 | - |
| dc.identifier.bibliographicCitation | EXPERT SYSTEMS WITH APPLICATIONS, v.114, pp 15 - 33 | - |
| dc.citation.title | EXPERT SYSTEMS WITH APPLICATIONS | - |
| dc.citation.volume | 114 | - |
| dc.citation.startPage | 15 | - |
| dc.citation.endPage | 33 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Operations Research & Management Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Operations Research & Management Science | - |
| dc.subject.keywordPlus | ENHANCEMENT | - |
| dc.subject.keywordPlus | TRACKING | - |
| dc.subject.keywordAuthor | Pedestrian detection | - |
| dc.subject.keywordAuthor | Faster R-CNN | - |
| dc.subject.keywordAuthor | Nighttime image | - |
| dc.subject.keywordAuthor | Fusion of deep convolutional features | - |