Cited 3 times in Web of Science
OADE-Net: Original and Attention-Guided DenseNet-Based Ensemble Network for Person Re-Identification Using Infrared Light Images
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jeong, Min Su | - |
| dc.contributor.author | Jeong, Seong In | - |
| dc.contributor.author | Kang, Seon Jong | - |
| dc.contributor.author | Ryu, Kyung Bong | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2023-04-27T09:40:36Z | - |
| dc.date.available | 2023-04-27T09:40:36Z | - |
| dc.date.issued | 2022-10 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/2507 | - |
| dc.description.abstract | Recently, research on methods that use images captured during both day and night has been actively conducted in the field of person re-identification (ReID). In particular, ReID is increasingly performed using infrared (IR) images captured at night together with red-green-blue (RGB) images, in addition to ReID that uses only RGB images captured during the daytime. However, little research has addressed ReID that uses only IR images, because their color and texture information cannot be identified easily. This study therefore proposes the original and attention-guided DenseNet-based ensemble network (OADE-Net), a ReID model that can recognize pedestrians using only IR images captured during the day and night. The OADE-Net consists of the original and attention-guided DenseNets and a shallow convolutional neural network for the ensemble network (SCE-Net), which combines the two models. Owing to the lack of existing open datasets consisting only of IR images, the experiments are conducted on a new dataset of IR images derived from two open databases (DBPerson-Recog-DB1 and SYSU-MM01). On DBPerson-Recog-DB1, the OADE-Net achieves a rank-1 ReID accuracy of 79.71% and a mean average precision (mAP) of 78.17%; on SYSU-MM01, it achieves a rank-1 accuracy of 57.30% and an mAP of 41.50%. On both datasets, the accuracy of the OADE-Net is higher than that of existing score-level fusion and state-of-the-art methods. | - |
| dc.format.extent | 26 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | OADE-Net: Original and Attention-Guided DenseNet-Based Ensemble Network for Person Re-Identification Using Infrared Light Images | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math10193503 | - |
| dc.identifier.scopusid | 2-s2.0-85139768554 | - |
| dc.identifier.wosid | 000868140700001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.10, no.19, pp 1 - 26 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 10 | - |
| dc.citation.number | 19 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 26 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | person re-identification | - |
| dc.subject.keywordAuthor | infrared image | - |
| dc.subject.keywordAuthor | original and attention-guided DenseNet-based ensemble network | - |
| dc.subject.keywordAuthor | shallow convolutional neural network | - |
| dc.subject.keywordAuthor | ensemble network | - |
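The abstract reports ReID performance as rank-1 accuracy and mean average precision (mAP). As a hedged illustration only (the function name and toy data below are assumptions, not taken from the paper), this is a minimal sketch of how these two metrics are commonly computed from a query-gallery similarity matrix:

```python
def rank1_and_map(sim, query_ids, gallery_ids):
    """Rank-1 accuracy and mAP from a similarity matrix.

    sim[i][j] is the similarity of query i to gallery item j
    (higher means more similar). IDs mark person identities.
    Illustrative sketch; not the paper's evaluation code.
    """
    rank1_hits = 0
    aps = []
    for i, qid in enumerate(query_ids):
        # Rank gallery items by descending similarity to this query.
        order = sorted(range(len(gallery_ids)), key=lambda j: -sim[i][j])
        matches = [gallery_ids[j] == qid for j in order]
        if matches[0]:  # correct identity at rank 1
            rank1_hits += 1
        # Average precision: mean of precision at each correct match.
        hits, precisions = 0, []
        for rank, ok in enumerate(matches, start=1):
            if ok:
                hits += 1
                precisions.append(hits / rank)
        aps.append(sum(precisions) / max(hits, 1))
    return rank1_hits / len(query_ids), sum(aps) / len(aps)
```

In a score-level fusion setting such as the one the paper compares against, `sim` would typically be a (possibly weighted) combination of the similarity scores produced by the two DenseNet branches before these metrics are computed.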
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
