Cited 117 times
Finger-Vein Recognition Based on Deep DenseNet Using Composite Image
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, Jong Min | - |
| dc.contributor.author | Kim, Wan | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2023-04-28T05:42:43Z | - |
| dc.date.available | 2023-04-28T05:42:43Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8656 | - |
| dc.description.abstract | Finger-vein recognition has the advantages of high immutability, as finger veins are located under the skin; high user convenience, as a non-invasive, contactless capture device is used; and high availability, as recognition remains possible even when one finger is damaged or unavailable. However, recognition performance can be degraded by finger positional variation, misalignment, and shading from uneven illumination. Existing hand-crafted feature-based methods have exhibited varied performance depending on how these issues were handled in pre-processing. To overcome this shortcoming of hand-crafted feature-based methods, convolutional neural network (CNN)-based recognition methods have been researched. Existing CNN-based systems use one of two methods: feeding a difference image as the input to the network, or calculating the distance between feature vectors extracted from the CNN. Difference images can be susceptible to noise because they are generated from pixel-value differences, while the distance-based method cannot employ all layers of the trained network and is less accurate than the method employing difference images. To address these issues, this paper examines a method that is less susceptible to noise and uses the entire network: a composite image of two finger-vein images serves as the input to a deep, densely connected convolutional network (DenseNet). Two open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database and The Hong Kong Polytechnic University finger-image database (version 1), were used for experiments, and the results show that the proposed method outperforms existing methods. | - |
| dc.format.extent | 19 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Finger-Vein Recognition Based on Deep DenseNet Using Composite Image | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2019.2918503 | - |
| dc.identifier.scopusid | 2-s2.0-85067170057 | - |
| dc.identifier.wosid | 000471590700001 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 66845 - 66863 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 7 | - |
| dc.citation.startPage | 66845 | - |
| dc.citation.endPage | 66863 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | FEATURE-EXTRACTION | - |
| dc.subject.keywordPlus | NETWORK | - |
| dc.subject.keywordAuthor | Finger-vein recognition | - |
| dc.subject.keywordAuthor | composite image | - |
| dc.subject.keywordAuthor | deep DenseNet | - |
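The abstract's central idea is to pair two finger-vein images into a single composite input for a DenseNet classifier, rather than a noise-prone difference image or a feature-vector distance. A minimal sketch of that idea, assuming the simplest composition (stacking an enrolled image and a probe image as two channels of one input tensor — the function name, image sizes, and the channel-stacking choice are illustrative assumptions, not the paper's actual construction):

```python
import numpy as np

def make_composite(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Combine two grayscale finger-vein images of shape (H, W) into one
    two-channel composite of shape (H, W, 2), suitable as CNN input.

    NOTE: illustrative only -- the paper's actual composite-image
    construction may differ from simple channel stacking.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("enrolled and probe images must share a shape")
    # Normalize each image to [0, 1] so a brightness difference between
    # the two captures does not dominate the composite.
    a = img_a.astype(np.float32) / 255.0
    b = img_b.astype(np.float32) / 255.0
    return np.stack([a, b], axis=-1)

# Hypothetical usage: an enrolled image vs. a probe image of size 64x128.
enrolled = np.random.randint(0, 256, (64, 128), dtype=np.uint8)
probe = np.random.randint(0, 256, (64, 128), dtype=np.uint8)
composite = make_composite(enrolled, probe)
print(composite.shape)  # (64, 128, 2)
```

Because the pair is merged before entering the network, every layer of the trained DenseNet contributes to the genuine/impostor decision, which is the property the abstract contrasts with feature-vector distance matching.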
