Cited 39 times
Facial Action Units-Based Image Retrieval for Facial Expression Recognition
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Trinh Thi Doan Pham | - |
| dc.contributor.author | Kim, Sesong | - |
| dc.contributor.author | Lu, Yucheng | - |
| dc.contributor.author | Jung, Seung-Won | - |
| dc.contributor.author | Won, Chee-Sun | - |
| dc.date.accessioned | 2023-04-28T05:42:25Z | - |
| dc.date.available | 2023-04-28T05:42:25Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8621 | - |
| dc.description.abstract | Facial expression recognition (FER) is a very challenging problem in computer vision. Although extensive research has been conducted to improve FER performance in recent years, there is still room for improvement. A common goal of FER is to classify a given face image into one of seven emotion categories: angry, disgust, fear, happy, neutral, sad, and surprise. In this paper, we propose to use a simple multi-layer perceptron (MLP) classifier that determines whether the current classification result is reliable. If the current classification result is determined to be unreliable, we use the given face image as a query to search for similar images. In particular, facial action units are used to retrieve images with a similar facial expression. Then, another MLP is trained to predict the final emotion category by aggregating the classification output vectors of the query image and its retrieved similar images. Experimental results on the FER2013 dataset demonstrate that the performance of state-of-the-art networks can be further improved by our proposed method. | - |
| dc.format.extent | 8 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Facial Action Units-Based Image Retrieval for Facial Expression Recognition | - |
| dc.type | Article | - |
| dc.publisher.location | USA | - |
| dc.identifier.doi | 10.1109/ACCESS.2018.2889852 | - |
| dc.identifier.scopusid | 2-s2.0-85060533421 | - |
| dc.identifier.wosid | 000456359800001 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 5200 - 5207 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 7 | - |
| dc.citation.startPage | 5200 | - |
| dc.citation.endPage | 5207 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | Convolutional neural networks | - |
| dc.subject.keywordAuthor | facial expression recognition | - |
| dc.subject.keywordAuthor | image retrieval | - |
| dc.subject.keywordAuthor | facial action units | - |
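The abstract describes a three-stage pipeline: a reliability check on the base classifier's output, retrieval of gallery images with similar facial action units (AUs), and aggregation of the output vectors into a final prediction. A minimal sketch of that flow is shown below; it is an illustration only, not the paper's implementation. All function names are hypothetical, the max-probability threshold stands in for the paper's first reliability MLP, Euclidean AU distance stands in for its retrieval metric, and averaging stands in for its second aggregation MLP.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def is_reliable(prob, threshold=0.9):
    # Stand-in for the paper's first MLP, which judges whether the
    # classifier's output vector is trustworthy.
    return prob.max() >= threshold

def retrieve_similar(query_aus, gallery_aus, k=3):
    # Retrieve the k gallery images whose facial action unit (AU)
    # vectors are closest to the query's (placeholder distance metric).
    dists = np.linalg.norm(gallery_aus - query_aus, axis=1)
    return np.argsort(dists)[:k]

def aggregate(query_prob, retrieved_probs):
    # Stand-in for the paper's second MLP, which aggregates the output
    # vectors of the query and its retrieved neighbors; here we average.
    return softmax(np.mean(np.vstack([query_prob, *retrieved_probs]), axis=0))

# Toy data: 7 emotion classes, 17-dimensional AU vectors (both illustrative).
rng = np.random.default_rng(0)
gallery_aus = rng.random((10, 17))
gallery_probs = [softmax(rng.random(7)) for _ in range(10)]

query_prob = softmax(np.array([0.2, 0.3, 0.25, 0.1, 0.05, 0.05, 0.05]))
query_aus = rng.random(17)

if is_reliable(query_prob):
    final_prob = query_prob
else:
    idx = retrieve_similar(query_aus, gallery_aus)
    final_prob = aggregate(query_prob, [gallery_probs[i] for i in idx])

label = int(np.argmax(final_prob))
```

The key design point the abstract emphasizes is that retrieval is triggered only for unreliable predictions, so the extra cost of the AU search and aggregation is paid only on hard examples.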