Cited 17 times in Web of Science
Facial Action Units for Training Convolutional Neural Networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Trinh Thi Doan Pham | - |
| dc.contributor.author | Won, Chee Sun | - |
| dc.date.accessioned | 2023-04-28T05:42:22Z | - |
| dc.date.available | 2023-04-28T05:42:22Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8611 | - |
| dc.description.abstract | This paper deals with the problem of training convolutional neural networks (CNNs) with facial action units (AUs). In particular, we focus on the imbalance problem of training datasets for facial emotion classification. Since training a CNN on an imbalanced dataset tends to bias learning toward the majority classes and eventually degrades classification accuracy, the number of training images in the minority classes must be increased so that the training images are evenly distributed over all classes. However, it is difficult to find images with a similar facial emotion for the oversampling. In this paper, we propose to use the AU features to retrieve an image with a similar emotion. The query selection from the minority class and the AU-based retrieval repeat until the numbers of training data are balanced over all classes. Also, to improve the classification accuracy, the AU features are fused with the CNN features to train a support vector machine (SVM) for the final classification. The experiments have been conducted on three imbalanced facial image datasets: RAF-DB, FER2013, and ExpW. The results demonstrate that the CNNs trained with the AU features improve the classification accuracy by 3%-4%. | - |
| dc.format.extent | 9 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Facial Action Units for Training Convolutional Neural Networks | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2019.2921241 | - |
| dc.identifier.scopusid | 2-s2.0-85068350994 | - |
| dc.identifier.wosid | 000473769900001 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 77816 - 77824 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 7 | - |
| dc.citation.startPage | 77816 | - |
| dc.citation.endPage | 77824 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | CLASS IMBALANCE | - |
| dc.subject.keywordPlus | EXPRESSIONS | - |
| dc.subject.keywordAuthor | Convolutional neural network | - |
| dc.subject.keywordAuthor | facial emotion recognition | - |
| dc.subject.keywordAuthor | data oversampling | - |
| dc.subject.keywordAuthor | facial action units | - |
| dc.subject.keywordAuthor | data imbalance | - |
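The abstract describes two steps: AU-based retrieval to oversample minority emotion classes until the dataset is balanced, and fusion of AU features with CNN features to train an SVM. A minimal Python sketch of both steps is below, using synthetic data; the feature dimensions (512-D CNN embeddings, 17-D AU vectors) and the two-class setup are illustrative assumptions, and unlike the paper, which retrieves new images from a larger pool, this toy version duplicates the nearest minority-class neighbor in AU space.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic features: 512-D CNN embeddings and 17-D AU vectors per image
# (dimensions are illustrative assumptions, not the paper's exact setup).
n_major, n_minor = 150, 50
au = rng.normal(size=(n_major + n_minor, 17))
cnn = rng.normal(size=(n_major + n_minor, 512))
labels = np.concatenate([np.zeros(n_major, int), np.ones(n_minor, int)])

# AU-based retrieval oversampling: pick a query from the minority class,
# retrieve the most AU-similar image (excluding the query itself), and add
# it to the training set, repeating until the classes are balanced.
minority = np.flatnonzero(labels == 1)
extra = []
while len(minority) + len(extra) < n_major:
    q = minority[rng.integers(len(minority))]          # query from minority class
    d = np.linalg.norm(au[minority] - au[q], axis=1)   # distances in AU space
    d[minority == q] = np.inf                          # exclude the query itself
    extra.append(minority[np.argmin(d)])               # retrieved nearest neighbor

# Fuse CNN and AU features and train an SVM on the balanced set.
idx = np.concatenate([np.arange(len(labels)), np.array(extra)])
fused = np.concatenate([cnn, au], axis=1)
clf = SVC(kernel="linear").fit(fused[idx], labels[idx])
```

Concatenation is the simplest fusion choice for the sketch; after the loop, both classes contribute the same number of training samples to the SVM.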
