Detailed Information


Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention based Transformer

Full metadata record
DC Field: Value
dc.contributor.author: Kim, Jun-Hwa
dc.contributor.author: Kim, Namho
dc.contributor.author: Hong, Minsoo
dc.contributor.author: Won, Chee Sun
dc.date.accessioned: 2024-11-11T08:00:09Z
dc.date.available: 2024-11-11T08:00:09Z
dc.date.issued: 2024-09
dc.identifier.issn: 2160-7508
dc.identifier.issn: 2160-7516
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/56181
dc.description.abstract: Facial expressions are among the most crucial cues for understanding humans at a psychological level. Since the analysis of human behavior can be informed by facial expressions, it is essential to employ indicators such as expression (EXPR), valence-arousal (VA), and action units (AU). In this paper, we introduce the method we submitted to the Challenge of the 6th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) at CVPR 2024. Our method uses the multi-modal Aff-Wild2 dataset, which is split into visual and audio modalities. For the visual data, we extract features with a SimMIM model pre-trained on a diverse set of facial expression data; for the audio data, we extract features with the Wav2Vec model. To fuse the extracted visual and audio features, we propose a cascaded cross-attention mechanism within a transformer. Our approach achieved average F1 scores of 0.4652 and 0.3005 on the AU and EXPR tracks, respectively, and an average Concordance Correlation Coefficient (CCC) of 0.5077, outperforming the baseline performance on all tracks of the ABAW6 competition, and placed 5th, 6th, and 7th on the AU, EXPR, and VA tracks, respectively. The code used in the 6th ABAW competition is available at https://github.com/namho-96/ABAW2024. © 2024 IEEE.
dc.format.extent: 8
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention based Transformer
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/CVPRW63382.2024.00784
dc.identifier.scopusid: 2-s2.0-85206483361
dc.identifier.wosid: 001327781708005
dc.identifier.bibliographicCitation: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 7870-7877
dc.citation.title: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
dc.citation.startPage: 7870
dc.citation.endPage: 7877
dc.type.docType: Proceedings Paper
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.subject.keywordAuthor: ABAW
dc.subject.keywordAuthor: Cross-attention
dc.subject.keywordAuthor: Facial Analysis
dc.subject.keywordAuthor: Transformer
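The cascaded cross-attention fusion described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (that is in the linked repository); the single-head, projection-free formulation, token counts, and dimensions below are illustrative assumptions. Stage one lets visual tokens attend to audio tokens; stage two cascades by letting the fused tokens attend back to the visual tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value, d_k):
    # Scaled dot-product cross-attention: queries come from one
    # modality, keys/values from the other (no learned projections here).
    scores = query @ key_value.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ key_value

# Toy features: 4 visual tokens and 6 audio tokens, dimension 8.
rng = np.random.default_rng(0)
d = 8
visual = rng.standard_normal((4, d))
audio = rng.standard_normal((6, d))

# Stage 1: visual tokens attend to audio tokens.
fused = cross_attention(visual, audio, d)

# Stage 2 (the cascade): the fused tokens attend back to the visual
# tokens, refining the joint representation for a downstream head.
refined = cross_attention(fused, visual, d)

print(refined.shape)  # (4, 8)
```

In a full transformer block each `cross_attention` call would additionally use learned query/key/value projections, multiple heads, residual connections, and layer normalization; the cascade structure itself is what this sketch demonstrates.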
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > Department of Electronics and Electrical Engineering > 1. Journal Articles

