SAPBERT: Speaker-Aware Pretrained BERT for Emotion Recognition in Conversation (open access)
- Authors
- Lim, Seunguook; Kim, Jihie
- Issue Date
- Jan-2023
- Publisher
- MDPI
- Keywords
- natural language processing; emotion recognition in conversation; dialogue modeling; pre-training; hierarchical BERT
- Citation
- Algorithms, v.16, no.1, pp. 1-16
- Pages
- 16
- Indexed
- SCOPUS
ESCI
- Journal Title
- Algorithms
- Volume
- 16
- Number
- 1
- Start Page
- 1
- End Page
- 16
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/19194
- DOI
- 10.3390/a16010008
- ISSN
- 1999-4893
- Abstract
- Emotion recognition in conversation (ERC) is receiving increasing attention as interactions between humans and machines grow across services such as chatbots and virtual assistants. Because emotional expressions within a conversation can depend heavily on the contextual information of the participating speakers, it is important to capture both self-dependency and inter-speaker dynamics. In this study, we propose a new pre-trained model, SAPBERT, which learns to identify speakers in a conversation in order to capture speaker-dependent contexts and address the ERC task. SAPBERT is pre-trained with three training objectives: Speaker Classification (SC), Masked Utterance Regression (MUR), and Last Utterance Generation (LUG). We investigate whether our pre-trained speaker-aware model can be leveraged to capture speaker-dependent contexts for ERC tasks. Experiments show that our proposed approach outperforms baseline models, demonstrating the effectiveness and validity of our method.
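The abstract names three pre-training objectives (SC, MUR, LUG) trained jointly. A common way to combine multiple objectives is a weighted sum of their losses; the sketch below illustrates that pattern only. The function name, the weighting scheme, and the example loss values are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch: combining three pre-training objective losses
# (Speaker Classification, Masked Utterance Regression, Last Utterance
# Generation) into a single training loss via a weighted sum.
# NOTE: function name, weights, and values are hypothetical, not from SAPBERT.

def combined_pretraining_loss(sc_loss, mur_loss, lug_loss,
                              weights=(1.0, 1.0, 1.0)):
    """Return the weighted sum of the three objective losses."""
    w_sc, w_mur, w_lug = weights
    return w_sc * sc_loss + w_mur * mur_loss + w_lug * lug_loss

# Example with equal weighting of the three objectives
total = combined_pretraining_loss(0.62, 0.35, 1.10)
print(total)
```

In multi-task pre-training, the weights let one balance objectives whose losses live on different scales (e.g. a classification cross-entropy versus a regression error); equal weights are shown here only as a default.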
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.