Layer-wise Semantic Role Labeling with the KR-BERT Language Model
- Authors
- 서혜진; 김유희; 박명관
- Issue Date
- Sep-2022
- Publisher
- 한국언어학회 (The Linguistic Society of Korea)
- Keywords
- semantic role labeling; Korean neural language model; performance assessment; layer-wise analysis; heatmap analysis
- Citation
- 언어 (Korean Journal of Linguistics), v.47, no.3, pp. 445-466
- Pages
- 22
- Indexed
- KCI
- Journal Title
- 언어 (Korean Journal of Linguistics)
- Volume
- 47
- Number
- 3
- Start Page
- 445
- End Page
- 466
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/2598
- DOI
- 10.18855/lisoko.2022.47.3.003
- ISSN
- 1229-4039 (print); 2734-0481 (online)
- Abstract
- The purpose of this study is to assess the performance of semantic role labeling (SRL) as predicted by Korean neural language models (NLMs, i.e., Transformer-based pre-trained models). First, the study built two models: the KR-BERT-BiLSTM-CRF model and the KR-BERT-Verb Position Feature (VPF)-BiLSTM-CRF model. Testing these two models shows that the KR-BERT-VPF-BiLSTM-CRF model (67.3%) outperformed the KR-BERT-BiLSTM-CRF model (66.4%). In addition, this study examined which hidden layer improves the performance of NLMs during training. As expected, the NLM trained on the last hidden layer performed better than alternatives such as the second-to-last hidden layer and the concatenation of the last four hidden layers. Thus, this study lends support to the general observation that an NLM should be trained on the last hidden layer to reach the highest performance. This study is meaningful because it is the first attempt to investigate which hidden layer is useful for training NLMs on Korean SRL tasks.
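- Note
- The abstract describes two architectural choices that a short sketch can make concrete: a pre-trained encoder feeding a BiLSTM-CRF tagger, and the selection of which encoder hidden layer(s) serve as input features (last, second-to-last, or the last four concatenated). The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the Hugging Face checkpoint id, the pytorch-crf dependency, the tag inventory, and all dimensions and hyperparameters are placeholders.

```python
# Minimal sketch (not the paper's code) of a BERT-BiLSTM-CRF tagger with a
# switchable choice of encoder hidden layer(s) as features.
# Assumptions: the checkpoint id below, the pytorch-crf package, and all sizes.
import torch
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf


class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags, layer_choice="last",
                 model_name="snunlp/KR-BERT-char16424"):  # assumed checkpoint id
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name,
                                                 output_hidden_states=True)
        hidden = self.encoder.config.hidden_size
        # "concat4" concatenates the last four hidden layers feature-wise.
        in_dim = hidden * 4 if layer_choice == "concat4" else hidden
        self.layer_choice = layer_choice
        self.bilstm = nn.LSTM(in_dim, hidden // 2, batch_first=True,
                              bidirectional=True)
        self.emit = nn.Linear(hidden, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def _features(self, input_ids, attention_mask):
        # hidden_states is a tuple: the embedding layer plus one tensor per
        # Transformer layer, each of shape (batch, seq_len, hidden).
        hs = self.encoder(input_ids=input_ids,
                          attention_mask=attention_mask).hidden_states
        if self.layer_choice == "last":
            return hs[-1]
        if self.layer_choice == "second_to_last":
            return hs[-2]
        if self.layer_choice == "concat4":
            return torch.cat(hs[-4:], dim=-1)
        raise ValueError(self.layer_choice)

    def forward(self, input_ids, attention_mask, tags=None):
        x, _ = self.bilstm(self._features(input_ids, attention_mask))
        emissions = self.emit(x)
        mask = attention_mask.bool()  # padding positions are excluded by the CRF
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi-decoded best tag path per sentence.
        return self.crf.decode(emissions, mask=mask)
```

- Under the same assumptions, the VPF variant mentioned in the abstract would presumably concatenate a verb-position indicator (e.g., a small learned embedding marking the predicate token) to the encoder features before the BiLSTM; the paper itself, not this sketch, is the authority on that design.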
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of Humanities > Division of English Language & Literature > 1. Journal Articles
