Detailed Information


How are Korean Neural Language Models ‘surprised’ Layerwisely?

Full metadata record
dc.contributor.author: 최선주
dc.contributor.author: 박명관
dc.contributor.author: 김유희
dc.date.accessioned: 2023-04-27T15:40:23Z
dc.date.available: 2023-04-27T15:40:23Z
dc.date.issued: 2021-11
dc.identifier.issn: 1225-2522
dc.identifier.issn: 2508-4267
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/4213
dc.description.abstract: Since the introduction of BERT, recent work has shown success in detecting when a word is anomalous in its sentence context. Because the likelihood score is not an appropriate tool for identifying the exact property of a linguistic anomaly, Li et al. (2021) recently adopted Gaussian models for density estimation at the intermediate layers of pretrained language models. They find that different English pretrained language models employ separate mechanisms to recognize different types of linguistic anomaly. In keeping with Li et al.’s methodology, we probe whether Korean counterparts such as KoBERT and KR-BERT are sensitive to different levels of linguistic anomaly, just as English-based language models are. To investigate this issue, we construct an experiment with a suite of test data involving morphosyntactic, semantic, and commonsense anomalies in Korean and apply the two Korean-based models to the relevant test sentences. We find that KoBERT and KR-BERT show relatively higher surprisal gaps throughout the layers when the anomaly is morphosyntactic than when it is semantic. By contrast, commonsense anomaly exhibits no surprisal gap at any layer. We thus report that, like their English counterparts, KoBERT and KR-BERT use different mechanisms to track the different types of linguistic anomaly. (A minimal sketch of the layerwise density-estimation procedure follows the record below.)
dc.format.extent: 17
dc.language: English (영어)
dc.language.iso: ENG
dc.publisher: 한국언어과학회 (Korean Association of Language Sciences)
dc.title: How are Korean Neural Language Models ‘surprised’ Layerwisely?
dc.title.alternative: How are Korean Neural Language Models ‘surprised’ Layerwisely?
dc.type: Article
dc.publisher.location: Republic of Korea (대한민국)
dc.identifier.doi: 10.14384/kals.2021.28.4.301
dc.identifier.bibliographicCitation: 언어과학, v.28, no.4, pp. 301-317
dc.citation.title: 언어과학
dc.citation.volume: 28
dc.citation.number: 4
dc.citation.startPage: 301
dc.citation.endPage: 317
dc.identifier.kciid: ART002777986
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: kci
dc.subject.keywordAuthor: KR-BERT
dc.subject.keywordAuthor: KoBERT
dc.subject.keywordAuthor: linguistic anomaly
dc.subject.keywordAuthor: surprisal gap
dc.subject.keywordAuthor: layerwise
dc.subject.keywordAuthor: 한국어 신경망 언어모델 (Korean neural language model)
dc.subject.keywordAuthor: 언어학적 변칙 (linguistic anomaly)
dc.subject.keywordAuthor: ‘놀라움’ 차이 (‘surprisal’ gap)
dc.subject.keywordAuthor: 신경망 층별 분석 (layerwise analysis)
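The abstract above follows Li et al. (2021) in fitting Gaussian density models to hidden states at each layer and reading a token's distance from the fitted density as its 'surprisal'. Below is a minimal sketch of that procedure, not the authors' released code: it assumes Hugging Face transformers and scikit-learn, uses a placeholder checkpoint name in place of KoBERT/KR-BERT, and substitutes mean squared Mahalanobis distance (the Gaussian negative log density up to an additive constant) for the surprisal score.

```python
# Minimal sketch of Li et al. (2021)-style layerwise density estimation.
# Not the authors' released code; the checkpoint name, toy corpus, and
# scoring details are illustrative assumptions.
import numpy as np
import torch
from sklearn.covariance import EmpiricalCovariance
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "path/to/korean-bert"  # placeholder: substitute a KoBERT or KR-BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layer_states(sentence):
    """Return per-layer token vectors: a list of (n_tokens, hidden_dim) arrays."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.hidden_states: embedding output plus one tensor per encoder layer
    return [h[0].numpy() for h in out.hidden_states]

# 1) Fit one Gaussian per layer on token vectors from well-formed text.
#    Toy corpus for illustration; a real fit needs far more sentences.
reference_sentences = ["고양이가 물을 마신다.", "아이들이 공원에서 놀고 있다."]
buckets = None
for sent in reference_sentences:
    states = layer_states(sent)
    if buckets is None:
        buckets = [[] for _ in states]
    for i, h in enumerate(states):
        buckets[i].append(h)
gaussians = [EmpiricalCovariance().fit(np.vstack(b)) for b in buckets]

# 2) Score a sentence at a layer by its tokens' mean squared Mahalanobis
#    distance under that layer's Gaussian, used here as the surprisal.
def surprisal(sentence, layer):
    return float(gaussians[layer].mahalanobis(layer_states(sentence)[layer]).mean())

# 3) Surprisal gap at a layer: anomalous sentence minus well-formed control.
def surprisal_gap(anomalous, control, layer):
    return surprisal(anomalous, layer) - surprisal(control, layer)

# Example: a morphosyntactically anomalous sentence vs. its control
# (hypothetical pair; the paper uses a constructed Korean test suite).
print([round(surprisal_gap("고양이가 물을 마셨다 않는다.", "고양이가 물을 마시지 않는다.", i), 2)
       for i in range(len(gaussians))])
```

Sweeping surprisal_gap over every layer for minimal pairs of this kind yields the layerwise profiles the abstract describes, with morphosyntactic anomalies expected to open wider gaps than semantic ones and commonsense anomalies expected to open none.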
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Humanities > Division of English Language & Literature > 1. Journal Articles

Related Researcher

Park, Myung Kwan
College of Humanities (Division of English Language and Literature)
