Grammatical Generalizations in Neural Language Models Trained on L2 Textbooks (L2 영어 교과서를 ‘학습’한 L2-신경망 언어 모델의 문법 일반화 양상)
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 구건우 | - |
| dc.contributor.author | 박명관 | - |
| dc.date.accessioned | 2023-04-27T12:40:54Z | - |
| dc.date.available | 2023-04-27T12:40:54Z | - |
| dc.date.issued | 2022-03 | - |
| dc.identifier.issn | 1226-3206 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/3495 | - |
| dc.description.abstract | Recent studies employing state-of-the-art neural network language models (NLMs) have reported human-like performance in ‘understanding’ various linguistic phenomena, particularly on the Benchmark of Linguistic Minimal Pairs (BLiMP), a challenge dataset of sentence pairs used to evaluate NLMs’ knowledge of major grammatical phenomena in English (Warstadt et al., 2020). Adopting this methodology, this paper assesses the level of linguistic knowledge acquired by L2-NLMs trained on English textbooks published in Korea (dubbed the K-English datasets) and compares it with the levels attained by English native speakers and by L1-NLMs. Assuming that an NLM is itself a language learner, we used BLiMP to evaluate the grammaticality-rating performance of L2-NLMs based on the Generative Pre-trained Transformer 2 (GPT-2) and Long Short-Term Memory (LSTM) architectures. This study demonstrates that the L2-NLMs attained a substantially lower level of grammatical generalization than both their L1 counterparts and English native speakers. The results imply that the K-English training datasets are not robust enough for L2-NLMs to make substantial grammatical generalizations. | - |
| dc.format.extent | 17 | - |
| dc.language | Korean (한국어) | - |
| dc.language.iso | KOR | - |
| dc.publisher | 현대문법학회 | - |
| dc.title | L2 영어 교과서를 ‘학습’한 L2-신경망 언어 모델의 문법 일반화 양상 | - |
| dc.title.alternative | Grammatical Generalizations in Neural Language Models Trained on L2 Textbooks | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.14342/smog.2022.113.121 | - |
| dc.identifier.bibliographicCitation | 현대문법연구, no. 113, pp. 121-137 | - |
| dc.citation.title | 현대문법연구 | - |
| dc.citation.number | 113 | - |
| dc.citation.startPage | 121 | - |
| dc.citation.endPage | 137 | - |
| dc.identifier.kciid | ART002828883 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | 언어학적 일반화 | - |
| dc.subject.keywordAuthor | 신경망 언어 모델 | - |
| dc.subject.keywordAuthor | LSTM | - |
| dc.subject.keywordAuthor | GPT-2 | - |
| dc.subject.keywordAuthor | L2-신경망 언어 모델 | - |
| dc.subject.keywordAuthor | linguistic generalization | - |
| dc.subject.keywordAuthor | neural language model | - |
| dc.subject.keywordAuthor | LSTM | - |
| dc.subject.keywordAuthor | GPT-2 | - |
| dc.subject.keywordAuthor | L2-language models | - |