An assessment of processing negative polarity items by an L2 neural language model trained on English textbooks (부정극어 허가 처리 평가: 영어 교과서를 학습한 L2 신경망 언어 모델을 활용하여)
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 구건우 | - |
| dc.contributor.author | 이재민 | - |
| dc.contributor.author | 박명관 | - |
| dc.date.accessioned | 2023-04-27T10:40:46Z | - |
| dc.date.available | 2023-04-27T10:40:46Z | - |
| dc.date.issued | 2022-07 | - |
| dc.identifier.issn | 1598-1886 | - |
| dc.identifier.issn | 2713-6817 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/2872 | - |
| dc.description.abstract | The GRNN (Gulordava et al. 2018) neural language model (NLM) can be viewed as a language learner in that it is trained on sentences and, like humans, ‘predicts’ the next word given a sequence of words. Recent studies employing NLMs have reported human-like performance in ‘understanding’ various linguistic phenomena. Building on these studies, this paper assesses the level of linguistic knowledge that an NLM can acquire from a collection of English textbooks published in Korea over the last two decades. We applied a psycholinguistic experimental method to compare the L2-GRNN with the L1-GRNN, focusing on the learning and processing of negative polarity item (NPI) licensing in English. The L1-GRNN, trained on a dataset from Wikipedia, has been reported to attain the linguistic knowledge that native speakers of English have; the L2-GRNN was trained on learning materials for Korean learners of English. Analysis of the data extracted from the two models showed that the overall performance of the L2-GRNN in NPI processing fell far behind that of the L1-GRNN. In conclusion, this study demonstrates that the L2-GRNN attained a far lower level of grammatical knowledge of NPI licensing than its L1 counterpart. This result implies that the L2 dataset of English textbooks published in Korea is not sufficient for the L2-GRNN to acquire the substantial grammatical knowledge that the L1-GRNN, as well as native speakers, has. | - |
| dc.format.extent | 23 | - |
| dc.language | Korean (한국어) | - |
| dc.language.iso | KOR | - |
| dc.publisher | Sogang University Institute for Language and Information (서강대학교 언어정보연구소) | - |
| dc.title | 부정극어 허가 처리 평가: 영어 교과서를 학습한 L2 신경망 언어 모델을 활용하여 | - |
| dc.title.alternative | An assessment of processing negative polarity items by an L2 neural language model trained on English textbooks | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea (대한민국) | - |
| dc.identifier.doi | 10.29211/soli.2022.46..004 | - |
| dc.identifier.bibliographicCitation | 언어와 정보 사회, v.46, pp. 103-125 | - |
| dc.citation.title | 언어와 정보 사회 | - |
| dc.citation.volume | 46 | - |
| dc.citation.startPage | 103 | - |
| dc.citation.endPage | 125 | - |
| dc.identifier.kciid | ART002864758 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | neural language model | - |
| dc.subject.keywordAuthor | NPI licensing | - |
| dc.subject.keywordAuthor | training data | - |
| dc.subject.keywordAuthor | L2 textbook assessment | - |
| dc.subject.keywordAuthor | 신경망 언어 모델 (neural language model) | - |
| dc.subject.keywordAuthor | 부정극어 허가 (NPI licensing) | - |
| dc.subject.keywordAuthor | 훈련 데이터 (training data) | - |
| dc.subject.keywordAuthor | L2 교재 평가 (L2 textbook assessment) | - |
