Can an L2-neural LM Generalize Filler-gap Dependency?
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 최선주 | - |
| dc.contributor.author | 윤영도 | - |
| dc.contributor.author | 박명관 | - |
| dc.date.accessioned | 2023-04-27T08:40:49Z | - |
| dc.date.available | 2023-04-27T08:40:49Z | - |
| dc.date.issued | 2022-11 | - |
| dc.identifier.issn | 1225-4770 | - |
| dc.identifier.issn | 2671-6151 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/2250 | - |
| dc.description.abstract | Recent studies have shown that recurrent neural language models (LMs) can understand sentences involving filler-gap dependencies (Chowdhury & Zamparelli, 2018; Wilcox et al., 2018, 2019). However, their behavior does not encode the underlying constraints that govern filler-gap acceptability. Significant questions therefore remain about the extent to which LMs acquire specific linguistic constructions and whether these models recognize an abstract property of syntax in their representations. In this paper, following the lead of Bhattacharya and van Schijndel (2020), we further test whether an L2 neural LM can learn the abstract syntactic constraints that have been claimed to govern the behavior of filler-gap constructions. To test this, we train an L2 neural LM on an L2 corpus of English textbooks published in Korea over the last two decades, and then probe the representational overlap between disparate filler-gap constructions using the syntactic priming paradigm. Unlike previous studies of L1 neural LMs, we do not find sufficient evidence that the L2 neural LM learns a general representation of the existence of filler-gap dependencies or of their shared underlying constraints. | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | 한국현대언어학회 | - |
| dc.title | Can an L2-neural LM Generalize Filler-gap Dependency? | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.18627/jslg.38.3.202211.323 | - |
| dc.identifier.bibliographicCitation | 언어연구, v.38, no.3, pp. 323-337 | - |
| dc.citation.title | 언어연구 | - |
| dc.citation.volume | 38 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 323 | - |
| dc.citation.endPage | 337 | - |
| dc.identifier.kciid | ART002898441 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | filler-gap dependency | - |
| dc.subject.keywordAuthor | neural language model | - |
| dc.subject.keywordAuthor | syntactic priming | - |
| dc.subject.keywordAuthor | adaptation | - |
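
The adaptation-based priming test described in the abstract can be pictured with a minimal sketch. The following is illustrative only, not the authors' code: it assumes the Hugging Face `transformers` library and a generic GPT-2 checkpoint (the paper itself uses a recurrent LM trained on an L2 textbook corpus), and the prime and target sentences are hypothetical placeholders. In the spirit of the adaptation-as-priming paradigm (Prasad et al., 2019), it measures whether briefly adapting the LM to one filler-gap construction lowers its surprisal on a structurally different filler-gap construction.

```python
# Illustrative sketch of the adaptation-based syntactic priming measure:
# adapt an LM to "prime" sentences from one filler-gap construction, then
# check whether surprisal drops on "target" sentences from a different one.
# Assumes Hugging Face `transformers` and GPT-2; sentences are placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)


def surprisal(lm, sentence):
    """Mean per-token surprisal (negative log-likelihood) of a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        return lm(ids, labels=ids).loss.item()


def adapt(lm, primes, lr=1e-5, epochs=1):
    """Fine-tune ('adapt') the LM on the prime sentences, one at a time."""
    optimizer = torch.optim.Adam(lm.parameters(), lr=lr)
    lm.train()
    for _ in range(epochs):
        for sent in primes:
            ids = tokenizer(sent, return_tensors="pt").input_ids.to(device)
            loss = lm(ids, labels=ids).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    lm.eval()
    return lm


# Primes: one filler-gap construction (embedded wh-questions).
primes = ["I wonder what the guest bought at the market."]
# Targets: a different filler-gap construction (object relative clauses).
targets = ["The book that the teacher recommended was expensive."]

before = [surprisal(model, t) for t in targets]
model = adapt(model, primes)
after = [surprisal(model, t) for t in targets]

# A reliable drop (positive effect) in target surprisal after adapting to a
# *different* construction would suggest a shared filler-gap representation.
for t, b, a in zip(targets, before, after):
    print(f"{t}\n  before: {b:.3f}  after: {a:.3f}  adaptation effect: {b - a:.3f}")
```

Under this setup, a consistent positive adaptation effect across disparate constructions is the signature of a shared, abstract filler-gap representation; the paper reports not finding sufficient evidence of such an effect for the L2-trained LM.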