Cited 2 times
(AL)BERT Down the Garden Path: Psycholinguistic Experiments for Pre-trained Language Models
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Jonghyun | - |
| dc.contributor.author | Shin, Jeong-Ah | - |
| dc.contributor.author | Park, Myung-Kwan | - |
| dc.date.accessioned | 2023-04-27T13:41:11Z | - |
| dc.date.available | 2023-04-27T13:41:11Z | - |
| dc.date.issued | 2022-09 | - |
| dc.identifier.issn | 1598-1398 | - |
| dc.identifier.issn | 2586-7474 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/3826 | - |
| dc.description.abstract | This study compared the syntactic capabilities of several neural language models (LMs), including Transformers (BERT / ALBERT) and LSTM, and investigated whether they exhibit human-like syntactic representations through a targeted evaluation approach, a method for evaluating the syntactic processing ability of LMs using sentences designed for psycholinguistic experiments. By employing garden-path structures with several linguistic manipulations, it was assessed whether LMs detect temporary ungrammaticality and use linguistic cues such as plausibility, transitivity, and morphology. The results showed that both Transformers and LSTM exploited several linguistic cues for incremental syntactic processing, comparable to human syntactic processing. They differed, however, in whether and how they used each linguistic cue. Overall, Transformers had a more human-like syntactic representation than LSTM, given their higher sensitivity to plausibility and their ability to retain information from previous words. Meanwhile, the number of parameters did not seem to undermine the performance of LMs, contrary to what was predicted in previous studies. Through these findings, this research seeks to contribute to a greater understanding of the syntactic processing of neural language models as well as of human language processing. © 2022 KASELL All rights reserved. | - |
| dc.format.extent | 18 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | 한국영어학회 (Korean Association for the Study of English Language and Linguistics, KASELL) | - |
| dc.title | (AL)BERT Down the Garden Path: Psycholinguistic Experiments for Pre-trained Language Models | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.15738/kjell.22..202210.1033 | - |
| dc.identifier.scopusid | 2-s2.0-85139740453 | - |
| dc.identifier.bibliographicCitation | 영어학 (Korean Journal of English Language and Linguistics), v.22, pp. 1033-1050 | - |
| dc.citation.title | 영어학 (Korean Journal of English Language and Linguistics) | - |
| dc.citation.volume | 22 | - |
| dc.citation.startPage | 1033 | - |
| dc.citation.endPage | 1050 | - |
| dc.type.docType | Article | - |
| dc.identifier.kciid | ART002883032 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | garden-path structure | - |
| dc.subject.keywordAuthor | natural language processing | - |
| dc.subject.keywordAuthor | psycholinguistics | - |
| dc.subject.keywordAuthor | targeted evaluation approach | - |
| dc.subject.keywordAuthor | transformers | - |