Good-enough but more error-prone: Garden-path processing in GPT models
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jonghyun Lee | - |
| dc.contributor.author | Jeong-Ah Shin | - |
| dc.date.accessioned | 2026-01-30T09:00:17Z | - |
| dc.date.available | 2026-01-30T09:00:17Z | - |
| dc.date.issued | 2025-12 | - |
| dc.identifier.issn | 1229-1374 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63551 | - |
| dc.description.abstract | This research explores the syntactic processing of Large Language Models (LLMs), specifically GPT-3.5 and GPT-4, by comparing them to human processors, focusing on garden-path sentences. These structures are challenging even for proficient human processors, often causing misinterpretations that persist despite reanalysis, revealing the ‘good-enough’ nature of human syntactic processing. This study aims to determine whether LLMs exhibit similarly ‘good-enough’ syntactic processing and whether more advanced models process language in a more human-like way. In a series of experiments, we examined how the models handle garden-path sentences such as “While the man hunted the deer ran into the woods” through a comprehension-question task. A key focus was whether misinterpretations of the target phrase (“hunted the deer”) erroneously affected the global interpretation of the sentence. Results showed that LLMs display patterns similar to humans’, including lingering misinterpretations and the ability to exploit linguistic cues such as plausibility, phrase length, and verb type. This suggests that LLMs mimic human ‘good-enough’ syntactic processing through probabilistic next-word prediction, including making human-like errors. However, the LLMs were also more vulnerable to garden-path structures, showing a higher error rate than humans, likely due to inherent features of their processing mechanisms. | - |
| dc.format.extent | 41 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute for the Study of Language and Information, Kyung Hee University | - |
| dc.title | Good-enough but more error-prone: Garden-path processing in GPT models | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.17250/khisli.42.3.202512.003 | - |
| dc.identifier.scopusid | 2-s2.0-105028481071 | - |
| dc.identifier.wosid | 001654965400003 | - |
| dc.identifier.bibliographicCitation | Linguistic Research (언어연구), v.42, no.3, pp. 539-579 | - |
| dc.citation.title | Linguistic Research (언어연구) | - |
| dc.citation.volume | 42 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 539 | - |
| dc.citation.endPage | 579 | - |
| dc.type.docType | Article | - |
| dc.identifier.kciid | ART003281221 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | esci | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.relation.journalResearchArea | Linguistics | - |
| dc.relation.journalWebOfScienceCategory | Language & Linguistics | - |
| dc.subject.keywordAuthor | ChatGPT | - |
| dc.subject.keywordAuthor | artificial intelligence | - |
| dc.subject.keywordAuthor | large language models | - |
| dc.subject.keywordAuthor | syntactic ambiguity | - |
| dc.subject.keywordAuthor | good-enough processing | - |
| dc.subject.keywordAuthor | garden-path sentences | - |
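The abstract describes probing models with comprehension questions that target the lingering misparse in a garden-path sentence (e.g. asking whether "the man hunted the deer" after "While the man hunted the deer ran into the woods"). A minimal sketch of how such a stimulus-question pair could be constructed is below; the function name, prompt wording, and decomposition into clause parts are illustrative assumptions, not the authors' actual materials.

```python
def build_trial(subordinate: str, ambiguous_np: str, main_clause: str):
    """Build a garden-path stimulus and a comprehension question probing
    the initial (incorrect) parse, in which the ambiguous noun phrase is
    taken as the object of the subordinate verb.

    Hypothetical helper for illustration only.
    """
    # The ambiguous NP is actually the subject of the main clause, but a
    # reader (or model) may first attach it as the object of "hunted".
    sentence = f"{subordinate} {ambiguous_np} {main_clause}."
    # A "yes" answer to this question signals a lingering misinterpretation:
    # on the correct parse, the man hunted, but not necessarily the deer.
    question = f"Did the man hunt {ambiguous_np}?"
    return sentence, question

sentence, question = build_trial(
    "While the man hunted", "the deer", "ran into the woods"
)
print(sentence)  # While the man hunted the deer ran into the woods.
print(question)  # Did the man hunt the deer?
```

In the study's design as summarized above, the model's answer to the question would then be scored against the correct, reanalyzed parse; a systematic "yes" bias on such items is the ‘good-enough’ signature the paper reports.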
