Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Good-enough but more error-prone: Garden-path processing in GPT models

Full metadata record
DC Field                                  Value
dc.contributor.author                     Jonghyun Lee
dc.contributor.author                     Jeong-Ah Shin
dc.date.accessioned                       2026-01-30T09:00:17Z
dc.date.available                         2026-01-30T09:00:17Z
dc.date.issued                            2025-12
dc.identifier.issn                        1229-1374
dc.identifier.uri                         https://scholarworks.dongguk.edu/handle/sw.dongguk/63551
dc.description.abstract                   This research explores the syntactic processing of Large Language Models (LLMs), specifically GPT-3.5 and GPT-4, by comparing them to human processors, focusing on garden-path sentences. These structures are challenging even for proficient human processors, often causing misinterpretations that persist despite reanalysis, revealing the ‘good-enough’ nature of human syntactic processing. This study aims to determine whether LLMs exhibit ‘good-enough’ syntactic processing similar to that of humans and whether more advanced models process sentences in a more human-like way. In a series of experiments, we examined how the models handle garden-path sentences such as “While the man hunted the deer ran into the woods” through a comprehension-question task. A key focus was whether misinterpretations of the target phrases (“hunted the deer”) erroneously affected the global interpretation of the sentence. Results showed that LLMs display patterns similar to humans’, including lingering misinterpretations and the ability to exploit linguistic cues such as plausibility, phrase length, and verb type. This suggests that LLMs mimic human ‘good-enough’ syntactic processing through probabilistic next-word prediction, including making human-like errors. However, the LLMs also proved more vulnerable to garden-path structures, producing a higher rate of errors than humans, likely due to inherent features of their processing mechanisms.
dc.format.extent                          41
dc.language                               English
dc.language.iso                           ENG
dc.publisher                              경희대학교 언어정보연구소
dc.title                                  Good-enough but more error-prone: Garden-path processing in GPT models
dc.type                                   Article
dc.publisher.location                     Republic of Korea
dc.identifier.doi                         10.17250/khisli.42.3.202512.003
dc.identifier.scopusid                    2-s2.0-105028481071
dc.identifier.wosid                       001654965400003
dc.identifier.bibliographicCitation       언어연구, v.42, no.3, pp. 539-579
dc.citation.title                         언어연구
dc.citation.volume                        42
dc.citation.number                        3
dc.citation.startPage                     539
dc.citation.endPage                       579
dc.type.docType                           Article
dc.identifier.kciid                       ART003281221
dc.description.isOpenAccess               N
dc.description.journalRegisteredClass     scopus
dc.description.journalRegisteredClass     esci
dc.description.journalRegisteredClass     kci
dc.relation.journalResearchArea           Linguistics
dc.relation.journalWebOfScienceCategory   Language & Linguistics
dc.subject.keywordAuthor                  ChatGPT
dc.subject.keywordAuthor                  artificial intelligence
dc.subject.keywordAuthor                  large language models
dc.subject.keywordAuthor                  syntactic ambiguity
dc.subject.keywordAuthor                  good-enough processing
dc.subject.keywordAuthor                  garden-path sentences
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Humanities > Division of English Language & Literature > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Shin, Jeong Ah
College of Humanities (Division of English Language and Literature)
