
Probing Good-Enough Processing in Large Language Models with a Paraphrasing Task

Full metadata record
dc.contributor.author: Jonghyun Lee
dc.contributor.author: Jeong-Ah Shin
dc.date.accessioned: 2026-02-27T18:00:33Z
dc.date.available: 2026-02-27T18:00:33Z
dc.date.issued: 2026-01
dc.identifier.issn: 1598-1398
dc.identifier.issn: 2586-7474
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/63821
dc.description.abstract: This study investigates whether large language models (LLMs) exhibit human-like ‘good-enough’ processing patterns in syntactic comprehension or demonstrate mechanical accuracy. Previous research using forced-choice question-answering paradigms revealed that LLMs display incomplete syntactic reanalysis similar to humans when processing garden-path sentences. However, concerns arose that these patterns might reflect methodological artifacts rather than genuine processing characteristics, as direct questioning could bias models toward initial misinterpretations. To address this limitation, we employed a paraphrasing task that requires comprehensive sentence reformulation rather than binary responses, following Patson et al. (2009). We tested GPT-3.5 and GPT-4 on 24 garden-path sentences containing Optionally Transitive (OT) and Reflexive Absolute Transitive (RAT) verbs. Results demonstrate that good-enough processing patterns persist across both paradigms, with LLMs continuing to exhibit partial reanalysis in garden-path conditions even when generating full paraphrases. This confirms that previously observed error patterns represent genuine syntactic processing characteristics rather than experimental artifacts. Notably, GPT-4 showed improved performance in the paraphrasing task compared to forced-choice experiments, suggesting task-dependent variation in processing depth. Both models exhibited human-like incomplete processing despite their substantial computational resources, indicating that their pattern-matching mechanisms favor processing shortcuts over complete syntactic interpretation. These findings reveal that LLMs demonstrate good-enough processing similar to humans, with performance varying systematically across task formats.
dc.format.extent: 15
dc.language: English
dc.language.iso: ENG
dc.publisher: 한국영어학회
dc.title: Probing Good-Enough Processing in Large Language Models with a Paraphrasing Task
dc.type: Article
dc.publisher.location: Republic of Korea
dc.identifier.doi: 10.15738/kjell.26..202601.127
dc.identifier.scopusid: 2-s2.0-105030446730
dc.identifier.bibliographicCitation: 영어학, v.26, pp. 127-141
dc.citation.title: 영어학
dc.citation.volume: 26
dc.citation.startPage: 127
dc.citation.endPage: 141
dc.type.docType: Y
dc.identifier.kciid: ART003301434
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.description.journalRegisteredClass: kci
dc.subject.keywordAuthor: large language models
dc.subject.keywordAuthor: garden-path sentences
dc.subject.keywordAuthor: good-enough processing
dc.subject.keywordAuthor: syntactic processing
dc.subject.keywordAuthor: paraphrasing task
dc.subject.keywordAuthor: ChatGPT
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Humanities > Division of English Language & Literature > 1. Journal Articles

Related Researcher

Shin, Jeong Ah
College of Humanities (Division of English Language and Literature)
