Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus.

Is c-command Machine-learnable?

Full metadata record
dc.contributor.author: 신운섭
dc.contributor.author: 박명관
dc.contributor.author: 송상헌
dc.date.accessioned: 2023-04-27T18:40:38Z
dc.date.available: 2023-04-27T18:40:38Z
dc.date.issued: 2021-03
dc.identifier.issn: 1225-7141
dc.identifier.issn: 2671-6283
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/5257
dc.description.abstract: Many psycholinguistic studies have tested whether pronouns and polarity items elicit an additional processing cost when they are not c-commanded. Previous studies claim that the c-command constraint regulates the distribution of the relevant syntactic objects. As such, the syntactic effects of the c-command relation are strongly modulated by the type of licensing (e.g. quantificational binding) and by subjects' reading-comprehension patterns (e.g. linguistic illusions). The present study investigates the reading behavior of the language model BERT when syntactic processing of relational information (i.e. X c-commands Y) is required. Specifically, our two experiments contrasted BERT's comprehension of a c-commanding versus a non-c-commanding licensor for reflexive anaphora and negative polarity items. An analysis based on the information-theoretic measure of surprisal suggests that violations of the c-command constraint are unexpected under BERT's representations. We conclude that deep learning models like BERT can learn the syntactic c-command restriction, at least with respect to reflexive anaphors and negative polarity items. At the same time, BERT appears limited in its ability to apply compensatory pragmatic reasoning when a non-c-commanding licensor intrudes into the dependency structure.
dc.format.extent: 22
dc.language: English
dc.language.iso: ENG
dc.publisher: 대한언어학회
dc.title: Is c-command Machine-learnable?
dc.title.alternative: Is c-command Machine-learnable?
dc.type: Article
dc.publisher.location: Republic of Korea
dc.identifier.doi: 10.24303/lakdoi.2021.29.1.183
dc.identifier.bibliographicCitation: 언어학, v.29, no.1, pp. 183-204
dc.citation.title: 언어학
dc.citation.volume: 29
dc.citation.number: 1
dc.citation.startPage: 183
dc.citation.endPage: 204
dc.identifier.kciid: ART002707313
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: kci
dc.subject.keywordAuthor: c-command
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: BERT
dc.subject.keywordAuthor: surprisal
dc.subject.keywordAuthor: NPI
dc.subject.keywordAuthor: reflexive anaphor
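The abstract above compares BERT's expectations for words licensed by a c-commanding versus a non-c-commanding licensor using surprisal, i.e. the negative log probability a model assigns to a word. A minimal sketch of that measure, using hypothetical probabilities rather than actual BERT outputs (the values `p_licensed` and `p_unlicensed` are illustrative assumptions, not results from the paper):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits: -log2 of the probability a model assigns to a word."""
    return -math.log2(p)

# Hypothetical model probabilities for an NPI such as "ever":
p_licensed = 0.20    # the licensor (e.g. negation) c-commands the NPI
p_unlicensed = 0.02  # the licensor does not c-command the NPI

# A c-command violation should show up as higher surprisal (less expected).
print(round(surprisal(p_licensed), 3))    # 2.322
print(round(surprisal(p_unlicensed), 3))  # 5.644
```

In a BERT-based setup, these probabilities would typically come from a masked-language-model prediction at the position of the reflexive or NPI; the sketch only illustrates how the surprisal contrast is read off once such probabilities are obtained.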
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Humanities > Division of English Language & Literature > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Myung Kwan
College of Humanities (Division of English Language and Literature)
