Detailed Information


Evaluating L2 Training Methods in Neural Language Models

Full metadata record
dc.contributor.author: 이재민
dc.contributor.author: 신정아 (Shin, Jeong Ah)
dc.date.accessioned: 2025-01-15T06:30:17Z
dc.date.available: 2025-01-15T06:30:17Z
dc.date.issued: 2024-12
dc.identifier.issn: 0254-4474
dc.identifier.issn: 2586-7113
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/56755
dc.description.abstract: Recent advancements in language models (LMs) have significantly improved language processing capabilities; however, these models remain less efficient than human learning, especially when trained on developmentally plausible data volumes similar to those encountered by children (Warstadt & Bowman, 2022; Linzen, 2020). The inefficiency is even more pronounced in second language (L2) acquisition contexts, where cross-linguistic transfer is a key phenomenon (Papadimitriou & Jurafsky, 2020; Yadavalli et al., 2023). This study evaluates L2 training methods in neural language models by examining mutual L1-L2 influences during learning with developmentally plausible data volumes. We propose two approaches to mitigate catastrophic forgetting: the One-Stage Training (OST) method, which integrates L1 and L2 learning into a single stage, and the One-Stage Mixed Training (OSMT) method, which refines OST by incorporating L1 data into the L2 stage for a more realistic simulation of bilingual learning. Through syntactic evaluations conducted continuously throughout training, we analyzed how L1 performance changes during L2 acquisition and how cross-linguistic transfer emerges in Korean and English. The results indicate that OST and OSMT effectively mitigated catastrophic forgetting and supported more stable learning compared to the conventional Two-Stage Training method. OSMT achieved superior integration of L1 and L2 structures while revealing negative transfer effects from Korean (L1) to English (L2). These findings provide valuable insights into both neural model training and human-like L2 acquisition processes.
dc.format.extent: 23
dc.language: English
dc.language.iso: ENG
dc.publisher: 서울대학교 언어교육원 (Seoul National University Language Education Institute)
dc.title: Evaluating L2 Training Methods in Neural Language Models
dc.type: Article
dc.publisher.location: Republic of Korea
dc.identifier.doi: 10.30961/lr.2024.60.3.323
dc.identifier.bibliographicCitation: 어학연구 (Language Research), v.60, no.3, pp. 323-345
dc.citation.title: 어학연구 (Language Research)
dc.citation.volume: 60
dc.citation.number: 3
dc.citation.startPage: 323
dc.citation.endPage: 345
dc.identifier.kciid: ART003158471
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: kci
dc.subject.keywordAuthor: developmentally plausible data
dc.subject.keywordAuthor: cross-linguistic transfer
dc.subject.keywordAuthor: second language acquisition
dc.subject.keywordAuthor: neural language models
dc.subject.keywordAuthor: L2 language models
dc.subject.keywordAuthor: catastrophic forgetting
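The abstract contrasts three training regimes: conventional Two-Stage Training (all L1 data, then all L2 data), One-Stage Training (OST, a single interleaved stage), and One-Stage Mixed Training (OSMT, an L2 stage that retains some L1 data). A minimal sketch of how these data schedules might differ is given below; the function names and the replay ratio are illustrative assumptions, not details taken from the paper.

```python
import random

def two_stage_schedule(l1_data, l2_data):
    """Two-Stage Training: every L1 example precedes every L2 example.
    The L2 stage never revisits L1, which risks catastrophic forgetting."""
    return list(l1_data) + list(l2_data)

def ost_schedule(l1_data, l2_data, seed=0):
    """One-Stage Training (OST): L1 and L2 examples are interleaved
    into a single shuffled training stage."""
    combined = list(l1_data) + list(l2_data)
    random.Random(seed).shuffle(combined)
    return combined

def osmt_schedule(l1_data, l2_data, l1_replay_ratio=0.25, seed=0):
    """One-Stage Mixed Training (OSMT): an L1 stage followed by an L2
    stage that mixes in a fraction of the L1 data, keeping L1 structures
    active. The replay ratio here is a hypothetical knob for illustration."""
    rng = random.Random(seed)
    replay = rng.sample(list(l1_data), max(1, int(len(l1_data) * l1_replay_ratio)))
    l2_stage = list(l2_data) + replay
    rng.shuffle(l2_stage)
    return list(l1_data) + l2_stage
```

Under this sketch, the key difference is whether L1 examples can appear after L2 training begins: never in the two-stage schedule, always in OST, and partially (via replay) in OSMT.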
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Humanities > Division of English Language & Literature > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Shin, Jeong Ah
College of Humanities (Division of English Language and Literature)
