Assessing the Structural Profiles of L2-textbook Dataset on Transformer LMs
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 이재민 | - |
| dc.contributor.author | 박명관 | - |
| dc.date.accessioned | 2024-08-08T10:30:47Z | - |
| dc.date.available | 2024-08-08T10:30:47Z | - |
| dc.date.issued | 2024-01 | - |
| dc.identifier.issn | 1975-8251 | - |
| dc.identifier.issn | 2508-4259 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21465 | - |
| dc.description.abstract | Transformer-based language models (TLMs) employing multi-head self-attention have led to substantial performance improvements across diverse domains of Natural Language Processing (NLP). While current TLMs have demonstrated impressive capabilities by training on datasets hundreds of times larger than the input available to children during language acquisition, BabyBERTa has achieved meaningful performance by leveraging developmentally plausible datasets of a size comparable to that input. This study provides a detailed evaluation of BabyBERTa's performance, aiming to gain insights into TLMs' language acquisition abilities and to explore the feasibility of utilizing a second-language (L2) textbook dataset. Our analysis indicates that a dataset encompassing sentences with varied structures can effectively help TLMs acquire grammatical knowledge. | - |
| dc.format.extent | 16 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | 한국중원언어학회 | - |
| dc.title | Assessing the Structural Profiles of L2-textbook Dataset on Transformer LMs | - |
| dc.title.alternative | Assessing the Structural Profiles of L2-textbook Dataset on Transformer LMs | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.17002/sil..70.20241.225 | - |
| dc.identifier.bibliographicCitation | 언어학 연구, no.70, pp 225 - 240 | - |
| dc.citation.title | 언어학 연구 | - |
| dc.citation.number | 70 | - |
| dc.citation.startPage | 225 | - |
| dc.citation.endPage | 240 | - |
| dc.identifier.kciid | ART003049277 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | transformer language models | - |
| dc.subject.keywordAuthor | language acquisition | - |
| dc.subject.keywordAuthor | grammatical knowledge | - |
| dc.subject.keywordAuthor | textbook dataset | - |
| dc.subject.keywordAuthor | natural language processing | - |
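
The abstract describes evaluating a BabyBERTa-style masked language model for grammatical knowledge. As an illustration only (not code from the article), the sketch below shows one common way such probing is done: scoring each sentence of a minimal pair by pseudo-log-likelihood and checking that the grammatical variant scores higher. The checkpoint name (`roberta-base`) and the example sentence pair are assumptions chosen for illustration, not details taken from the paper.

```python
# Minimal-pair probing sketch, assuming a Hugging Face masked LM checkpoint
# as a stand-in for a BabyBERTa-style model. Illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "roberta-base"  # assumption: placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum the log-probability of each token when it is masked out in turn."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special tokens at the start and end of the sequence.
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone().unsqueeze(0)
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

# Illustrative minimal pair (subject-verb agreement).
grammatical = "The dogs near the house are barking."
ungrammatical = "The dogs near the house is barking."
print(pseudo_log_likelihood(grammatical) > pseudo_log_likelihood(ungrammatical))
```

This mirrors the minimal-pair style of evaluation commonly used with BabyBERTa-sized models (e.g., Zorro/BLiMP-type benchmarks), though the paper's exact evaluation setup is not reproduced here.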