Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Uncertainty-Aware Active Meta-Learning for Few-Shot Text Classification (Open Access)

Authors
Seo, Sanghyun; Dingeto, Hiskias; Kim, Juntae
Issue Date
Mar-2025
Publisher
MDPI
Keywords
meta-learning; uncertainty quantification; natural language understanding; natural language processing
Citation
Applied Sciences, v.15, no.7, pp. 1-19
Pages
19
Indexed
SCIE
SCOPUS
Journal Title
Applied Sciences
Volume
15
Number
7
Start Page
1
End Page
19
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/58235
DOI
10.3390/app15073702
ISSN
2076-3417
Abstract
Low-resource natural language understanding is one of the central challenges in language understanding. As natural language processing and natural language understanding take center stage in machine learning, this challenge needs solutions more than ever. This paper introduces Uncertainty-Aware Active Meta-Learning (UA-AML), a methodology designed to enhance the efficiency of models on low-resource natural language understanding tasks. The methodology is particularly significant when data availability is limited, a common situation in natural language processing. UA-AML selects high-quality tasks from the diverse range of task data available during learning: by quantifying the model's prediction uncertainty for the input data, it provides a loss function and learning strategy that adjust the influence of each input on the model's learning. This ensures that the most relevant and informative data are used during training, optimizing learning efficiency and model performance. We apply this meta-learning technique to low-resource natural language understanding tasks such as few-shot relation classification and few-shot sentiment classification. Experimental results on the Amazon Review Sentiment Classification (ARSC) and FewRel datasets demonstrate that the technique can build low-resource natural language understanding models with improved performance, providing a robust solution for tasks with limited data availability. This research extends meta-learning techniques beyond their traditional application in computer vision, demonstrating their potential in natural language processing.
Our findings suggest that this methodology can be effectively utilized in a wider range of application areas, opening new avenues for future research in low-resource natural language understanding.
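The abstract describes weighting each input's influence on learning by the model's prediction uncertainty. A minimal sketch of one such scheme is shown below; it is an illustration only, not the paper's actual formulation — the entropy-based uncertainty measure, the exponential weighting, the `temperature` parameter, and all function names are assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-example predictive entropy of softmax outputs.

    probs: array of shape (n_examples, n_classes), rows summing to 1.
    Higher entropy = more uncertain prediction.
    """
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def uncertainty_weighted_loss(losses, probs, temperature=1.0):
    """Combine per-example losses, down-weighting uncertain examples.

    losses: array of shape (n_examples,), per-example loss values.
    probs:  softmax outputs used to estimate uncertainty.
    Weights decay exponentially with entropy and are normalized to sum to 1.
    """
    entropy = predictive_entropy(probs)
    weights = np.exp(-entropy / temperature)
    weights = weights / weights.sum()
    return float(np.sum(weights * losses))
```

In this sketch, a confident prediction (e.g. `[0.9, 0.1]`) receives a larger weight than a near-uniform one (e.g. `[0.5, 0.5]`), so confidently predicted examples dominate the aggregated loss; the `temperature` knob controls how sharply uncertain examples are suppressed.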
Files in This Item
There are no files associated with this item.
Appears in
Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Jun Tae
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
