Detailed Information


Exploring a New Architecture for Efficient Parameter Fine-Tuning in SLoRA Multitasking Scenarios

Authors
Shi, Ce; Jung, Jin-Woo
Issue Date
Mar-2026
Publisher
MDPI
Keywords
SLoRA; PEFT; multi-task scenarios; catastrophic forgetting; knowledge fragmentation
Citation
Applied Sciences, v.16, no.5, pp 1 - 27
Pages
27
Indexed
SCIE
SCOPUS
Journal Title
Applied Sciences
Volume
16
Number
5
Start Page
1
End Page
27
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/64037
DOI
10.3390/app16052174
ISSN
2076-3417
Abstract
This paper proposes SLoRA, an enhanced LoRA (Low-Rank Adaptation) mixture-of-experts (MoE) architecture aimed at the key problem of parameter-efficient fine-tuning in multitask scenarios. As the parameter counts of vision-language models grow, traditional full fine-tuning becomes prohibitively expensive, while LoRA, a popular parameter-efficient fine-tuning (PEFT) method, shows limited adaptability in multitask settings and struggles to capture complex task patterns. Existing work that integrates MoE mechanisms into LoRA additionally suffers from catastrophic forgetting and knowledge fragmentation. SLoRA addresses these challenges in two ways. First, it applies orthogonal-constraint optimization, initializing the solution space under constraints to reduce disturbance to existing knowledge and thereby alleviate catastrophic forgetting (old-task accuracy retention reaches 92.4%, 16.1% higher than LoRA). Second, it adopts an optimized MoE structure combining general experts, which retain pre-trained knowledge, with task-specific experts, which adapt to individual tasks via dynamic routing, to improve multitask adaptability. Experimental results show that on commonsense reasoning tasks, SLoRA's accuracy on the WSC dataset is 9.0% higher than LoRA and 3.7% higher than AdaLoRA, and its F1 score on the CommonsenseQA dataset is 7.7% higher than LoRA and 2.9% higher than AdaLoRA; on multimodal tasks, its average score is up to 15.3% higher than LoRA, demonstrating significant advantages over existing methods.
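The abstract's two core ideas can be illustrated with a minimal sketch: a frozen base weight augmented by one always-active "general" LoRA expert plus several task-specific LoRA experts weighted by a softmax router, with down-projections initialized from orthonormal bases (one simple way to realize an orthogonality constraint). This is not the paper's implementation; the class name, shapes, and routing scheme are illustrative assumptions.

```python
import numpy as np

def orthonormal_init(rows, cols, rng):
    # QR decomposition of a Gaussian matrix yields orthonormal columns;
    # using such bases for the LoRA down-projections is one way to keep
    # expert subspaces from overlapping (an orthogonality constraint).
    q, _ = np.linalg.qr(rng.standard_normal((rows, cols)))
    return q

class SLoRALayerSketch:
    """Hypothetical LoRA-MoE layer: frozen base weight W, one general
    LoRA expert (always active, retains pre-trained behavior), and K
    task-specific LoRA experts mixed by a softmax router."""

    def __init__(self, d_in, d_out, rank=8, num_experts=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02   # frozen base weight
        # General expert: orthonormal down-projection, zero up-projection,
        # so the layer exactly reproduces the base model at initialization.
        self.A_g = orthonormal_init(d_in, rank, rng)
        self.B_g = np.zeros((d_out, rank))
        # Task-specific experts, each with its own orthonormal subspace.
        self.A = [orthonormal_init(d_in, rank, rng) for _ in range(num_experts)]
        self.B = [np.zeros((d_out, rank)) for _ in range(num_experts)]
        self.router = rng.standard_normal((num_experts, d_in)) * 0.02

    def forward(self, x):
        # x: a single input vector of shape (d_in,), for simplicity.
        logits = self.router @ x
        gate = np.exp(logits - logits.max())
        gate /= gate.sum()                                   # softmax routing weights
        h = self.W @ x + self.B_g @ (self.A_g.T @ x)         # base + general expert
        for k, g in enumerate(gate):
            h += g * (self.B[k] @ (self.A[k].T @ x))         # weighted task experts
        return h
```

Because every up-projection `B` starts at zero (standard LoRA practice), the layer's output at initialization is identical to the frozen base layer's; training then moves only the low-rank expert factors, which is what keeps the adaptation parameter-efficient.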
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles



Related Researcher
Jung, Jin Woo
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
