Detailed Information


Exploring a New Architecture for Efficient Parameter Fine-Tuning in SLoRA Multitasking Scenarios

Full metadata record
DC Field: Value
dc.contributor.author: Shi, Ce
dc.contributor.author: Jung, Jin-Woo
dc.date.accessioned: 2026-03-23T05:30:25Z
dc.date.available: 2026-03-23T05:30:25Z
dc.date.issued: 2026-03
dc.identifier.issn: 2076-3417
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/64037
dc.description.abstract: We propose SLoRA, an enhanced LoRA (Low-Rank Adaptation) mixture-of-experts (MoE) architecture that addresses the key problem of parameter-efficient fine-tuning in multitask scenarios. Full fine-tuning becomes prohibitively expensive as the parameter counts of vision-language models grow, and LoRA, a popular PEFT (parameter-efficient fine-tuning) method, has limitations in multitasking: it adapts inadequately and struggles to capture complex task patterns, while existing attempts to integrate MoE mechanisms into LoRA face catastrophic forgetting and knowledge fragmentation. SLoRA counters these problems in two ways. First, orthogonal constraint optimization initializes a constrained solution space that reduces disturbance to existing knowledge, alleviating catastrophic forgetting (old-task accuracy retention reaches 92.4%, 16.1% higher than LoRA). Second, an optimized MoE structure combines general experts, which retain pre-trained knowledge, with task-specific experts selected by dynamic routing, improving multitask adaptability. Experimental results show that in commonsense reasoning, SLoRA's accuracy on the WSC dataset is 9.0% higher than LoRA's and 3.7% higher than AdaLoRA's, and its F1 score on the CommonsenseQA dataset is 7.7% higher than LoRA's and 2.9% higher than AdaLoRA's; in multimodal tasks, its average score is up to 15.3% higher than LoRA's, demonstrating significant advantages over existing methods.
dc.format.extent: 27
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Exploring a New Architecture for Efficient Parameter Fine-Tuning in SLoRA Multitasking Scenarios
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/app16052174
dc.identifier.scopusid: 2-s2.0-105032640940
dc.identifier.wosid: 001713309900001
dc.identifier.bibliographicCitation: Applied Sciences, v.16, no.5, pp. 1-27
dc.citation.title: Applied Sciences
dc.citation.volume: 16
dc.citation.number: 5
dc.citation.startPage: 1
dc.citation.endPage: 27
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Chemistry
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Materials Science
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Chemistry, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Engineering, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Materials Science, Multidisciplinary
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.subject.keywordAuthor: SLoRA
dc.subject.keywordAuthor: PEFT
dc.subject.keywordAuthor: multi task scenarios
dc.subject.keywordAuthor: catastrophic forgetting
dc.subject.keywordAuthor: fragmentation of knowledge
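
The abstract above outlines SLoRA's two core mechanisms: orthogonally constrained initialization of the low-rank adapters, and an MoE structure pairing general experts (which retain pre-trained knowledge) with dynamically routed task-specific experts. No code is attached to this record, so the PyTorch sketch below is only a minimal illustration of that description; the class names, rank, expert count, and softmax routing are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal illustrative sketch of a LoRA-MoE layer in the spirit of the SLoRA
# description above. All names and hyperparameters here are assumptions;
# this is not the paper's code.
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter producing the update (alpha/r) * x A^T B^T."""
    def __init__(self, d_in, d_out, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.empty(rank, d_in))
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no drift at start
        # Orthogonal initialization of the down-projection, loosely mirroring
        # the paper's orthogonal-constraint idea of limiting disturbance to
        # existing knowledge (the exact constraint in the paper may differ).
        nn.init.orthogonal_(self.A)
        self.scale = alpha / rank

    def forward(self, x):
        return (x @ self.A.T) @ self.B.T * self.scale

class SLoRAMoELinear(nn.Module):
    """Frozen base linear layer + one general expert + routed task experts."""
    def __init__(self, base: nn.Linear, n_task_experts=4, rank=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # PEFT: only adapters and the router train
        d_in, d_out = base.in_features, base.out_features
        self.general = LoRAExpert(d_in, d_out, rank)   # retains shared knowledge
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, rank) for _ in range(n_task_experts)
        )
        self.router = nn.Linear(d_in, n_task_experts)  # dynamic, input-dependent routing

    def forward(self, x):
        gate = torch.softmax(self.router(x), dim=-1)   # (..., n_task_experts)
        task_out = sum(
            gate[..., i:i + 1] * expert(x) for i, expert in enumerate(self.experts)
        )
        return self.base(x) + self.general(x) + task_out

# Hypothetical usage: wrap a frozen projection and train only the adapters.
# layer = SLoRAMoELinear(nn.Linear(768, 768))
# y = layer(torch.randn(2, 16, 768))
```

Because the base weights stay frozen and the adapters are low-rank, only a small fraction of the parameters trains, which is the efficiency property the abstract emphasizes; keeping a separate general expert is what lets shared knowledge persist while the task experts specialize.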
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Jung, Jin Woo
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
