Exploring Kolmogorov-Arnold Network Expansions in Vision Transformers for Mitigation of Catastrophic Forgetting in Continual Learning
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ullah, Zahid | - |
| dc.contributor.author | Kim, Jihie | - |
| dc.date.accessioned | 2025-10-15T07:00:11Z | - |
| dc.date.available | 2025-10-15T07:00:11Z | - |
| dc.date.issued | 2025-09 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/61782 | - |
| dc.description.abstract | Continual Learning (CL), the ability of a model to learn new tasks without forgetting previously acquired knowledge, remains a critical challenge in artificial intelligence. This is particularly true for Vision Transformers (ViTs) that utilize Multilayer Perceptrons (MLPs) for global representation learning. Catastrophic forgetting, where new information overwrites prior knowledge, is especially problematic in these models. This research proposes the replacement of MLPs in ViTs with Kolmogorov-Arnold Networks (KANs) to address this issue. KANs leverage local plasticity through spline-based activations, ensuring that only a subset of parameters is updated per sample, thereby preserving previously learned knowledge. This study investigates the efficacy of KAN-based ViTs in CL scenarios across various benchmark datasets (MNIST, CIFAR100, and TinyImageNet-200), focusing on this approach's ability to retain accuracy on earlier tasks while adapting to new ones. Our experimental results demonstrate that KAN-based ViTs significantly mitigate catastrophic forgetting, outperforming traditional MLP-based ViTs in both knowledge retention and task adaptation. This novel integration of KANs into ViTs represents a promising step toward more robust and adaptable models for dynamic environments. | - |
| dc.format.extent | 29 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Exploring Kolmogorov-Arnold Network Expansions in Vision Transformers for Mitigation of Catastrophic Forgetting in Continual Learning | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math13182988 | - |
| dc.identifier.scopusid | 2-s2.0-105017254710 | - |
| dc.identifier.wosid | 001580514400001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.13, no.18, pp 1 - 29 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 13 | - |
| dc.citation.number | 18 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 29 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | Kolmogorov-Arnold network | - |
| dc.subject.keywordAuthor | continual learning | - |
| dc.subject.keywordAuthor | catastrophic forgetting | - |
| dc.subject.keywordAuthor | Vision Transformers | - |
| dc.subject.keywordAuthor | deep learning | - |
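
The abstract above describes replacing the MLP inside each ViT encoder block with a Kolmogorov-Arnold Network, whose spline-based edge activations update only the parameters near each sample's input values. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: it substitutes Gaussian radial bases for the B-splines mentioned in the abstract, and every name and dimension here (`KANLayer`, `KANViTBlock`, `num_bases`, the ViT-Tiny-style sizes) is an illustrative assumption.

```python
# Minimal sketch (not the paper's code): a KAN-style layer standing in for
# the MLP of a ViT encoder block. Gaussian radial bases approximate the
# spline parameterization; all names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class KANLayer(nn.Module):
    """y_j = sum_i phi_ij(x_i), with one learned function per edge.

    Only the bases near a given input value receive meaningful gradient,
    which is the 'local plasticity' the abstract attributes to KANs.
    """

    def __init__(self, in_dim, out_dim, num_bases=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*grid_range, num_bases))
        self.bandwidth = (grid_range[1] - grid_range[0]) / (num_bases - 1)
        # one coefficient per (input, output, basis) edge function
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, num_bases) * 0.1)

    def forward(self, x):  # x: (..., in_dim)
        # Gaussian bases, shape (..., in_dim, num_bases); near-zero away from
        # x, so each sample's update stays local to the grid region it hits.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.bandwidth) ** 2)
        # y_j = sum over inputs i and bases k of basis[..., i, k] * coef[i, j, k]
        return torch.einsum("...ik,iok->...o", basis, self.coef)


class KANViTBlock(nn.Module):
    """Pre-norm ViT encoder block with the two-layer MLP swapped for a
    two-layer KAN expansion (hypothetical configuration)."""

    def __init__(self, dim=192, heads=3, hidden=384):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.kan = nn.Sequential(KANLayer(dim, hidden), KANLayer(hidden, dim))

    def forward(self, x):  # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.kan(self.norm2(x))


if __name__ == "__main__":
    block = KANViTBlock()
    tokens = torch.randn(2, 197, 192)   # e.g. ViT-Tiny: 196 patches + [CLS]
    print(block(tokens).shape)          # torch.Size([2, 197, 192])
```

The intuition for forgetting mitigation under these assumptions: a dense MLP weight receives gradient from every input, whereas each spline (here, Gaussian) coefficient only responds to inputs landing near its grid center, so coefficients encoding earlier tasks' input regions are largely left untouched when training on new tasks.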