Multitask learning with single gradient step update for task balancing
- Authors
- Lee, Sungjae; Son, Youngdoo
- Issue Date
- Jan-2022
- Publisher
- Elsevier BV
- Keywords
- Convolutional neural network; Deep learning; Gradient-based meta-learning; Multitask learning
- Citation
- Neurocomputing, v.467, pp 442 - 453
- Pages
- 12
- Indexed
- SCIE
SCOPUS
- Journal Title
- Neurocomputing
- Volume
- 467
- Start Page
- 442
- End Page
- 453
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/3712
- DOI
- 10.1016/j.neucom.2021.10.025
- ISSN
- 0925-2312
1872-8286
- Abstract
- Multitask learning is a methodology to boost generalization performance and also reduce computational intensity and memory usage. However, learning multiple tasks simultaneously can be more difficult than learning a single task because it can cause imbalance among tasks. To address the imbalance problem, we propose an algorithm to balance between tasks at the gradient level by applying gradient-based meta-learning to multitask learning. The proposed method trains shared layers and task-specific layers separately so that the two layers with different roles in a multitask network can be fitted to their own purposes. In particular, the shared layer that contains informative knowledge shared among tasks is trained by employing single gradient step update and inner/outer loop training to mitigate the imbalance problem at the gradient level. We apply the proposed method to various multitask computer vision problems and achieve state-of-the-art performance. © 2021 Elsevier B.V. All rights reserved.
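
The abstract's core mechanism (adapt the shared layers with a single inner gradient step per task, then update them in an outer loop from the post-adaptation gradients, while the task-specific layers are trained separately) can be sketched on a toy problem. This is a minimal first-order illustration, not the paper's implementation: the linear model, elementwise "head" parameterization, learning rates, and variable names are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two regression tasks that share a weight vector w_shared;
# each task also owns a task-specific "head" (here an elementwise scaling).
# Prediction for a task: X @ (w_shared * head). Entirely illustrative.
d = 5
w_shared = rng.normal(size=d)
heads = [rng.normal(size=d), rng.normal(size=d)]

tasks = []
for _ in range(2):
    X = rng.normal(size=(20, d))
    y = X @ rng.normal(size=d)  # each task has its own target function
    tasks.append((X, y))

def loss_and_grads(w, head, X, y):
    """Squared loss and gradients w.r.t. shared weights and task head."""
    err = X @ (w * head) - y
    loss = 0.5 * np.mean(err ** 2)
    g_common = (X.T @ err) / len(y)   # gradient w.r.t. the product w * head
    return loss, g_common * head, g_common * w

def total_loss():
    return sum(loss_and_grads(w_shared, h, X, y)[0]
               for (X, y), h in zip(tasks, heads))

alpha, beta = 0.01, 0.01  # inner and outer learning rates (assumed values)
initial_total = total_loss()

for _ in range(200):
    outer_grad = np.zeros_like(w_shared)
    for (X, y), head in zip(tasks, heads):
        # Inner loop: a SINGLE gradient step adapts the shared weights to this task.
        _, g_w, _ = loss_and_grads(w_shared, head, X, y)
        w_adapted = w_shared - alpha * g_w
        # Outer gradient is taken at the adapted weights (first-order MAML style),
        # so each task's contribution anticipates its own one-step adaptation.
        _, g_w_adapted, g_h = loss_and_grads(w_adapted, head, X, y)
        outer_grad += g_w_adapted
        head -= beta * g_h  # task-specific layers trained separately by plain SGD
    # Outer loop: shared weights move along the averaged post-adaptation gradients,
    # which balances the tasks' pulls on the shared representation.
    w_shared -= beta * outer_grad / len(tasks)

final_total = total_loss()
print(f"total loss: {initial_total:.4f} -> {final_total:.4f}")
```

Averaging the post-adaptation gradients is what gives the balancing effect in this sketch: a task whose loss is already reduced by its own inner step contributes a smaller outer gradient, so no single task dominates the shared update.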
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - College of Engineering > Department of Industrial and Systems Engineering > 1. Journal Articles