Detailed Information

Cited 0 times in Web of Science; cited 6 times in Scopus

Distilling and Refining Domain-Specific Knowledge for Semi-Supervised Domain Adaptation

Authors
Kim, Ju Hyun; Ngo, Ba Hung; Park, Jae Hyeon; Kwon, Jung Eun; Lee, Ho Sub; Cho, Sung In
Issue Date
Nov-2022
Publisher
British Machine Vision Association, BMVA
Keywords
Computer Vision; Domain Knowledge; Refining; Domain Adaptation; Domain Specific; Domain-specific Knowledge; Generator Domain; Multi-view Learning; Regularisation; Semi-supervised; Soft Labels; Two Domains; View Consistency; Knowledge Management
Citation
BMVC 2022 - 33rd British Machine Vision Conference Proceedings, pp 1 - 14
Pages
14
Indexed
FOREIGN
Journal Title
BMVC 2022 - 33rd British Machine Vision Conference Proceedings
Start Page
1
End Page
14
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/21906
Abstract
We propose Distilling And Refining domain-specific Knowledge (DARK), a novel framework for Semi-supervised Domain Adaptation (SSDA) tasks. The proposed method consists of three strategies: Multi-view Learning, Distilling, and Refining. In Multi-view Learning, to acquire domain-specific knowledge, DARK trains a shared generator and two domain-specific classifiers on the labeled source and target data. In Distilling, the two classifiers then exchange domain-specific knowledge with each other, exploiting a cross-view consistency regularization with soft labels between differently augmented unlabeled target samples. During this step, DARK leverages information from low-confidence unlabeled target samples in addition to high-confidence ones. To prevent the trivial-collapse problem caused by the low-confidence samples, we propose a sample-wise dynamic weight based on prediction reliability (SDWR). Finally, in Refining, for class alignment, the class confusion of the unlabeled target data is minimized while accounting for model maturity; simultaneously, a bridging loss with SDWR maintains consistency between the predictions of differently augmented unlabeled target samples. Experimental results on SSDA datasets demonstrate that DARK outperforms state-of-the-art methods for SSDA tasks. The code can be found at https://github.com/Juh-yun/DARK. © 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
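The Distilling step described above can be illustrated with a toy sketch: soft labels from one view supervise the prediction on another augmented view, and each sample's loss is scaled by a reliability weight. This is only an illustrative approximation, assuming maximum class probability as the reliability proxy; the names `sdwr_weight` and `weighted_consistency_loss` are hypothetical and do not reflect the paper's actual formulation or implementation.

```python
import math

def softmax(logits):
    # numerically stable softmax over a single sample's logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sdwr_weight(probs):
    # reliability proxy (an assumption): max class probability of the soft label,
    # so confident samples contribute more and low-confidence ones are down-weighted
    return max(probs)

def weighted_consistency_loss(logits_view_a, logits_view_b):
    """Cross-view soft-label consistency, weighted per sample (SDWR-style sketch)."""
    total = 0.0
    for za, zb in zip(logits_view_a, logits_view_b):
        p_a = softmax(za)  # soft label from one augmented view
        p_b = softmax(zb)  # prediction on the other augmented view
        w = sdwr_weight(p_a)
        # cross-entropy of view-b predictions against view-a soft labels
        ce = -sum(pa * math.log(pb + 1e-8) for pa, pb in zip(p_a, p_b))
        total += w * ce
    return total / len(logits_view_a)

# a confident sample (peaked logits) vs. an uncertain one (flat logits)
view_a = [[5.0, 0.0, 0.0], [0.3, 0.2, 0.1]]
view_b = [[4.0, 0.5, 0.5], [0.2, 0.3, 0.1]]
loss = weighted_consistency_loss(view_a, view_b)
```

The design point of the sketch: rather than discarding low-confidence unlabeled samples with a hard threshold, every sample contributes to the consistency term, scaled continuously by its reliability.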
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
