Cited 6 times
Distilling and Refining Domain-Specific Knowledge for Semi-Supervised Domain Adaptation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Ju Hyun | - |
| dc.contributor.author | Ngo, Ba Hung | - |
| dc.contributor.author | Park, Jae Hyeon | - |
| dc.contributor.author | Kwon, Jung Eun | - |
| dc.contributor.author | Lee, Ho Sub | - |
| dc.contributor.author | Cho, Sung In | - |
| dc.date.accessioned | 2024-08-08T12:00:36Z | - |
| dc.date.available | 2024-08-08T12:00:36Z | - |
| dc.date.issued | 2022-11 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21906 | - |
| dc.description.abstract | We propose a novel framework, Distilling And Refining domain-specific Knowledge (DARK), for Semi-supervised Domain Adaptation (SSDA) tasks. The proposed method consists of three strategies: Multi-view Learning, Distilling, and Refining. In Multi-view Learning, to acquire domain-specific knowledge, DARK trains a shared generator and two domain-specific classifiers using the labeled source and target data. Then, in Distilling, the two classifiers exchange domain-specific knowledge with each other to exploit a cross-view consistency regularization using soft labels between differently augmented unlabeled target samples. During this stage, DARK leverages information from low-confidence unlabeled target samples in addition to high-confidence ones. To prevent the trivial collapse problem caused by low-confidence samples, we propose a sample-wise dynamic weight based on prediction reliability (SDWR). Finally, in Refining, for class alignment, the class confusion of the unlabeled target data is minimized while accounting for model maturity. Simultaneously, to maintain consistency between the model's predictions on differently augmented unlabeled target samples, a bridging loss with SDWR is used. Experimental results on SSDA datasets demonstrate that DARK outperforms state-of-the-art benchmark methods for SSDA tasks. The code can be found at https://github.com/Juh-yun/DARK. © 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | British Machine Vision Association, BMVA | - |
| dc.title | Distilling and Refining Domain-Specific Knowledge for Semi-Supervised Domain Adaptation | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.scopusid | 2-s2.0-85174716421 | - |
| dc.identifier.bibliographicCitation | BMVC 2022 - 33rd British Machine Vision Conference Proceedings, pp 1 - 14 | - |
| dc.citation.title | BMVC 2022 - 33rd British Machine Vision Conference Proceedings | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 14 | - |
| dc.type.docType | Conference paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | foreign | - |
| dc.subject.keywordAuthor | Computer Vision | - |
| dc.subject.keywordAuthor | Domain Knowledge | - |
| dc.subject.keywordAuthor | Refining | - |
| dc.subject.keywordAuthor | Domain Adaptation | - |
| dc.subject.keywordAuthor | Domain Specific | - |
| dc.subject.keywordAuthor | Domain-specific Knowledge | - |
| dc.subject.keywordAuthor | Generator Domain | - |
| dc.subject.keywordAuthor | Multi-view Learning | - |
| dc.subject.keywordAuthor | Regularisation | - |
| dc.subject.keywordAuthor | Semi-supervised | - |
| dc.subject.keywordAuthor | Soft Labels | - |
| dc.subject.keywordAuthor | Two Domains | - |
| dc.subject.keywordAuthor | View Consistency | - |
| dc.subject.keywordAuthor | Knowledge Management | - |
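The cross-view consistency idea with a sample-wise reliability weight described in the abstract can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the authors' implementation (see the linked repository for the actual code): the softmax output of one view serves as a soft label for the other view, and each sample's consistency loss is scaled by the first view's maximum class probability, so low-confidence samples contribute less.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_consistency_loss(logits_weak, logits_strong):
    """Cross-view consistency with a sample-wise dynamic weight.

    Hypothetical sketch: the weak view's softmax output acts as a soft
    label for the strong view, and each sample's cross-entropy term is
    scaled by the weak view's top class probability (its "reliability").
    """
    p_weak = softmax(logits_weak)      # soft labels (held fixed in practice)
    p_strong = softmax(logits_strong)  # predictions on the other augmentation
    # Sample-wise reliability weight: confidence of the weak-view prediction.
    w = p_weak.max(axis=1)
    # Per-sample cross-entropy between soft label and strong-view prediction.
    ce = -(p_weak * np.log(p_strong + 1e-12)).sum(axis=1)
    return float((w * ce).mean())

rng = np.random.default_rng(0)
weak = rng.normal(size=(4, 3))                       # weak-augmentation logits
strong = weak + 0.1 * rng.normal(size=(4, 3))        # strong-augmentation logits
loss = weighted_consistency_loss(weak, strong)
```

Because cross-entropy satisfies CE(p, q) = H(p) + KL(p || q), the loss is smallest when the two views agree, and the weight term simply down-scales that penalty for samples the model is unsure about.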
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
