Detailed Information

Cited 0 times in Web of Science; cited 6 times in Scopus

Distilling and Refining Domain-Specific Knowledge for Semi-Supervised Domain Adaptation

Full metadata record
DC Field: Value
dc.contributor.author: Kim, Ju Hyun
dc.contributor.author: Ngo, Ba Hung
dc.contributor.author: Park, Jae Hyeon
dc.contributor.author: Kwon, Jung Eun
dc.contributor.author: Lee, Ho Sub
dc.contributor.author: Cho, Sung In
dc.date.accessioned: 2024-08-08T12:00:36Z
dc.date.available: 2024-08-08T12:00:36Z
dc.date.issued: 2022-11
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/21906
dc.description.abstract: We propose a novel framework, Distilling And Refining domain-specific Knowledge (DARK), for Semi-supervised Domain Adaptation (SSDA) tasks. The proposed method consists of three strategies: Multi-view Learning, Distilling, and Refining. In Multi-view Learning, to acquire domain-specific knowledge, DARK trains a shared generator and two domain-specific classifiers using the labeled source and target data. Then, in Distilling, the two classifiers exchange domain-specific knowledge with each other to exploit a cross-view consistency regularization using soft labels between differently augmented unlabeled target samples. During this, DARK leverages information from low-confidence unlabeled target samples in addition to the high-confidence unlabeled target samples. To prevent a trivial collapse problem caused by the low-confidence samples, we propose the utilization of a sample-wise dynamic weight based on prediction reliability (SDWR). Finally, in Refining, for class alignment, class confusion of the unlabeled target data is minimized considering the model maturity. Simultaneously, to maintain model consistency between the predictions of differently augmented unlabeled target samples, a bridging loss with SDWR is used. Consequently, the experimental results on the SSDA datasets demonstrate that DARK outperforms the state-of-the-art benchmark methods for SSDA tasks. The code can be found at https://github.com/Juh-yun/DARK. © 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
dc.format.extent: 14
dc.language: English
dc.language.iso: ENG
dc.publisher: British Machine Vision Association, BMVA
dc.title: Distilling and Refining Domain-Specific Knowledge for Semi-Supervised Domain Adaptation
dc.type: Article
dc.publisher.location: United Kingdom
dc.identifier.scopusid: 2-s2.0-85174716421
dc.identifier.bibliographicCitation: BMVC 2022 - 33rd British Machine Vision Conference Proceedings, pp 1 - 14
dc.citation.title: BMVC 2022 - 33rd British Machine Vision Conference Proceedings
dc.citation.startPage: 1
dc.citation.endPage: 14
dc.type.docType: Conference paper
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: foreign
dc.subject.keywordAuthor: Computer Vision
dc.subject.keywordAuthor: Domain Knowledge
dc.subject.keywordAuthor: Refining
dc.subject.keywordAuthor: Domain Adaptation
dc.subject.keywordAuthor: Domain Specific
dc.subject.keywordAuthor: Domain-specific Knowledge
dc.subject.keywordAuthor: Generator Domain
dc.subject.keywordAuthor: Multi-view Learning
dc.subject.keywordAuthor: Regularisation
dc.subject.keywordAuthor: Semi-supervised
dc.subject.keywordAuthor: Soft Labels
dc.subject.keywordAuthor: Two Domains
dc.subject.keywordAuthor: View Consistency
dc.subject.keywordAuthor: Knowledge Management
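
The abstract describes a cross-view consistency regularization in which a sample-wise dynamic weight based on prediction reliability (SDWR) lets low-confidence unlabeled target samples contribute without causing trivial collapse. A minimal pure-Python sketch of that idea follows; the function names, the choice of the maximum soft-label probability as the reliability weight, and the cross-entropy form of the consistency term are illustrative assumptions, not the paper's exact formulation (see the linked GitHub repository for the authors' implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sdwr_consistency_loss(weak_logits, strong_logits):
    """Cross-view consistency with a sample-wise dynamic weight.

    For each unlabeled sample, the soft label from the weakly augmented
    view supervises the prediction on the strongly augmented view.  Each
    sample's loss is scaled by its prediction reliability (here: the max
    soft-label probability), so low-confidence samples still contribute
    but are down-weighted rather than discarded.
    """
    total, n = 0.0, len(weak_logits)
    for wl, sl in zip(weak_logits, strong_logits):
        soft = softmax(wl)    # soft label from the weak view
        pred = softmax(sl)    # prediction on the strong view
        weight = max(soft)    # sample-wise reliability weight
        ce = -sum(p * math.log(q + 1e-12) for p, q in zip(soft, pred))
        total += weight * ce
    return total / n
```

In this sketch, a confidently (and inconsistently) predicted sample incurs a much larger penalty than an uncertain one, which is the qualitative behavior the abstract attributes to SDWR.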
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
