Improved Knowledge Transfer for Semi-supervised Domain Adaptation via Trico Training Strategy
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ngo, Ba Hung | - |
| dc.contributor.author | Chae, Yeon Jeong | - |
| dc.contributor.author | Kwon, Jung Eun | - |
| dc.contributor.author | Park, Jae Hyeon | - |
| dc.contributor.author | Cho, Sung In | - |
| dc.date.accessioned | 2025-03-05T01:43:14Z | - |
| dc.date.available | 2025-03-05T01:43:14Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.issn | 1550-5499 | - |
| dc.identifier.issn | 2380-7504 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/57836 | - |
| dc.description.abstract | The motivation of semi-supervised domain adaptation (SSDA) is to train a model by leveraging knowledge acquired from plentiful labeled source data combined with extremely scarce labeled target data, so as to achieve the lowest error on the unlabeled target data at testing time. However, due to inter-domain and intra-domain discrepancies, the improvement of classification accuracy is limited. To solve these problems, we propose the Trico-training method, which utilizes a multilayer perceptron (MLP) classifier and two graph convolutional network (GCN) classifiers, called the inter-view and intra-view GCN classifiers. The first co-training strategy exploits a correlation between the MLP and inter-view GCN classifiers to minimize the inter-domain discrepancy: the inter-view GCN classifier provides its pseudo labels to teach the MLP classifier, which encourages class representation alignment across domains. In turn, the MLP classifier gives feedback to the inter-view GCN classifier through a new concept, the 'pseudo-edge', for neighbor feature aggregation. Doing this increases the data-structure mining ability of the inter-view GCN classifier; thus, the quality of the generated pseudo labels is improved. The second co-training strategy, between the MLP and intra-view GCN classifiers, is conducted in a similar way to reduce the intra-domain discrepancy by enhancing the correlation between labeled and unlabeled target data. Because of an imbalance in classification accuracy between the inter-view and intra-view GCN classifiers, we propose a third co-training strategy that encourages them to cooperate to address this problem. We verify the effectiveness of the proposed method on three standard SSDA benchmark datasets: Office-31, Office-Home, and DomainNet. The extensive experimental results show that our method surpasses prior state-of-the-art approaches in SSDA. | - |
| dc.format.extent | 10 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Improved Knowledge Transfer for Semi-supervised Domain Adaptation via Trico Training Strategy | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ICCV51070.2023.01760 | - |
| dc.identifier.scopusid | 2-s2.0-85185866746 | - |
| dc.identifier.wosid | 001169500503072 | - |
| dc.identifier.bibliographicCitation | 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp 19157 - 19166 | - |
| dc.citation.title | 2023 IEEE/CVF International Conference on Computer Vision (ICCV) | - |
| dc.citation.startPage | 19157 | - |
| dc.citation.endPage | 19166 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
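The abstract's core mechanism is co-training: two classifiers trained on different views exchange confident pseudo labels so that each teaches the other. The toy sketch below illustrates only that exchange, under stated assumptions — it uses simple 1-D nearest-centroid classifiers and a margin threshold `tau` as hypothetical stand-ins for the paper's MLP and GCN classifiers, and the function names are placeholders, not the authors' implementation.

```python
# Minimal co-training sketch (toy stand-in, NOT the paper's Trico-training code).
# Each "view" is a 1-D feature; each classifier is nearest-centroid.

def nearest_centroid_fit(points, labels):
    """Return per-class centroids for 1-D points."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def predict_with_margin(centroids, x):
    """Predict a class plus a confidence margin (gap between two nearest centroids)."""
    dists = sorted((abs(x - mu), c) for c, mu in centroids.items())
    label = dists[0][1]
    margin = dists[1][0] - dists[0][0] if len(dists) > 1 else float("inf")
    return label, margin

def cotrain_round(view_a, view_b, labels, unlabeled_a, unlabeled_b, tau=1.0):
    """One co-training exchange: each view's confident pseudo labels
    become extra training pairs for the OTHER view's classifier."""
    cent_a = nearest_centroid_fit(view_a, labels)
    cent_b = nearest_centroid_fit(view_b, labels)
    new_a, new_b = [], []
    for xa, xb in zip(unlabeled_a, unlabeled_b):
        ya, ma = predict_with_margin(cent_a, xa)
        yb, mb = predict_with_margin(cent_b, xb)
        if ma > tau:                     # view A is confident -> teach B
            new_b.append((xb, ya))
        if mb > tau:                     # view B is confident -> teach A
            new_a.append((xa, yb))
    return new_a, new_b

# Toy data: class 0 near 0, class 1 near +10 in view A and near -10 in view B.
labels = [0, 0, 1, 1]
view_a = [0.1, -0.2, 10.3, 9.8]
view_b = [0.0, 0.3, -9.9, -10.2]
new_a, new_b = cotrain_round(view_a, view_b, labels,
                             unlabeled_a=[9.5, 0.4], unlabeled_b=[-9.7, 0.2])
# new_a and new_b now hold pseudo-labeled pairs each classifier received
# from its peer, mirroring the pseudo-label exchange described in the abstract.
```

In the paper this exchange is asymmetric (pseudo labels flow one way, 'pseudo-edges' for graph aggregation flow the other); the sketch above symmetrizes it only to keep the example short.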
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
