SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Liu, Zhixuan | - |
| dc.contributor.author | Schaldenbrand, Peter | - |
| dc.contributor.author | Okogwu, Beverley-Claire | - |
| dc.contributor.author | Peng, Wenxuan | - |
| dc.contributor.author | Yun, Youngsik | - |
| dc.contributor.author | Hundt, Andrew | - |
| dc.contributor.author | Kim, Jihie | - |
| dc.contributor.author | Oh, Jean | - |
| dc.date.accessioned | 2025-01-21T03:30:10Z | - |
| dc.date.available | 2025-01-21T03:30:10Z | - |
| dc.date.issued | 2024 | - |
| dc.identifier.issn | 1063-6919 | - |
| dc.identifier.issn | 2575-7075 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/57538 | - |
| dc.description.abstract | Accurate representation in media is known to improve the well-being of the people who consume it. Generative image models trained on large web-crawled datasets such as LAION are known to produce images with harmful stereotypes and misrepresentations of cultures. We improve inclusive representation in generated images by (1) engaging with communities to collect a culturally representative dataset that we call the Cross-Cultural Understanding Benchmark (CCUB) and (2) proposing a novel Self-Contrastive Fine-Tuning (SCoFT, pronounced /soft/) method that leverages the model's known biases to self-improve. SCoFT is designed to prevent overfitting on small datasets, encode only high-level information from the data, and shift the generated distribution away from misrepresentations encoded in a pretrained model. Our user study conducted on 51 participants from 5 different countries based on their self-selected national cultural affiliation shows that fine-tuning on CCUB consistently generates images with higher cultural relevance and fewer stereotypes when compared to the Stable Diffusion baseline, which is further improved with our SCoFT technique. Resources and code are at https://ariannaliu.github.io/SCoFT. | - |
| dc.format.extent | 11 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/CVPR52733.2024.01029 | - |
| dc.identifier.scopusid | 2-s2.0-85203188037 | - |
| dc.identifier.wosid | 001342442402017 | - |
| dc.identifier.bibliographicCitation | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 10822 - 10832 | - |
| dc.citation.title | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | - |
| dc.citation.startPage | 10822 | - |
| dc.citation.endPage | 10832 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.subject.keywordAuthor | Computer Vision For Social Good | - |
| dc.subject.keywordAuthor | Image Synthesis | - |
| dc.subject.keywordAuthor | Cultural Understanding | - |
| dc.subject.keywordAuthor | Fine Tuning | - |
| dc.subject.keywordAuthor | High-level Information | - |
| dc.subject.keywordAuthor | Image Generations | - |
| dc.subject.keywordAuthor | Image Modeling | - |
| dc.subject.keywordAuthor | Images Synthesis | - |
| dc.subject.keywordAuthor | Overfitting | - |
| dc.subject.keywordAuthor | Small Data Set | - |
| dc.subject.keywordAuthor | Well Being | - |
| dc.subject.keywordAuthor | Image Representation | - |
