Data Augmentation Techniques Using Text-to-Image Diffusion Models for Enhanced Data Diversity
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Shin, Jeongmin | - |
| dc.contributor.author | Jang, Hyeryung | - |
| dc.date.accessioned | 2025-03-12T03:00:11Z | - |
| dc.date.available | 2025-03-12T03:00:11Z | - |
| dc.date.issued | 2024 | - |
| dc.identifier.issn | 2162-1233 | - |
| dc.identifier.issn | 2162-1241 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/57910 | - |
| dc.description.abstract | Data augmentation is a widely used technique to enhance the performance of deep learning models. However, traditional augmentation methods, dependent solely on original data, often fall short in maintaining data diversity and generalization capabilities. In this paper, we propose a novel data augmentation approach leveraging pretrained text-to-image diffusion models to generate diverse and contextually rich images. Our approach integrates three advanced techniques: rich-text prompts, multi-object image generation, and inpainting. We demonstrate the effectiveness of these methods through extensive experiments on the Oxford-IIIT Pets and Caltech-101 datasets, where our diffusion-based augmentations significantly improved downstream classification accuracy and model generalization. Notably, the inpainting technique excels in handling class imbalances by balancing the diversity and structural integrity of original data, while rich-text prompts and multi-object generation offer substantial gains by enhancing diversity and realism. Additionally, our methods show enhanced generalization to unseen data, proving their robustness and applicability to various deep learning tasks. © 2024 IEEE. | - |
| dc.format.extent | 6 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Data Augmentation Techniques Using Text-to-Image Diffusion Models for Enhanced Data Diversity | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ICTC62082.2024.10827311 | - |
| dc.identifier.scopusid | 2-s2.0-85217671096 | - |
| dc.identifier.bibliographicCitation | 2024 15th International Conference on Information and Communication Technology Convergence (ICTC), pp 2027 - 2032 | - |
| dc.citation.title | 2024 15th International Conference on Information and Communication Technology Convergence (ICTC) | - |
| dc.citation.startPage | 2027 | - |
| dc.citation.endPage | 2032 | - |
| dc.type.docType | Conference paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.subject.keywordAuthor | Adversarial Machine Learning | - |
| dc.subject.keywordAuthor | Spatio-temporal Data | - |
| dc.subject.keywordAuthor | Augmentation Methods | - |
| dc.subject.keywordAuthor | Augmentation Techniques | - |
| dc.subject.keywordAuthor | Data Augmentation | - |
| dc.subject.keywordAuthor | Diffusion Model | - |
| dc.subject.keywordAuthor | Generalization Capability | - |
| dc.subject.keywordAuthor | Image Diffusion | - |
| dc.subject.keywordAuthor | Learning Models | - |
| dc.subject.keywordAuthor | Multiobject | - |
| dc.subject.keywordAuthor | Performance | - |
| dc.subject.keywordAuthor | Rich Texts | - |
| dc.subject.keywordAuthor | Contrastive Learning | - |
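The abstract describes generating augmentation images by pairing class labels with "rich-text" prompts before feeding them to a pretrained text-to-image diffusion model. A minimal sketch of that prompt-construction step is shown below; the `build_rich_prompts` helper, the style/context modifiers, and the commented `diffusers` pipeline call are illustrative assumptions, not the authors' code.

```python
# Sketch of rich-text prompt construction for diffusion-based augmentation.
# Assumption: diversity comes from combining a class label (e.g. an
# Oxford-IIIT Pets breed) with style and context modifiers.
import itertools
import random


def build_rich_prompts(class_name, styles, contexts, n=4, seed=0):
    """Return up to n distinct rich-text prompts for one class label."""
    rng = random.Random(seed)
    combos = list(itertools.product(styles, contexts))
    rng.shuffle(combos)  # vary which style/context pairs are sampled
    return [f"a {style} photo of a {class_name} {context}"
            for style, context in combos[:n]]


prompts = build_rich_prompts(
    "Abyssinian cat",
    styles=["high-resolution", "studio-lit", "close-up"],
    contexts=["on a sofa", "in a garden"],
    n=4,
)

# Each prompt would then drive a pretrained pipeline, e.g. (assumed setup,
# not run here because it downloads model weights):
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   augmented_image = pipe(prompts[0]).images[0]
```

The synthesized images would be added to the training set alongside the originals; the paper's inpainting variant instead edits regions of existing images, which helps preserve structural integrity for under-represented classes.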
