Detailed Information


StyleBoost: A Study of Personalizing Text-to-Image Generation in Any Style using DreamBooth

Full metadata record
dc.contributor.author: Park, Junseo
dc.contributor.author: Ko, Beomseok
dc.contributor.author: Jang, Hyeryung
dc.date.accessioned: 2024-08-08T08:31:43Z
dc.date.available: 2024-08-08T08:31:43Z
dc.date.issued: 2023
dc.identifier.issn: 2162-1233
dc.identifier.issn: 2162-1241
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/20652
dc.description.abstract: Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images from natural language prompts. One approach to personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding a unique text identifier to a few images of a specific subject. Although existing fine-tuning methods are competent at rendering images according to the styles of famous painters, it remains challenging to learn to produce images encapsulating distinct art styles, owing to the abstract and broad visual perception of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we present a new fine-tuning method, called StyleBoost, that equips pre-trained text-to-image models to produce diverse images in specified styles from text prompts. By leveraging around 15 to 20 images each of StyleRef and Aux images, our approach establishes a foundational binding of a unique token identifier with a broad realm of the target style, where the Aux images are carefully selected to strengthen the binding. This dual-binding strategy grasps the essential concept of art styles and accelerates learning of diverse and comprehensive attributes of the target style. Experimental evaluation conducted on three distinct styles - realism art, SureB art, and anime - demonstrates substantial improvements in both the quality of generated images and perceptual fidelity metrics, such as FID and CLIP scores. © 2023 IEEE.
dc.format.extent: 6
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: StyleBoost: A Study of Personalizing Text-to-Image Generation in Any Style using DreamBooth
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/ICTC58733.2023.10392676
dc.identifier.scopusid: 2-s2.0-85184568897
dc.identifier.bibliographicCitation: 2023 14th International Conference on Information and Communication Technology Convergence (ICTC), pp. 93-98
dc.citation.title: 2023 14th International Conference on Information and Communication Technology Convergence (ICTC)
dc.citation.startPage: 93
dc.citation.endPage: 98
dc.type.docType: Conference Paper
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: diffusion models
dc.subject.keywordAuthor: fine-tuning
dc.subject.keywordAuthor: personalization
dc.subject.keywordAuthor: text-to-image models
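The dual-binding setup described in the abstract can be sketched roughly as follows. This is a hedged illustration only: the function name, prompt templates, and file names are assumptions for the sketch, not the paper's actual implementation.

```python
# Minimal sketch of DreamBooth-style dual-binding prompt construction:
# a unique token identifier (e.g. "[V]") is bound to the target style via
# StyleRef images, while Aux images anchor the broader style concept.
# All names and prompt templates here are illustrative assumptions.

def build_training_pairs(style_token, styleref_paths, aux_paths):
    """Pair each training image with a text prompt.

    StyleRef images bind the unique token to the target style; Aux images
    are paired with a generic prompt to strengthen that binding.
    """
    pairs = [(p, f"a painting in {style_token} style") for p in styleref_paths]
    pairs += [(p, "a painting") for p in aux_paths]
    return pairs

# The abstract uses roughly 15 to 20 images of each kind; one of each here.
pairs = build_training_pairs("[V]", ["ref_01.png"], ["aux_01.png"])
```

The resulting (image, prompt) pairs would then feed a standard DreamBooth-style fine-tuning loop on the pre-trained diffusion model.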
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Jang, Hye Ryung
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
