DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models

2023-09-01

Michael Shenoda, Edward Kim


Abstract

Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce "DiffuGen," a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.
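The prompt-templating idea mentioned above can be sketched as a simple slot-expansion step: a template with named placeholders is expanded over every combination of slot values to yield a batch of generation prompts. The template wording, slot names, and `expand_prompts` helper below are illustrative assumptions, not DiffuGen's actual implementation.

```python
# Hypothetical sketch of prompt templating for labeled-dataset generation.
# Each expanded prompt would be sent to a stable diffusion pipeline; the
# template and slot values here are illustrative, not DiffuGen's own.
from itertools import product

def expand_prompts(template: str, **slots: list[str]) -> list[str]:
    """Fill every combination of slot values into the template."""
    keys = list(slots)
    return [
        template.format(**dict(zip(keys, combo)))
        for combo in product(*(slots[k] for k in keys))
    ]

prompts = expand_prompts(
    "a photo of a {color} {obj} on a {surface}",
    color=["red", "blue"],
    obj=["car", "truck"],
    surface=["road", "dirt track"],
)
print(len(prompts))  # 2 x 2 x 2 = 8 prompts, one per slot combination
```

Because the slot name used for each placeholder (e.g. `{obj}`) can double as the class label for the generated image, this kind of expansion pairs naturally with the unsupervised labeling scheme the abstract describes.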
