Multimodal Benchmarking and Recommendation of Text-to-Image Generation Models
Kapil Wanaskar, Gaytri Jena, Magdalini Eirinaki
Code available: github.com/kapilw25/Evaluation_generated_images
Abstract
This work presents an open-source unified benchmarking and evaluation framework for text-to-image generation models, with a particular focus on the impact of metadata-augmented prompts. Leveraging the DeepFashion-MultiModal dataset, we assess generated outputs through a comprehensive set of quantitative metrics, including a Weighted Score, CLIP (Contrastive Language-Image Pre-training)-based similarity, LPIPS (Learned Perceptual Image Patch Similarity), FID (Fréchet Inception Distance), and retrieval-based measures, as well as qualitative analysis. Our results demonstrate that structured metadata enrichment substantially improves visual realism, semantic fidelity, and model robustness across diverse text-to-image architectures. While not a traditional recommender system, our framework enables task-specific recommendations for model selection and prompt design based on these evaluation metrics.
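As a minimal illustration of how the individual metrics listed above might be aggregated into a single Weighted Score, the sketch below combines normalized metric values, inverting the lower-is-better distance metrics (LPIPS, FID). The specific weights, the [0, 1] normalization, and the function name are illustrative assumptions, not the paper's actual formula.

```python
def weighted_score(metrics: dict, weights: dict) -> float:
    """Combine normalized metric values into one aggregate score.

    Assumes each metric value has already been normalized to [0, 1].
    Similarity metrics (e.g. CLIP) are higher-is-better; distance
    metrics (LPIPS, FID) are lower-is-better and are inverted.
    Weights and normalization are illustrative assumptions.
    """
    lower_is_better = {"lpips", "fid"}
    total = 0.0
    for name, value in metrics.items():
        v = 1.0 - value if name in lower_is_better else value
        total += weights[name] * v
    return total / sum(weights.values())

# Hypothetical normalized metric values for one candidate model
m = {"clip": 0.82, "lpips": 0.35, "fid": 0.40}
w = {"clip": 0.5, "lpips": 0.25, "fid": 0.25}
print(round(weighted_score(m, w), 4))  # -> 0.7225
```

Such a scalar score lets heterogeneous metrics be compared across models on a common scale, which is what enables the task-specific model recommendations described above.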