SOTAVerified

Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition

2024-02-18 · Code Available

Yijie Wang, Mingjian Hong, Luwen Huangfu, Sheng Huang


Abstract

In the realm of Zero-Shot Learning (ZSL), we address the bias of Generalized Zero-Shot Learning (GZSL) models toward seen data. To counter this, we introduce an end-to-end generative GZSL framework called D^3GZSL, which treats seen data and synthesized unseen data as in-distribution and out-of-distribution data, respectively, to learn a more balanced model. D^3GZSL comprises two core modules: in-distribution dual space distillation (ID^2SD) and out-of-distribution batch distillation (O^2DBD). ID^2SD aligns teacher and student outputs in both the embedding and label spaces, enhancing learning coherence. O^2DBD introduces low-dimensional out-of-distribution representations for each batch sample, capturing shared structures between seen and unseen categories. Our approach integrates seamlessly into mainstream generative frameworks, and extensive experiments on established GZSL benchmarks consistently show that D^3GZSL improves the performance of existing generative GZSL methods, underscoring its potential to refine zero-shot learning practice. The code is available at: https://github.com/PJBQ/D3GZSL.git
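The abstract describes ID^2SD as aligning teacher and student outputs in both the embedding space and the label space. A minimal sketch of such a dual-space distillation loss is shown below; the function name, the MSE/KL choice of alignment terms, and the `temperature` and `alpha` parameters are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def dual_space_distillation_loss(student_emb, teacher_emb,
                                 student_logits, teacher_logits,
                                 temperature=4.0, alpha=0.5):
    """Illustrative dual-space distillation loss (hypothetical form).

    Aligns teacher and student in the embedding space (via MSE) and in
    the label space (via temperature-scaled KL divergence), mirroring
    the high-level description of ID^2SD in the abstract.
    """
    # Embedding-space alignment: mean-squared error between features.
    emb_loss = F.mse_loss(student_emb, teacher_emb)

    # Label-space alignment: soften both output distributions with a
    # temperature, then match them with KL divergence (standard
    # knowledge-distillation practice; scaled by t^2 to keep gradient
    # magnitudes comparable across temperatures).
    t = temperature
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

    # alpha balances the two spaces (a hypothetical weighting).
    return alpha * emb_loss + (1.0 - alpha) * kd_loss
```

When teacher and student outputs coincide, both terms vanish, so the loss is zero; otherwise it is non-negative, penalizing disagreement in either space.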
