SOTAVerified

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization

2025-05-12 · Code Available

Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang


Abstract

Vision-language models (VLMs) have achieved remarkable success across diverse tasks by leveraging rich textual information with minimal labeled data. However, deploying such large models remains challenging, particularly in resource-constrained environments. Knowledge distillation (KD) offers a well-established solution to this problem; however, recent KD approaches from VLMs often involve multi-stage training or additional tuning, increasing computational overhead and optimization complexity. In this paper, we propose Dual-Head Optimization (DHO) -- a simple yet effective KD framework that transfers knowledge from VLMs to compact, task-specific models in semi-supervised settings. Specifically, we introduce dual prediction heads that independently learn from labeled data and teacher predictions, and propose to linearly combine their outputs during inference. We observe that DHO mitigates gradient conflicts between supervised and distillation signals, enabling more effective feature learning than single-head KD baselines. As a result, extensive experiments show that DHO consistently outperforms baselines across multiple domains and fine-grained datasets. Notably, on ImageNet, it achieves state-of-the-art performance, improving accuracy by 3% and 0.1% with 1% and 10% labeled data, respectively, while using fewer parameters.
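The dual-head recipe described above — one head trained on labels, one on teacher predictions, with their outputs linearly combined at inference — can be sketched in a few lines of PyTorch. This is an illustrative reading of the abstract, not the authors' released code; the class names, the temperature `tau`, and the mixing weight `alpha` are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadModel(nn.Module):
    """Shared backbone with two linear heads: one supervised, one for distillation."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.ce_head = nn.Linear(feat_dim, num_classes)  # learns from labeled data
        self.kd_head = nn.Linear(feat_dim, num_classes)  # learns from teacher predictions

    def forward(self, x):
        feats = self.backbone(x)
        return self.ce_head(feats), self.kd_head(feats)

def dho_loss(ce_logits, kd_logits, labels, teacher_probs, tau: float = 2.0):
    """Supervised CE on one head, temperature-scaled KL to the teacher on the other.

    Because each head receives only one of the two signals, the supervised and
    distillation gradients do not compete on the same output layer.
    """
    loss_ce = F.cross_entropy(ce_logits, labels)
    loss_kd = F.kl_div(
        F.log_softmax(kd_logits / tau, dim=-1), teacher_probs,
        reduction="batchmean",
    ) * tau ** 2
    return loss_ce + loss_kd

@torch.no_grad()
def predict(model: DualHeadModel, x, alpha: float = 0.5):
    """Inference: linearly combine the two heads' class distributions (alpha assumed)."""
    ce_logits, kd_logits = model(x)
    return alpha * F.softmax(ce_logits, dim=-1) + (1 - alpha) * F.softmax(kd_logits, dim=-1)
```

In a semi-supervised loop, `dho_loss` would see labels only on the labeled subset while the teacher (the VLM) supplies `teacher_probs` for every image, labeled or not.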

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet - 10% labeled data | DHO (ViT-Large) | Top-1 Accuracy | 85.9 | — | Unverified |
| ImageNet - 10% labeled data | DHO (ViT-Base) | Top-1 Accuracy | 82.8 | — | Unverified |
| ImageNet - 1% labeled data | DHO (ViT-Large) | Top-1 Accuracy | 84.6 | — | Unverified |
| ImageNet - 1% labeled data | DHO (ViT-Base) | Top-1 Accuracy | 81.6 | — | Unverified |
