SOTAVerified

Performance of Gaussian Mixture Model Classifiers on Embedded Feature Spaces

2024-10-17 · Code Available

Jeremy Chopin, Rozenn Dahyot


Abstract

Data embeddings from CLIP and ImageBind provide powerful features for the analysis of multimedia and multimodal data. We assess their performance here for classification using a Gaussian Mixture Model (GMM) based layer as an alternative to the standard Softmax layer. GMM-based classifiers have recently been shown to perform well as part of deep learning pipelines trained end-to-end. Our first contribution is to investigate GMM-based classification performance on the CLIP and ImageBind embedded spaces. Our second contribution is a GMM-based classifier with a lower parameter count than previously proposed. We find that, in most cases, a single Gaussian component per class is sufficient on these embedded spaces, and we hypothesize that this is due to the contrastive loss used to train them, which naturally concentrates the features of each class. We also observe that ImageBind often outperforms CLIP for image classification, even when the embedded spaces are compressed using PCA.
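The one-Gaussian-per-class idea from the abstract can be illustrated with a minimal sketch: fit a single Gaussian (mean and covariance) to each class's embedding vectors and classify by maximum log-likelihood. This is not the paper's DGMMC-S architecture (which is trained end-to-end); the synthetic tightly clustered data below is only a stand-in for contrastively trained embeddings.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Synthetic stand-in for contrastive embeddings: each class forms a
# tight cluster, so a K=1 GMM per class may be enough to capture it.
dim, n_per_class = 8, 200
class_means = {0: np.full(dim, -2.0), 1: np.full(dim, 2.0)}
X = np.vstack([rng.normal(class_means[c], 0.5, size=(n_per_class, dim))
               for c in (0, 1)])
y = np.repeat([0, 1], n_per_class)

# "Fit" one Gaussian per class: sample mean and (regularized) covariance.
params = {}
for c in (0, 1):
    Xc = X[y == c]
    params[c] = (Xc.mean(axis=0),
                 np.cov(Xc, rowvar=False) + 1e-6 * np.eye(dim))

def predict(x):
    # Maximum class-conditional log-likelihood, assuming uniform priors.
    scores = {c: multivariate_normal(m, S).logpdf(x)
              for c, (m, S) in params.items()}
    return max(scores, key=scores.get)

train_acc = np.mean([predict(x) == c for x, c in zip(X, y)])
print(f"training accuracy: {train_acc:.2f}")
```

With well-separated clusters this classifier is near-perfect; on real CLIP or ImageBind features one would fit the Gaussians to the embedded training vectors instead of synthetic data.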

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10 | DGMMC-S | Top 1 Accuracy | 98.8 | | Unverified |
| CIFAR-100 | DGMMC-S | Top 1 Accuracy | 91.2 | | Unverified |
| ESC-50 | SDGM-D | Top 1 Accuracy | 87 | | Unverified |
| ImageNet | DGMMC-S | Top 1 Accuracy | 84.1 | | Unverified |
| MNIST | DGMMC-S | Top 1 Accuracy | 70 | | Unverified |
