
Compositional Zero-Shot Learning

Compositional Zero-Shot Learning (CZSL) is a computer vision task in which the goal is to recognize unseen compositions formed from states and objects that were seen during training. The key challenge in CZSL is the inherent entanglement between the state and the object within the context of an image. Example benchmarks for this task are MIT-States, UT-Zappos, and C-GQA. Models are usually evaluated with accuracy on both seen and unseen compositions, as well as their Harmonic Mean (HM).
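The Harmonic Mean mentioned above balances seen and unseen accuracy, penalizing models that do well on one at the expense of the other. A minimal sketch of the computation (the accuracy values here are purely illustrative):

```python
def harmonic_mean(seen_acc: float, unseen_acc: float) -> float:
    """Harmonic mean of seen and unseen accuracy, the HM metric in CZSL."""
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)

# Hypothetical model: 60% seen accuracy, 30% unseen accuracy
print(harmonic_mean(0.60, 0.30))  # 0.4 -- pulled toward the weaker score
```

Note that the HM (0.40) is lower than the arithmetic mean (0.45): a model cannot inflate its HM by overfitting to seen compositions alone.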

(Image credit: Heosuab)

Papers

Showing 51–65 of 65 papers

- Feasibility with Language Models for Open-World Compositional Zero-Shot Learning
- Focus-Consistent Multi-Level Aggregation for Compositional Zero-Shot Learning
- HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts
- Learning Attention Propagation for Compositional Zero-Shot Learning
- Learning Primitive Relations for Compositional Zero-Shot Learning
- Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators
- LOGICZSL: Exploring Logic-induced Representation for Compositional Zero-shot Learning
- MAC: A Benchmark for Multiple Attributes Compositional Zero-Shot Learning
- Mutual Balancing in State-Object Components for Compositional Zero-Shot Learning
- On Leveraging Variational Graph Embeddings for Open World Compositional Zero-Shot Learning
- ProCC: Progressive Cross-primitive Compatibility for Open-World Compositional Zero-Shot Learning
- Prompt Tuning for Zero-shot Compositional Learning
- Separated Inter/Intra-Modal Fusion Prompts for Compositional Zero-Shot Learning
- Simple Primitives with Feasibility- and Contextuality-Dependence for Open-World Compositional Zero-shot Learning
- TsCA: On the Semantic Consistency Alignment via Conditional Transport for Compositional Zero-Shot Learning

No leaderboard results yet.