SOTAVerified

Odd One Out

This task tests how well a language model can identify the odd word out in a set of words.
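The odd-one-out setup can be sketched as a simple prompt-and-score loop. Below is a minimal illustration; the example words, prompt wording, and scoring rule are assumptions for demonstration, not the BIG-bench task specification:

```python
# Minimal sketch of how an odd-one-out item might be posed and scored.
# Illustrative only: the words, prompt wording, and scoring rule are
# hypothetical, not the official BIG-bench harness.

def format_prompt(words):
    """Render an odd-one-out question for a language model."""
    return "Which word does not belong? " + ", ".join(words)

def is_correct(model_answer, target):
    """Exact-match scoring, ignoring case and surrounding whitespace."""
    return model_answer.strip().lower() == target.lower()

words = ["apple", "banana", "carrot", "bicycle"]  # "bicycle" is not a food
print(format_prompt(words))
print(is_correct(" Bicycle ", "bicycle"))  # True: matches after normalization
```

A real evaluation would send the prompt to a model and aggregate `is_correct` over many items to get task accuracy.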

Source: BIG-bench

Papers

Showing 11-20 of 21 papers

| Title | Status | Hype |
| --- | --- | --- |
| Symmetry as a Representation of Intuitive Geometry? | | 0 |
| VICE: Variational Interpretable Concept Embeddings | Code | 1 |
| Training Compute-Optimal Large Language Models | Code | 6 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| Tell me why! Explanations support learning relational and causal structure | Code | 1 |
| Tell me why!—Explanations support learning relational and causal structure | | 0 |
| Odd-One-Out Representation Learning | Code | 0 |
| We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos | Code | 1 |
| Do Saliency Models Detect Odd-One-Out Targets? New Datasets and Evaluations | Code | 1 |
| Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter! | | 0 |
Page 2 of 3

No leaderboard results yet.