SOTAVerified

Visual Reasoning

The ability to understand actions and reasoning associated with visual images.

Papers

Showing 501–550 of 698 papers

| Title | Status | Hype |
|---|---|---|
| Analysis of Visual Reasoning on One-Stage Object Detection | — | 0 |
| Joint Answering and Explanation for Visual Commonsense Reasoning | Code | 0 |
| Measuring CLEVRness: Blackbox testing of Visual Reasoning Models | — | 0 |
| A Review of Emerging Research Directions in Abstract Visual Reasoning | — | 0 |
| Grammar-Based Grounded Lexicon Learning | — | 0 |
| The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning | Code | 0 |
| DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models | Code | 3 |
| OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0 |
| Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization | — | 0 |
| Deep Learning Methods for Abstract Visual Reasoning: A Survey on Raven's Progressive Matrices | — | 0 |
| BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Code | 5 |
| Deconfounded Visual Grounding | Code | 0 |
| Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1 |
| Distilled Dual-Encoder Model for Vision-Language Understanding | Code | 1 |
| PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning | — | 0 |
| FLAVA: A Foundational Language And Vision Alignment Model | Code | 1 |
| Robust Visual Reasoning via Language Guided Neural Module Networks | — | 0 |
| Recurrent Vision Transformer for Solving Visual Reasoning Problems | — | 0 |
| An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments | — | 0 |
| Two-stage Rule-induction Visual Reasoning on RPMs with an Application to Video Prediction | — | 0 |
| Grounded Situation Recognition with Transformers | Code | 1 |
| Co-VQA: Answering by Interactive Sub Question Sequence | — | 0 |
| Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts | Code | 1 |
| VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts | Code | 1 |
| An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1 |
| Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language | — | 0 |
| Neural-guided, Bidirectional Program Search for Abstraction and Reasoning | — | 0 |
| Neural Structure Mapping For Learning Abstract Visual Analogies | — | 0 |
| ProTo: Program-Guided Transformer for Program-Guided Tasks | Code | 1 |
| Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | — | 0 |
| INFERNO: Inferring Object-Centric 3D Scene Representations without Supervision | — | 0 |
| Visually Grounded Reasoning across Languages and Cultures | Code | 1 |
| DAReN: A Collaborative Approach Towards Reasoning And Disentangling | — | 0 |
| Weakly Supervised Relative Spatial Reasoning for Visual Question Answering | Code | 0 |
| VALSE: A Task-Independent Benchmark for Vision and Language Models centered on Linguistic Phenomena | — | 0 |
| ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration | Code | 1 |
| Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models | Code | 1 |
| Understanding the computational demands underlying visual reasoning | — | 0 |
| Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1 |
| Enforcing Consistency in Weakly Supervised Semantic Parsing | Code | 0 |
| Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training | — | 0 |
| Bottom-Up Shift and Reasoning for Referring Image Segmentation | Code | 0 |
| Explicit Knowledge Incorporation for Visual Reasoning | — | 0 |
| Techniques for Symbol Grounding with SATNet | Code | 0 |
| Understanding and Evaluating Racial Biases in Image Captioning | Code | 1 |
| Referring Transformer: A One-step Approach to Multi-task Visual Grounding | Code | 1 |
| Learning Relation Alignment for Calibrated Cross-modal Retrieval | Code | 1 |
| Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training | — | 0 |
| Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning | Code | 1 |
| Proposal-free One-stage Referring Expression via Grid-Word Cross-Attention | — | 0 |
Page 11 of 14

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4o + CA | Text Score | 75.5 | — | Unverified |
| 2 | GPT-4V (CoT, pick b/w two options) | Text Score | 75.25 | — | Unverified |
| 3 | GPT-4V (pick b/w two options) | Text Score | 69.25 | — | Unverified |
| 4 | MMICL + CoCoT | Text Score | 64.25 | — | Unverified |
| 5 | GPT-4V + CoCoT | Text Score | 58.5 | — | Unverified |
| 6 | OpenFlamingo + CoCoT | Text Score | 58.25 | — | Unverified |
| 7 | GPT-4V | Text Score | 54.5 | — | Unverified |
| 8 | FIBER (EqSim) | Text Score | 51.5 | — | Unverified |
| 9 | FIBER (finetuned, Flickr30k) | Text Score | 51.25 | — | Unverified |
| 10 | MMICL + CCoT | Text Score | 51 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BEiT-3 | Accuracy | 91.51 | — | Unverified |
| 2 | X2-VLM (large) | Accuracy | 88.7 | — | Unverified |
| 3 | XFM (base) | Accuracy | 87.6 | — | Unverified |
| 4 | X2-VLM (base) | Accuracy | 86.2 | — | Unverified |
| 5 | CoCa | Accuracy | 86.1 | — | Unverified |
| 6 | VLMo | Accuracy | 85.64 | — | Unverified |
| 7 | VK-OOD | Accuracy | 84.6 | — | Unverified |
| 8 | SimVLM | Accuracy | 84.53 | — | Unverified |
| 9 | X-VLM (base) | Accuracy | 84.41 | — | Unverified |
| 10 | VK-OOD | Accuracy | 83.9 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BEiT-3 | Accuracy | 92.58 | — | Unverified |
| 2 | X2-VLM (large) | Accuracy | 89.4 | — | Unverified |
| 3 | XFM (base) | Accuracy | 88.4 | — | Unverified |
| 4 | CoCa | Accuracy | 87 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy | 87 | — | Unverified |
| 6 | VLMo | Accuracy | 86.86 | — | Unverified |
| 7 | SimVLM | Accuracy | 85.15 | — | Unverified |
| 8 | X-VLM (base) | Accuracy | 84.76 | — | Unverified |
| 9 | BLIP-129M | Accuracy | 83.09 | — | Unverified |
| 10 | ALBEF (14M) | Accuracy | 82.55 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AI Core | Average-per ques. | 95.24 | — | Unverified |
| 2 | redherring | Average-per ques. | 91.14 | — | Unverified |
| 3 | VRDP | Average-per ques. | 90.24 | — | Unverified |
| 4 | Fighttttt | Average-per ques. | 88.71 | — | Unverified |
| 5 | neural | Average-per ques. | 88.27 | — | Unverified |
| 6 | NERV | Average-per ques. | 88.05 | — | Unverified |
| 7 | DCL | Average-per ques. | 75.52 | — | Unverified |
| 8 | troublesolver | Average-per ques. | 73.3 | — | Unverified |
| 9 | v0.1 | Average-per ques. | 73.1 | — | Unverified |
| 10 | First_test | Average-per ques. | 69.65 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Gemini-2.0 + CA | 2-Class Accuracy | 93.6 | — | Unverified |
| 2 | GPT-4o + CA | 2-Class Accuracy | 92.8 | — | Unverified |
| 3 | Human | 2-Class Accuracy | 91 | — | Unverified |
| 4 | SNAIL | 2-Class Accuracy | 64 | — | Unverified |
| 5 | InstructBLIP + GPT-4 | 2-Class Accuracy | 63.8 | — | Unverified |
| 6 | BLIP-2 + ChatGPT (Fine-tuned) | 2-Class Accuracy | 63.3 | — | Unverified |
| 7 | InstructBLIP + ChatGPT + Neuro-Symbolic | 2-Class Accuracy | 55.5 | — | Unverified |
| 8 | ChatCaptioner + ChatGPT | 2-Class Accuracy | 49.3 | — | Unverified |
| 9 | Otter | 2-Class Accuracy | 49.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Humans | Jaccard Index | 90 | — | Unverified |
| 2 | ViLT (Zero-Shot) | Jaccard Index | 52 | — | Unverified |
| 3 | X-VLM (Zero-Shot) | Jaccard Index | 46 | — | Unverified |
| 4 | CLIP-ViT-B/32 (Zero-Shot) | Jaccard Index | 41 | — | Unverified |
| 5 | CLIP-ViT-L/14 (Zero-Shot) | Jaccard Index | 40 | — | Unverified |
| 6 | CLIP-RN50x64/14 (Zero-Shot) | Jaccard Index | 38 | — | Unverified |
| 7 | CLIP-RN50 (Zero-Shot) | Jaccard Index | 35 | — | Unverified |
| 8 | CLIP-ViL (Zero-Shot) | Jaccard Index | 15 | — | Unverified |
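The Jaccard Index scores above measure set overlap between a model's predictions and the ground truth: the size of the intersection divided by the size of the union. A minimal sketch (the function name and example sets are illustrative, not from the benchmark itself):

```python
def jaccard_index(pred: set, gold: set) -> float:
    """Jaccard index: |pred ∩ gold| / |pred ∪ gold|, in [0, 1]."""
    if not pred and not gold:
        # Both empty: conventionally treated as perfect agreement.
        return 1.0
    return len(pred & gold) / len(pred | gold)

# Two of four distinct items are shared, so the score is 2/4 = 0.5.
print(jaccard_index({"a", "b", "c"}, {"b", "c", "d"}))  # → 0.5
```

A leaderboard value of 90 corresponds to a Jaccard index of 0.90 expressed as a percentage.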
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LXMERT | Accuracy | 70.1 | — | Unverified |
| 2 | ViLT | Accuracy | 69.3 | — | Unverified |
| 3 | CLIP (finetuned) | Accuracy | 65.1 | — | Unverified |
| 4 | CLIP (frozen) | Accuracy | 56 | — | Unverified |
| 5 | VisualBERT | Accuracy | 55.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RPIN | AUCCESS | 42.2 | — | Unverified |
| 2 | Dec[Joint]1f | AUCCESS | 40.3 | — | Unverified |
| 3 | Dynamics-Aware DQN | AUCCESS | 39.9 | — | Unverified |
| 4 | DQN | AUCCESS | 36.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dynamics-Aware DQN | AUCCESS | 85.2 | — | Unverified |
| 2 | RPIN | AUCCESS | 85.2 | — | Unverified |
| 3 | Dec[Joint]1f | AUCCESS | 80 | — | Unverified |
| 4 | DQN | AUCCESS | 77.6 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Swin | 1:1 Accuracy | 52.9 | — | Unverified |
| 2 | ConvNeXt | 1:1 Accuracy | 51.2 | — | Unverified |
| 3 | ViT | 1:1 Accuracy | 50.3 | — | Unverified |
| 4 | DEiT | 1:1 Accuracy | 47.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Humans | 1-of-100 Accuracy | 100 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VisualBERT | Accuracy (Dev) | 67.4 | — | Unverified |