SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed to require more than pattern recognition: the model must draw on "common sense", or world knowledge, to make inferences.

Papers

Showing 701–725 of 939 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering | | 0 |
| QASC: A Dataset for Question Answering via Sentence Composition | Code | 0 |
| Assisting human experts in the interpretation of their visual process: A case study on assessing copper surface adhesive potency | | 0 |
| Learning Continuous 3D Reconstructions for Geometrically Aware Grasping | | 0 |
| Linguistic Embeddings as a Common-Sense Knowledge Repository: Challenges and Opportunities | | 0 |
| Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference | | 0 |
| Measuring Numerical Common Sense: Is A Word Embedding Approach Effective? | | 0 |
| Conversational AI: Open Domain Question Answering and Commonsense Reasoning | | 0 |
| Bridging Visual Perception with Contextual Semantics for Understanding Robot Manipulation Tasks | | 0 |
| Probabilistic framework for solving Visual Dialog | | 0 |
| Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation | | 0 |
| Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering | Code | 0 |
| Abductive Reasoning as Self-Supervision for Common Sense Question Answering | | 0 |
| An Improved Neural Baseline for Temporal Relation Extraction | | 0 |
| Visual Question Answering using Deep Learning: A Survey and Performance Analysis | Code | 0 |
| Improving Neural Story Generation by Targeted Common Sense Grounding | Code | 0 |
| DAST Model: Deciding About Semantic Complexity of a Text | | 0 |
| Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models | | 0 |
| Reasoning-Driven Question-Answering for Natural Language Understanding | | 0 |
| Learn How to Cook a New Recipe in a New House: Using Map Familiarization, Curriculum Learning, and Bandit Feedback to Learn Families of Text-Based Adventure Games | Code | 0 |
| Knowledge Aware Semantic Concept Expansion for Image-Text Matching | | 0 |
| Processamento de linguagem natural em Português e aprendizagem profunda para o domínio de Óleo e Gás (Natural Language Processing in Portuguese and Deep Learning for the Oil and Gas Domain) | | 0 |
| A Hybrid Neural Network Model for Commonsense Reasoning | | 0 |
| Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions | Code | 0 |
| Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Code | 0 |
Page 29 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified |
| 10 | LLaMA3 8B + MoSLoRA | Accuracy | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified |
| 2 | LLaMA 3 8B + MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified |
| 3 | T5-11B | F1 | 94.1 | | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | | Unverified |
| 5 | PaLM 540B (fine-tuned) | EM | 94 | | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified |
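The tables above report Accuracy, EM (exact match), and F1, but do not define them. As a rough sketch only (not this site's actual scoring code, and normalization rules vary by benchmark), accuracy over exact-match answers is typically computed like this:

```python
def exact_match(prediction: str, gold: str) -> bool:
    """EM: prediction equals the gold answer after light normalization
    (lowercase, trimmed, whitespace collapsed). Benchmarks differ in the
    exact normalization they apply, e.g. punctuation or article stripping."""
    norm = lambda s: " ".join(s.lower().strip().split())
    return norm(prediction) == norm(gold)

def accuracy(predictions: list[str], golds: list[str]) -> float:
    """Fraction of predictions that exactly match their gold answers."""
    assert len(predictions) == len(golds), "prediction/gold lists must align"
    return sum(exact_match(p, g) for p, g in zip(predictions, golds)) / len(predictions)

# Toy example (hypothetical answers, not taken from any benchmark above):
preds = ["a glass of water", "Paris", "no"]
golds = ["A glass of  water", "London", "no"]
print(round(accuracy(preds, golds), 3))  # 2 of 3 match -> 0.667
```

F1, used in the last table, instead scores token-level overlap between prediction and gold answer, so it gives partial credit where EM gives none.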