SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require a model to go beyond pattern recognition: instead of relying on surface cues, the model should use "common sense" or world knowledge to make inferences.
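As an illustration (not taken from this page), benchmarks in this category are often multiple-choice: the model scores each candidate answer and the highest-scoring one is its prediction. A minimal sketch, using a hypothetical `score` function with a toy word-overlap scorer, shows why surface cues alone are not enough:

```python
# Minimal sketch of multiple-choice commonsense evaluation.
# `score` is a hypothetical stand-in for a real model's answer-scoring API;
# the toy word-overlap scorer below cannot separate the two choices,
# which is exactly why world knowledge is needed.

def score(question: str, choice: str) -> float:
    # Toy scorer: counts words shared between the question and the choice.
    q_words = set(question.lower().split())
    return len(q_words & set(choice.lower().split()))

def predict(question: str, choices: list[str]) -> str:
    # Return the highest-scoring choice (ties resolve to the first choice).
    return max(choices, key=lambda c: score(question, c))

question = ("The trophy didn't fit in the suitcase because it was too big. "
            "What was too big?")
choices = ["the trophy", "the suitcase"]
print(predict(question, choices))  # overlap ties here; a real model must reason
```

On a Winograd-style question like this one, both choices overlap the question equally, so a surface-level scorer degenerates to a tie-break; resolving the pronoun correctly requires knowing that large objects do not fit into small containers.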

Papers

Showing 651–700 of 939 papers

Title | Status | Hype
A Knowledge-Aware Sequence-to-Tree Network for Math Word Problem Solving | — | 0
Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks | — | 0
Machine Reasoning: Technology, Dilemma and Future | — | 0
Learning Physical Common Sense as Knowledge Graph Completion via BERT Data Augmentation and Constrained Tucker Factorization | — | 0
Dutch Humor Detection by Generating Negative Examples | — | 0
GO FIGURE: A Meta Evaluation of Factuality in Summarization | — | 0
Thinking Fast and Slow in AI | — | 0
Do Language Embeddings Capture Scales? | — | 0
Hierarchical Relational Inference | — | 0
Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game | — | 0
Zero-Shot Learning with Common Sense Knowledge Graphs | — | 0
Multi-modal Cooking Workflow Construction for Food Recipes | — | 0
Commonsense Knowledge in Wikidata | — | 0
Learning Object Placement by Inpainting for Compositional Data Augmentation | — | 0
CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE | Code | 0
Understanding Spatial Relations through Multiple Modalities | — | 0
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack | — | 0
Robustness to Spurious Correlations via Human Annotations | Code | 0
Explainable Inference on Sequential Data via Memory-Tracking | Code | 0
LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model | — | 0
Machine Common Sense | — | 0
CUHK at SemEval-2020 Task 4: CommonSense Explanation, Reasoning and Prediction with Multi-task Learning | — | 0
Consolidating Commonsense Knowledge | — | 0
Language Models as Fact Checkers? | — | 0
Analogical Proportions | — | 0
Fractional trends and cycles in macroeconomic time series | — | 0
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models | — | 0
Temporal Common Sense Acquisition with Minimal Supervision | — | 0
The Sensitivity of Language Models and Humans to Winograd Schema Perturbations | Code | 0
The ILASP system for Inductive Learning of Answer Set Programs | — | 0
Mandarinograd: A Chinese Collection of Winograd Schemas | — | 0
Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense | — | 0
Ecological Semantics: Programming Environments for Situated Language Understanding | — | 0
1D Probabilistic Undersampling Pattern Optimization for MR Image Reconstruction | Code | 0
Active Model Estimation in Markov Decision Processes | — | 0
Learning-based Practical Smartphone Eavesdropping with Built-in Accelerometer | — | 0
KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge | — | 0
A Machine Consciousness architecture based on Deep Learning and Gaussian Processes | — | 0
Debate Dynamics for Human-comprehensible Fact-checking on Knowledge Graphs | — | 0
Using ConceptNet to Teach Common Sense to an Automated Theorem Prover | — | 0
A Logical Model for Supporting Social Commonsense Knowledge Acquisition | — | 0
Design and Implementation of Linked Planning Domain Definition Language | — | 0
That and There: Judging the Intent of Pointing Actions with Robotic Arms | Code | 0
Generating Interactive Worlds with Text | — | 0
CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning | Code | 0
Why Do Masked Neural Language Models Still Need Common Sense Knowledge? | — | 0
KARNA at COIN Shared Task 1: Bidirectional Encoder Representations from Transformers with relational knowledge for machine comprehension with common sense | — | 0
Commonsense about Human Senses: Labeled Data Collection Processes | — | 0
How Pre-trained Word Representations Capture Commonsense Physical Comparisons | — | 0
Pingan Smart Health and SJTU at COIN - Shared Task: utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks | — | 0
Page 14 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | — | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | — | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | — | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | — | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | — | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | — | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | — | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | — | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | — | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | — | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | — | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | — | Unverified
4 | StupidLLM | Accuracy | 91.03 | — | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | — | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | — | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | — | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | — | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | — | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | — | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | — | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | — | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | — | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | — | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | — | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | — | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | — | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | — | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | — | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | — | Unverified
3 | T5-11B | F1 | 94.1 | — | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | — | Unverified
5 | PaLM 540B (finetuned) | EM | 94 | — | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | — | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | — | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | — | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | — | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | — | Unverified
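For reference, the three metrics reported in the tables above (Accuracy, EM, F1) can be sketched as below. This is an illustrative implementation only: the token-level F1 follows the common SQuAD-style definition, and the exact normalization each benchmark applies may differ.

```python
from collections import Counter

def accuracy(preds, golds):
    # Accuracy: fraction of predictions that match the gold label exactly.
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def exact_match(pred: str, gold: str) -> float:
    # EM: 1.0 if the normalized answer strings are identical, else 0.0.
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    # Token-overlap F1 (SQuAD-style): harmonic mean of precision and recall
    # over the multiset of whitespace tokens.
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

EM is the stricter span metric (all-or-nothing), while F1 gives partial credit for overlapping tokens, which is why leaderboards that report both usually show F1 slightly above EM for the same model.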