SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed to require more than pattern recognition: to solve them, a model must apply "common sense", that is, everyday world knowledge, to make inferences that are not stated explicitly in the input.
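Most of the benchmarks tabulated below score models by accuracy on multiple-choice items. A minimal sketch of that scoring loop, using hypothetical items and a stub `predict()` in place of a real model (nothing here comes from an actual dataset):

```python
# Minimal sketch of how accuracy is computed for a multiple-choice
# commonsense benchmark. The items and predict() stub are hypothetical,
# not taken from any dataset listed on this page.

items = [
    # Each item: a question, candidate answers, index of the correct one.
    {"q": "I poured water into a paper bag. The bag became",
     "options": ["wet", "frozen", "musical"], "gold": 0},
    {"q": "To see in a dark room, you should first",
     "options": ["close your eyes", "turn on a light", "whisper"], "gold": 1},
]

def predict(item):
    # Stand-in for a real model call; always picks the first option.
    return 0

correct = sum(predict(it) == it["gold"] for it in items)
accuracy = 100.0 * correct / len(items)
print(f"Accuracy: {accuracy:.1f}")  # -> Accuracy: 50.0
```

Real evaluations differ mainly in how `predict()` is implemented (e.g. ranking options by model likelihood, or prompting with few-shot exemplars).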

Papers

Showing 901–939 of 939 papers

| Title | Status | Hype |
| --- | --- | --- |
| NaturalLI: Natural Logic Inference for Common Sense Reasoning | — | 0 |
| The Case for a Mixed-Initiative Collaborative Neuroevolution Approach | — | 0 |
| A Rule-Based Approach to Aspect Extraction from Product Reviews | — | 0 |
| OPI: Semeval-2014 Task 3 System Description | — | 0 |
| Non-Monotonic Reasoning and Story Comprehension | — | 0 |
| Knowledge Acquisition Strategies for Goal-Oriented Dialog Systems | — | 0 |
| Interactive Learning of Spatial Knowledge for Text to 3D Scene Generation | — | 0 |
| Context-based Natural Language Processing for GIS-based Vague Region Visualization | — | 0 |
| Semantic Parsing for Text to 3D Scene Generation | — | 0 |
| Inducing Neural Models of Script Knowledge | — | 0 |
| Open Information Extraction for Spanish Language based on Syntactic Constraints | — | 0 |
| Informed Haar-like Features Improve Pedestrian Detection | — | 0 |
| Transliteration and alignment of parallel texts from Cyrillic to Latin | — | 0 |
| Automatic semantic relation extraction from Portuguese texts | — | 0 |
| A Large Scale Database of Strongly-related Events in Japanese | — | 0 |
| A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge | — | 0 |
| Wikipedia-based Semantic Interpretation for Natural Language Processing | — | 0 |
| Using Web Co-occurrence Statistics for Improving Image Categorization | — | 0 |
| Learning Semantic Script Knowledge with Event Embeddings | — | 0 |
| Event Sequence Model for Semantic Analysis of Time and Location in Dialogue System | — | 0 |
| Sweetening Ontologies cont'd | — | 0 |
| A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge | — | 0 |
| Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization | — | 0 |
| Features of Verb Complements in Co-composition: A case study of Chinese baking verb using Weibo corpus | — | 0 |
| Transforming the Data Transcription and Analysis Tool Metadata and Labels into a Linguistic Linked Open Data Cloud Resource | — | 0 |
| Toward a Better Understanding of Causality between Verbal Events: Extraction and Analysis of the Causal Power of Verb-Verb Associations | — | 0 |
| Philosophers are Mortal: Inferring the Truth of Unseen Facts | — | 0 |
| Using Conceptual Class Attributes to Characterize Social Media Users | — | 0 |
| Is a 204 cm Man Tall or Small? Acquisition of Numerical Common Sense from the Web | — | 0 |
| Probabilistic and Non-Monotonic Inference | — | 0 |
| Some Extensions of Probabilistic Logic | — | 0 |
| Towards common-sense reasoning via conditional simulation: legacies of Turing in Artificial Intelligence | — | 0 |
| Markov Chains for Robust Graph-Based Commonsense Information Extraction | — | 0 |
| Sentiment Analysis Using a Novel Human Computation Game | — | 0 |
| Learning to "Read Between the Lines" using Bayesian Logic Programs | — | 0 |
| Towards Distributed MCMC Inference in Probabilistic Knowledge Bases | — | 0 |
| Representing General Relational Knowledge in ConceptNet 5 | — | 0 |
| A Tool for Extracting Conversational Implicatures | — | 0 |
| Affective Common Sense Knowledge Acquisition for Sentiment Analysis | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | — | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | — | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | — | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | — | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | — | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | — | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | — | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | — | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | — | Unverified |
| 10 | LLaMA3 8B + MoSLoRA | Accuracy | 85.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | — | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | — | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | — | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | — | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | — | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | — | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | — | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | — | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | — | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard Prompting) | Accuracy | 87.2 | — | Unverified |
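Several PaLM 540B entries above use self-consistency (SC): sample several chain-of-thought completions for the same question and keep the most common final answer. A hedged sketch of the voting step, where `sample_answer()` is a hypothetical stand-in that replays canned answers instead of calling a model:

```python
from collections import Counter

# Sketch of self-consistency voting over sampled chain-of-thought
# answers. sample_answer() is a hypothetical stub, not a real model.

def sample_answer(question, i):
    canned = ["9", "9", "6", "9", "9"]  # pretend sampled final answers
    return canned[i % len(canned)]

def self_consistency(question, n_samples=5):
    # Tally the final answers across samples and return the plurality.
    votes = Counter(sample_answer(question, i) for i in range(n_samples))
    answer, _count = votes.most_common(1)[0]
    return answer

print(self_consistency("3 bags of 3 apples: how many apples?"))  # -> 9
```

In a real pipeline each sample is an independent high-temperature generation, and only the extracted final answer enters the vote.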
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | — | Unverified |
| 2 | LLaMA 3 8B + MoSLoRA (fine-tuned) | Accuracy | 90.5 | — | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | — | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | — | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | — | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | — | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | — | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | — | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | — | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | — | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | — | Unverified |
| 3 | T5-11B | F1 | 94.1 | — | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | — | Unverified |
| 5 | PaLM 540B (fine-tuned) | EM | 94 | — | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | — | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | — | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | — | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | — | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | — | Unverified |
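The last table reports EM (exact match) and F1 rather than multiple-choice accuracy, which is typical for benchmarks with free-text answers. A sketch of the standard SQuAD-style definitions (the exact normalization rules the leaderboard applies are an assumption here):

```python
import re
import string
from collections import Counter

def normalize(s):
    # Lowercase, strip punctuation and articles, collapse whitespace —
    # the usual SQuAD-style normalization; the leaderboard's exact
    # rules are an assumption.
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred, gold):
    # EM: 1.0 iff the normalized strings are identical.
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    # Token-level F1: harmonic mean of precision and recall over
    # the bag-of-tokens overlap between prediction and gold answer.
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))           # -> 1.0
print(round(f1("Eiffel Tower in Paris", "the Eiffel Tower"), 2))  # -> 0.67
```

F1 gives partial credit for overlapping tokens, which is why an F1 score and an EM score on the same benchmark are not directly comparable.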