SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed so that pattern recognition alone is insufficient: the model must draw on "common sense", i.e. everyday world knowledge, to make correct inferences.
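Most of the benchmarks tracked on this page are multiple-choice and scored by simple accuracy. The sketch below shows the scoring convention on two invented items (the questions and the `accuracy` helper are illustrative, not taken from any specific benchmark):

```python
# Minimal sketch of how a multiple-choice commonsense benchmark is scored.
# The items are invented for illustration; real benchmarks ship thousands.

def accuracy(predictions, gold):
    """Fraction of items where the predicted choice index matches the gold label."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical items: each has a context, candidate answers, and a gold index.
items = [
    {
        "context": "The trophy doesn't fit in the suitcase because it is too large.",
        "question": "What is too large?",
        "choices": ["the trophy", "the suitcase"],
        "gold": 0,  # requires world knowledge about objects and containers
    },
    {
        "context": "Ann poured water on the campfire.",
        "question": "What happened next?",
        "choices": ["The fire went out.", "The fire grew."],
        "gold": 0,  # requires knowledge of everyday cause and effect
    },
]

# Stand-in for a model's predicted choice indices.
predictions = [0, 0]
gold = [item["gold"] for item in items]
print(accuracy(predictions, gold))  # 1.0
```

The "Accuracy" figures in the leaderboards below are this quantity reported as a percentage.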

Papers

Showing 826–850 of 939 papers

Title | Status | Hype
An Aposteriorical Clusterability Criterion for k-Means++ and Simplicity of Clustering | | 0
Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments | | 0
A Service-Oriented Architecture for Assisting the Authoring of Semantic Crowd Maps | | 0
IIT (BHU): System Description for LSDSem'17 Shared Task | | 0
Aspect Extraction from Product Reviews Using Category Hierarchy Information | | 0
Behind the Scenes of an Evolving Event Cloze Test | | 0
Probabilistic Inference for Cold Start Knowledge Base Population with Prior World Knowledge | | 0
TTCS^: a Vectorial Resource for Computing Conceptual Similarity | | 0
Cooperating with Machines | | 0
Symbol Grounding via Chaining of Morphisms | | 0
Strongly-Typed Agents are Guaranteed to Interact Safely | | 0
Modeling Semantic Expectation: Using Script Knowledge for Referent Prediction | | 0
Investigating the Application of Common-Sense Knowledge-Base for Identifying Term Obfuscation in Adversarial Communication | | 0
Minimally Naturalistic Artificial Intelligence | | 0
Handling Multiword Expressions in Causality Estimation | | 0
An Evaluation of PredPatt and Open IE via Stage 1 Semantic Role Labeling | Code | 0
Quantifier Scoping and Semantic Preferences | | 0
Correcting Contradictions | Code | 0
Ambiguss, a game for building a Sense Annotated Corpus for French | | 0
Large-Scale Acquisition of Commonsense Knowledge via a Quiz Game on a Dialogue System | | 0
Incremental Fine-grained Information Status Classification Using Attention-based LSTMs | | 0
Automatic Evaluation of Commonsense Knowledge for Refining Japanese ConceptNet | | 0
Learning from Maps: Visual Common Sense for Autonomous Driving | | 0
Ordinal Common-sense Inference | | 0
Resolving Language and Vision Ambiguities Together: Joint Segmentation & Prepositional Attachment Resolution in Captioned Scenes | | 0
Page 34 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified
10 | LLaMA3 8B + MoSLoRA | Accuracy | 85.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified
4 | StupidLLM | Accuracy | 91.03 | | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified
10 | PaLM 540B (Self Improvement, Standard Prompting) | Accuracy | 87.2 | | Unverified
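Several PaLM 540B entries in the table above use self-consistency: instead of taking a single chain-of-thought completion, the model is sampled several times and the final answers are majority-voted. A minimal sketch of that voting step, with a stubbed sampler standing in for a real model call:

```python
from collections import Counter
import random

def self_consistency_answer(sample_fn, prompt, n_samples=5, seed=0):
    """Sample n reasoning paths and majority-vote their final answers.

    sample_fn(prompt, rng) -> the final answer string extracted from one
    sampled chain-of-thought; here it is a stand-in for a real model call.
    """
    rng = random.Random(seed)
    answers = [sample_fn(prompt, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical noisy sampler: usually answers "9", occasionally "6".
def fake_sampler(prompt, rng):
    return rng.choices(["9", "6"], weights=[0.8, 0.2])[0]

print(self_consistency_answer(fake_sampler, "What is 3 * 3?"))
```

The vote makes the aggregate answer more reliable than any single sampled path, which is why the self-consistency rows score above their standard-prompting counterparts.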
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified
2 | LLaMA 3 8B + MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified
3 | T5-11B | F1 | 94.1 | | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified
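Unlike the accuracy leaderboards above, the last table mixes EM (exact match) and F1, the standard metrics for span-prediction tasks. A sketch of how they are conventionally computed (SQuAD-style normalization; the exact normalization rules vary by benchmark, so treat this as an approximation):

```python
import string
import re
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))                 # 1.0
print(round(f1_score("Eiffel Tower in Paris", "the Eiffel Tower"), 3))  # 0.667
```

EM is the stricter of the two, which is why a model can post a respectable F1 while its EM lags behind.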