SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition: rather than relying on surface statistics, the model must use "common sense" or world knowledge to make inferences.
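Many of the benchmarks listed below (e.g. SWAG-style datasets) frame this as multiple choice, scored by accuracy. As a minimal sketch, with a hypothetical example item not drawn from any listed benchmark:

```python
# Illustrative multiple-choice commonsense scoring (the item below is invented,
# not taken from any benchmark on this page).

def accuracy(predictions, gold):
    """Fraction of items where the predicted choice index matches the gold index."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# A SWAG-style item: pick the most plausible continuation of the context.
item = {
    "context": "She put the kettle on the stove and turned on the burner.",
    "choices": [
        "The water began to heat up.",     # plausible -> gold answer
        "The kettle flew out the window.",
        "The stove turned into a cat.",
    ],
    "gold": 0,
}

print(accuracy([0], [item["gold"]]))  # 1.0
```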

Papers

Showing 501–550 of 939 papers

| Title | Status | Hype |
|---|---|---|
| SERVAL: Synergy Learning between Vertical Models and LLMs towards Oracle-Level Zero-shot Medical Prediction | — | 0 |
| ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing | — | 0 |
| Shrinkage Initialization for Smooth Learning of Neural Networks | — | 0 |
| SocialNLP 2018 EmotionX Challenge Overview: Recognizing Emotions in Dialogues | — | 0 |
| Soft Label PU Learning | — | 0 |
| SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving | — | 0 |
| Some Extensions of Probabilistic Logic | — | 0 |
| Some Preliminary Steps Towards Metaverse Logic | — | 0 |
| Sort Story: Sorting Jumbled Images and Captions into Stories | — | 0 |
| Spatial Knowledge Graph-Guided Multimodal Synthesis | — | 0 |
| SSN-NLP at SemEval-2020 Task 4: Text Classification and Generation on Common Sense Context Using Neural Networks | — | 0 |
| Stacking with Auxiliary Features for Visual Question Answering | — | 0 |
| Stating the Obvious: Extracting Visual Common Sense Knowledge | — | 0 |
| Stay on topic with Classifier-Free Guidance | — | 0 |
| Story Comprehension for Predicting What Happens Next | — | 0 |
| Story Generation with Commonsense Knowledge Graphs and Axioms | — | 0 |
| Strongly-Typed Agents are Guaranteed to Interact Safely | — | 0 |
| Structured Event Reasoning with Large Language Models | — | 0 |
| Summarize the Past to Predict the Future: Natural Language Descriptions of Context Boost Multimodal Object Interaction Anticipation | — | 0 |
| Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation | — | 0 |
| SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference | — | 0 |
| Sweetening Ontologies cont'd | — | 0 |
| Symbol Grounding via Chaining of Morphisms | — | 0 |
| Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search | — | 0 |
| Systematic Error Analysis of the Stanford Question Answering Dataset | — | 0 |
| Tabular Data Imputation: Choose KNN over Deep Learning | — | 0 |
| Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models | — | 0 |
| TakeLab at SemEval-2017 Task 6: #RankingHumorIn4Pages | — | 0 |
| TakeLab at SemEval-2018 Task 12: Argument Reasoning Comprehension with Skip-Thought Vectors | — | 0 |
| TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs | — | 0 |
| TeamJUST at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Ensembling Techniques | — | 0 |
| Telecom Language Models: Must They Be Large? | — | 0 |
| Tell Codec What Worth Compressing: Semantically Disentangled Image Coding for Machine with LMMs | — | 0 |
| Tell Me Why: Incentivizing Explanations | — | 0 |
| Temporal Common Sense Acquisition with Minimal Supervision | — | 0 |
| TETRIS: Towards Exploring the Robustness of Interactive Segmentation | — | 0 |
| TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models | — | 0 |
| The Case for a Mixed-Initiative Collaborative Neuroevolution Approach | — | 0 |
| The Claude 3 Model Family: Opus, Sonnet, Haiku | — | 0 |
| The Collision of Quality and Technology with Reality | — | 0 |
| The Computational Principles of Learning Ability | — | 0 |
| The Embeddings World and Artificial General Intelligence | — | 0 |
| The ILASP system for Inductive Learning of Answer Set Programs | — | 0 |
| The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? | — | 0 |
| The Neural Metric Factorization for Computational Drug Repositioning | — | 0 |
| The Physics of Text: Ontological Realism in Information Extraction | — | 0 |
| The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights | — | 0 |
| The Quest for Visual Understanding: A Journey Through the Evolution of Visual Question Answering | — | 0 |
| The RatioLog Project: Rational Extensions of Logical Reasoning | — | 0 |
| The Rosetta Paradox: Domain-Specific Performance Inversions in Large Language Models | — | 0 |
Page 11 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | — | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | — | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | — | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | — | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | — | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | — | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | — | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | — | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | — | Unverified |
| 10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | — | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | — | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | — | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | — | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | — | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | — | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | — | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | — | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | — | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | — | Unverified |
| 2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | — | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | — | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | — | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | — | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | — | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | — | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | — | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | — | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | — | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | — | Unverified |
| 3 | T5-11B | F1 | 94.1 | — | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | — | Unverified |
| 5 | PaLM 540B (fine-tuned) | EM | 94 | — | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | — | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | — | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | — | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | — | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | — | Unverified |
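The last results table mixes two metrics: exact match (EM) and token-level F1. For short-answer tasks these are conventionally computed per answer string after light normalization. A minimal sketch of that convention (the normalization rules here are a common choice, not this site's exact evaluation code):

```python
import re
from collections import Counter

def normalize(s):
    """Lowercase, strip punctuation, and collapse whitespace (a common QA convention)."""
    s = re.sub(r"[^a-z0-9 ]", " ", s.lower())
    return " ".join(s.split())

def exact_match(pred, gold):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The cat.", "the cat"))  # 1.0
print(f1("the big cat", "the cat"))        # 0.8
```

EM is the stricter of the two, which is why a model can rank differently depending on which metric a leaderboard row reports.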