SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition: rather than matching surface regularities, the model should use "common sense", i.e. everyday world knowledge, to make inferences.
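Most of the benchmarks tracked below are multiple-choice and scored by accuracy. As a minimal sketch of how that scoring works (the items and the `model_predict` stand-in are hypothetical, not taken from any benchmark here):

```python
# Minimal sketch of multiple-choice commonsense evaluation.
# Items and model_predict are hypothetical placeholders, not a real benchmark.

items = [
    {"question": "The trophy didn't fit in the suitcase because it was too big. "
                 "What was too big?",
     "choices": ["the trophy", "the suitcase"],
     "answer": 0},
    {"question": "I poured water from the bottle into the cup until it was full. "
                 "What was full?",
     "choices": ["the bottle", "the cup"],
     "answer": 1},
]

def model_predict(question, choices):
    # Stand-in for a real model; always guesses the first choice.
    return 0

correct = sum(
    model_predict(it["question"], it["choices"]) == it["answer"]
    for it in items
)
accuracy = 100.0 * correct / len(items)
print(f"Accuracy: {accuracy:.1f}")  # leaderboards report this as a percentage
```

Resolving such items by pattern matching alone is hard by design: the pronoun "it" is disambiguated only by world knowledge (trophies go inside suitcases, cups get filled).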

Papers

Showing 251–300 of 939 papers

| Title | Status | Hype |
|---|---|---|
| Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations | Code | 0 |
| Common Sense Bias in Semantic Role Labeling | Code | 0 |
| KnowZRel: Common Sense Knowledge-based Zero-Shot Relationship Retrieval for Generalised Scene Graph Generation | Code | 0 |
| The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants | Code | 0 |
| Asking the Right Question: Inferring Advice-Seeking Intentions from Personal Narratives | Code | 0 |
| CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning | Code | 0 |
| “It doesn’t look good for a date”: Transforming Critiques into Preferences for Conversational Recommendation Systems | Code | 0 |
| KC-ISA: An Implicit Sentiment Analysis Model Combining Knowledge Enhancement and Context Features | Code | 0 |
| iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers | Code | 0 |
| Is "My Favorite New Movie" My Favorite Movie? Probing the Understanding of Recursive Noun Phrases | Code | 0 |
| Collaborative Synthesis of Patient Records through Multi-Visit Health State Inference | Code | 0 |
| A Simple Method for Commonsense Reasoning | Code | 0 |
| "It doesn't look good for a date": Transforming Critiques into Preferences for Conversational Recommendation Systems | Code | 0 |
| CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense | Code | 0 |
| Incorporating Chinese Characters of Words for Lexical Sememe Prediction | Code | 0 |
| CLDR: Contrastive Learning Drug Response Models from Natural Language Supervision | Code | 0 |
| Inferring spatial relations from textual descriptions of images | Code | 0 |
| Improved Word Representation Learning with Sememes | Code | 0 |
| Improving Neural Story Generation by Targeted Common Sense Grounding | Code | 0 |
| CITE: A Corpus of Image-Text Discourse Relations | Code | 0 |
| A Language Agent for Autonomous Driving | Code | 0 |
| Improving Sample Efficiency of Reinforcement Learning with Background Knowledge from Large Language Models | Code | 0 |
| Information Gain Is Not All You Need | Code | 0 |
| Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving | Code | 0 |
| Identifying relevant common sense information in knowledge graphs | Code | 0 |
| Human-AI collectives produce the most accurate differential diagnoses | Code | 0 |
| HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales | Code | 0 |
| Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation | Code | 0 |
| GIST at SemEval-2018 Task 12: A network transferring inference knowledge to Argument Reasoning Comprehension task | Code | 0 |
| Empirical Analysis of Foundational Distinctions in Linked Open Data | Code | 0 |
| Embodied Image Quality Assessment for Robotic Intelligence | Code | 0 |
| CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense | Code | 0 |
| Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference | Code | 0 |
| GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents | Code | 0 |
| Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates | Code | 0 |
| Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation | Code | 0 |
| GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity | Code | 0 |
| Elaboration-Generating Commonsense Question Answering at Scale | Code | 0 |
| Garbage in, garbage out: Zero-shot detection of crime using Large Language Models | Code | 0 |
| Editing Common Sense in Transformers | Code | 0 |
| AILS-NTUA at SemEval-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles | Code | 0 |
| Frame- and Entity-Based Knowledge for Common-Sense Argumentative Reasoning | Code | 0 |
| DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Code | 0 |
| Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models | Code | 0 |
| FLIP Reasoning Challenge | Code | 0 |
| Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering | Code | 0 |
| Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models | Code | 0 |
| Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | Code | 0 |
| Extracting Commonsense Properties from Embeddings with Limited Human Guidance | Code | 0 |
| Don't Fight Hallucinations, Use Them: Estimating Image Realism using NLI over Atomic Facts | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified |
| 10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified |
| 2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified |
| 3 | T5-11B | F1 | 94.1 | | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | | Unverified |
| 5 | PaLM 540B (finetuned) | EM | 94 | | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified |
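The last table reports exact match (EM) and F1 rather than accuracy, the usual metrics for reading-comprehension-style benchmarks where the answer is a text span. A minimal sketch of how these are commonly computed, using SQuAD-style token overlap with simplified text normalization (real evaluation scripts also strip punctuation and articles):

```python
from collections import Counter

def normalize(text):
    # Simplified normalization: lowercase and split on whitespace.
    return text.lower().split()

def exact_match(prediction, gold):
    # 1.0 if the normalized prediction equals the normalized gold answer.
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    # Harmonic mean of token-level precision and recall.
    pred_tokens, gold_tokens = normalize(prediction), normalize(gold)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Cup", "the cup"))              # 1.0
print(round(f1_score("the red cup", "the cup"), 3))   # 0.8
```

EM is strict (all-or-nothing per example), while F1 gives partial credit for overlapping tokens, which is why F1 scores in the table can exceed EM scores for the same model.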