SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require more than pattern recognition: the model must use "common sense", i.e. general world knowledge, to make inferences that are not stated explicitly in the input.
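
Most of the leaderboards at the bottom of this page report accuracy on multiple-choice benchmarks of this kind. A common evaluation recipe is to score every candidate answer with the model (for example, by the length-normalized log-likelihood of the choice given the question) and count an example as correct when the gold choice scores highest. The sketch below shows that loop; `overlap_score` and the example item are made-up stand-ins for a real model score and a real benchmark instance.

```python
from typing import Callable, Dict, List

def overlap_score(question: str, choice: str) -> float:
    """Toy stand-in for a model-based score (e.g., the length-normalized
    log-likelihood of `choice` given `question`); here, bag-of-words overlap."""
    question_words = set(question.lower().split())
    choice_words = choice.lower().split()
    return sum(w in question_words for w in choice_words) / max(len(choice_words), 1)

def accuracy(examples: List[Dict], score: Callable[[str, str], float]) -> float:
    """Fraction of examples whose highest-scoring choice is the gold answer."""
    correct = 0
    for ex in examples:
        scores = [score(ex["question"], choice) for choice in ex["choices"]]
        correct += scores.index(max(scores)) == ex["label"]
    return correct / len(examples)

# Made-up item in the style of physical-commonsense multiple-choice benchmarks.
examples = [{
    "question": "To keep soup warm for longer, you should pour it",
    "choices": ["into an insulated thermos", "onto a wide flat plate"],
    "label": 0,
}]
print(accuracy(examples, overlap_score))  # 1.0 on this toy item
```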

Papers

Showing papers 701–750 of 939 (page 15 of 19)

Title | Status | Hype
Multimodal Frame Identification with Multilingual Evaluation | – | 0
Multimodal Sentiment Analysis with Common-sense Modulation | – | 0
Multi Task Inverse Reinforcement Learning for Common Sense Reward | – | 0
Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction | – | 0
Multi-turn Response Selection with Commonsense-enhanced Language Models | – | 0
Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction | – | 0
Native Chinese Reader: A Dataset Towards Native-Level Chinese Machine Reading Comprehension | – | 0
NaturalLI: Natural Logic Inference for Common Sense Reasoning | – | 0
Navigating Semantic Relations: Challenges for Language Models in Abstract Common-Sense Reasoning | – | 0
Neural NID Rules | – | 0
Neural Task Planning with And-Or Graph Representations | – | 0
Neuro-Symbolic Learning: Principles and Applications in Ophthalmology | – | 0
NEWSKVQA: Knowledge-Aware News Video Question Answering | – | 0
NEWTON: Are Large Language Models Capable of Physical Reasoning? | – | 0
NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation | – | 0
Non-Monotonic Reasoning and Story Comprehension | – | 0
Not-so fine-tuning: Measures of Common Sense for Language Models | – | 0
NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models | – | 0
Offline Inverse Constrained Reinforcement Learning for Safe-Critical Decision Making in Healthcare | – | 0
Online Knowledge Integration for 3D Semantic Mapping: A Survey | – | 0
Online learnability of Statistical Relational Learning in anomaly detection | – | 0
On Reality and the Limits of Language Data: Aligning LLMs with Human Norms | – | 0
On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations | – | 0
On the Multiple Roles of Ontologies in Explainable AI | – | 0
On Utilizing Relationships for Transferable Few-Shot Fine-Grained Object Detection | – | 0
Open Information Extraction for Spanish Language based on Syntactic Constraints | – | 0
OPI: Semeval-2014 Task 3 System Description | – | 0
Orca 2: Teaching Small Language Models How to Reason | – | 0
Ordinal Common-sense Inference | – | 0
OSoRA: Output-Dimension and Singular-Value Initialized Low-Rank Adaptation | – | 0
PaLM 2 Technical Report | – | 0
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack | – | 0
PASTA: A Dataset for Modeling Participant States in Narratives | – | 0
Path-Consistency: Prefix Enhancement for Efficient Inference in LLM | – | 0
Penetrative AI: Making LLMs Comprehend the Physical World | – | 0
Perplexity from PLM Is Unreliable for Evaluating Text Quality | – | 0
Personalized Causal Graph Reasoning for LLMs: A Case Study on Dietary Recommendations | – | 0
Philosophers are Mortal: Inferring the Truth of Unseen Facts | – | 0
PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding | – | 0
Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge | – | 0
Pingan Smart Health and SJTU at COIN - Shared Task: utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks | – | 0
Planning Automated Driving with Accident Experience Referencing and Common-sense Inferencing | – | 0
CAPE: Corrective Actions from Precondition Errors using Large Language Models | – | 0
Plant in Cupboard, Orange on Rably, Inat Aphone. Benchmarking Incremental Learning of Situation and Language Model using a Text-Simulated Situated Environment | – | 0
Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation | – | 0
Playing Text-Based Games with Common Sense | – | 0
PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning | – | 0
Potential and Limits of Using Post-edits as Reference Translations for MT Evaluation | – | 0
Predicting Numerals in Natural Language Text Using a Language Model Considering the Quantitative Aspects of Numerals | – | 0

Benchmark Results

All scores below are as claimed in the source papers; none has yet been independently verified, so each table's Verified column is empty (shown as –).

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | – | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | – | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | – | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | – | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | – | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | – | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | – | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | – | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | – | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | – | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | – | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | – | Unverified
4 | StupidLLM | Accuracy | 91.03 | – | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | – | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | – | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | – | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | – | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | – | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | – | Unverified

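Several PaLM 540B entries above combine chain-of-thought (CoT) prompting with self-consistency (SC): rather than decoding a single reasoning chain, the model samples many chains and the final answers are majority-voted. A minimal sketch of that voting step, assuming a hypothetical `sample_answer` callable that runs one sampled chain-of-thought completion and extracts its final answer:

```python
import random
from collections import Counter
from typing import Callable

def self_consistency(prompt: str,
                     sample_answer: Callable[[str], str],
                     num_samples: int = 40) -> str:
    """Majority vote over final answers from independently sampled
    chain-of-thought completions."""
    votes = Counter(sample_answer(prompt) for _ in range(num_samples))
    return votes.most_common(1)[0][0]

# Toy stand-in: a "model" that answers correctly 70% of the time.
def noisy_answer(prompt: str) -> str:
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

print(self_consistency("What is 6 * 7? Think step by step.", noisy_answer))
# Prints "42" with near-certainty, even though single samples err 30% of the time.
```
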
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | – | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | – | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | – | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | – | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | – | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | – | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | – | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | – | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | – | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | – | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | – | Unverified
3 | T5-11B | F1 | 94.1 | – | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | – | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | – | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | – | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | – | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | – | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | – | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | – | Unverified
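
The last table mixes EM (exact match) and F1, the standard metrics for span-style reading comprehension: EM scores 1 only when the normalized prediction equals the normalized reference, while F1 gives partial credit for token overlap. A minimal sketch following the usual SQuAD-style normalization (lowercasing, stripping punctuation and articles, collapsing whitespace):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))              # 1.0 after normalization
print(round(f1_score("Eiffel Tower in Paris", "Eiffel Tower"), 2))  # 0.67
```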