SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed to require a model to go beyond pattern recognition: instead, the model should draw on "common sense", or world knowledge, to make inferences.
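Most benchmarks in this category are multiple-choice and report accuracy, as in the leaderboards below. The item and helper here are a minimal illustrative sketch (the example question and field names are hypothetical, not taken from any specific benchmark):

```python
# Sketch: scoring a multiple-choice commonsense item by accuracy.
# The item below is hypothetical; real benchmarks define their own schemas.

def accuracy(predictions, gold_labels):
    """Fraction of items where the predicted choice index matches the gold label."""
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

item = {
    "question": "Tom put the ice cream back in the freezer because",
    "choices": ["it was melting", "it was too cold", "he was full"],
    "gold": 0,  # requires world knowledge: ice cream melts at room temperature
}

predictions = [0]  # the model picked choice 0
print(accuracy(predictions, [item["gold"]]))  # -> 1.0
```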

Papers

Showing 901–939 of 939 papers

Title | Status | Hype
CLDR: Contrastive Learning Drug Response Models from Natural Language Supervision | Code | 0
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs | Code | 0
SocialIQA: Commonsense Reasoning about Social Interactions | Code | 0
The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants | Code | 0
GIST at SemEval-2018 Task 12: A network transferring inference knowledge to Argument Reasoning Comprehension task | Code | 0
1D Probabilistic Undersampling Pattern Optimization for MR Image Reconstruction | Code | 0
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents | Code | 0
The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation | Code | 0
Deep contextualized word representations for detecting sarcasm and irony | Code | 0
Recognition of Sarcasms in Tweets Based on Concept Level Sentiment Analysis and Supervised Learning Approaches | Code | 0
Garbage in, garbage out: Zero-shot detection of crime using Large Language Models | Code | 0
Declarative Reasoning on Explanations Using Constraint Logic Programming | Code | 0
Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network | Code | 0
Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph | Code | 0
DCQA: Document-Level Chart Question Answering towards Complex Reasoning and Common-Sense Understanding | Code | 0
Muppet: Massive Multi-task Representations with Pre-Finetuning | Code | 0
Frame- and Entity-Based Knowledge for Common-Sense Argumentative Reasoning | Code | 0
CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE | Code | 0
My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism | Code | 0
CrossCat: A Fully Bayesian Nonparametric Method for Analyzing Heterogeneous, High Dimensional Data | Code | 0
Learning to Predict Concept Ordering for Common Sense Generation | Code | 0
A Content-Based Novelty Measure for Scholarly Publications: A Proof of Concept | Code | 0
A surprisal oracle for when every layer counts | Code | 0
AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations | Code | 0
Visual Coreference Resolution in Visual Dialog using Neural Module Networks | Code | 0
FLIP Reasoning Challenge | Code | 0
Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions | Code | 0
Correcting Contradictions | Code | 0
Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Code | 0
The Knowref Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution | Code | 0
CITE: A Corpus of Image-Text Discourse Relations | Code | 0
From Recognition to Prediction: Leveraging Sequence Reasoning for Action Anticipation | Code | 0
The Interplay between Lexical Resources and Natural Language Processing | Code | 0
PaCo: Preconditions Attributed to Commonsense Knowledge | Code | 0
Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models | Code | 0
Zero-Shot Information Extraction to Enhance a Knowledge Graph Describing Silk Textiles | Code | 0
Visual Question Answering using Deep Learning: A Survey and Performance Analysis | Code | 0
CORECODE: A Common Sense Annotated Dialogue Dataset with Benchmark Tasks for Chinese Large Language Models | Code | 0
How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG | Code | 0
Page 19 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | — | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | — | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | — | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | — | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | — | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | — | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | — | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | — | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | — | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | — | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | — | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | — | Unverified
4 | StupidLLM | Accuracy | 91.03 | — | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | — | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | — | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | — | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | — | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | — | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | — | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | — | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | — | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | — | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | — | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | — | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | — | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | — | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | — | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | — | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | — | Unverified
3 | T5-11B | F1 | 94.1 | — | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | — | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | — | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | — | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | — | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | — | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | — | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | — | Unverified
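The last table reports EM (exact match) and F1 rather than accuracy, which is typical of span-extraction QA benchmarks. A minimal sketch of how these metrics are commonly computed (SQuAD-style normalized exact match and token-overlap F1; this is an assumption about the convention, not this site's exact scoring script):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the answers are identical after light normalization, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between predicted and gold answer strings."""
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the red car", "The red car"))  # -> 1.0
print(token_f1("a red car", "the red car"))       # ≈ 0.667
```

EM is the stricter of the two: a prediction that differs from the gold answer by a single token scores 0 on EM but can still earn partial credit on F1, which is why the two columns are not directly comparable.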