SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed so that a model cannot succeed through pattern recognition alone. Instead, the model must draw on "common sense", i.e. everyday world knowledge, to make inferences.

Papers

Showing 601–650 of 939 papers

| Title | Status | Hype |
|---|---|---|
| TR at SemEval-2020 Task 4: Exploring the Limits of Language-model-based Common Sense Validation | | 0 |
| NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation | | 0 |
| TeamJUST at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Ensembling Techniques | | 0 |
| BLCU-NLP at SemEval-2020 Task 5: Data Augmentation for Efficient Counterfactual Detecting | | 0 |
| DEEPYANG at SemEval-2020 Task 4: Using the Hidden Layer State of BERT Model for Differentiating Common Sense | | 0 |
| Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction | | 0 |
| UoR at SemEval-2020 Task 4: Pre-trained Sentence Transformer Models for Commonsense Validation and Explanation | | 0 |
| SSN-NLP at SemEval-2020 Task 4: Text Classification and Generation on Common Sense Context Using Neural Networks | | 0 |
| Zero-Shot Calibration of Fisheye Cameras | | 0 |
| Tackling Domain-Specific Winograd Schemas with Knowledge-Based Reasoning and Machine Learning | Code | 0 |
| Zero-Shot Learning with Knowledge Enhanced Visual Semantic Embeddings | | 0 |
| Generating Natural Questions from Images for Multimodal Assistants | | 0 |
| iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | | 0 |
| An Analysis of Dataset Overlap on Winograd-Style Tasks | Code | 0 |
| Machine Reasoning: Technology, Dilemma and Future | | 0 |
| Thinking Like a Skeptic: Defeasible Inference in Natural Language | Code | 1 |
| Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks | | 0 |
| ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1 |
| Learning Physical Common Sense as Knowledge Graph Completion via BERT Data Augmentation and Constrained Tucker Factorization | | 0 |
| A Knowledge-Aware Sequence-to-Tree Network for Math Word Problem Solving | | 0 |
| RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | Code | 1 |
| Dutch Humor Detection by Generating Negative Examples | | 0 |
| Pre-training Text-to-Text Transformers for Concept-centric Common Sense | Code | 1 |
| GO FIGURE: A Meta Evaluation of Factuality in Summarization | | 0 |
| mT5: A massively multilingual pre-trained text-to-text transformer | Code | 1 |
| Thinking Fast and Slow in AI | | 0 |
| Do Language Embeddings Capture Scales? | | 0 |
| Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines | Code | 1 |
| Hierarchical Relational Inference | | 0 |
| LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | Code | 1 |
| Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game | | 0 |
| Zero-Shot Learning with Common Sense Knowledge Graphs | | 0 |
| Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation | Code | 1 |
| Finding Effective Security Strategies through Reinforcement Learning and Self-Play | Code | 1 |
| Multi-modal Cooking Workflow Construction for Food Recipes | | 0 |
| Commonsense Knowledge in Wikidata | | 0 |
| Learning Long-term Visual Dynamics with Region Proposal Interaction Networks | Code | 1 |
| Learning Object Placement by Inpainting for Compositional Data Augmentation | | 0 |
| Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild | Code | 1 |
| CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE | Code | 0 |
| Understanding Spatial Relations through Multiple Modalities | | 0 |
| Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack | | 0 |
| Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring Systems | Code | 1 |
| Robustness to Spurious Correlations via Human Annotations | Code | 0 |
| Explainable Inference on Sequential Data via Memory-Tracking | Code | 0 |
| LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model | | 0 |
| Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation | Code | 1 |
| Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder | Code | 1 |
| Machine Common Sense | | 0 |
| CUHK at SemEval-2020 Task 4: CommonSense Explanation, Reasoning and Prediction with Multi-task Learning | | 0 |
Page 13 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified |
| 10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified |
| 2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified |
| 3 | T5-11B | F1 | 94.1 | | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | | Unverified |
| 5 | PaLM 540B (finetuned) | EM | 94 | | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified |
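Every row above carries a Claimed score and an empty Verified column, so each entry currently reads Unverified. A minimal sketch of how such a status column could be derived once reproduction runs exist (the `Result` class, its field names, and the 0.5-point tolerance are illustrative assumptions, not the site's actual pipeline):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    """One leaderboard row: a claimed score and, if reproduced, a verified one."""
    model: str
    metric: str
    claimed: float
    verified: Optional[float] = None  # None until an independent run exists

    def status(self, tolerance: float = 0.5) -> str:
        # No reproduction yet: the row stays "Unverified".
        if self.verified is None:
            return "Unverified"
        # Allow small run-to-run variance; the tolerance value is an assumption.
        return "Verified" if abs(self.claimed - self.verified) <= tolerance else "Disputed"

rows = [
    Result("ST-MoE-32B 269B (fine-tuned)", "Accuracy", 96.1),
    Result("GPT-4 (5-shot)", "Accuracy", 87.5, verified=87.3),
]
for r in rows:
    print(f"{r.model}: claimed {r.claimed} {r.metric} -> {r.status()}")
```

A tolerance is needed because rerunning an evaluation rarely reproduces a score to the last decimal; how wide it should be depends on the benchmark's size and variance.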