SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
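As a minimal sketch of what such a task looks like, here is a made-up multiple-choice item (not from any benchmark on this page) together with an illustrative accuracy scorer; answering it correctly requires world knowledge ("ice cream melts in heat"), not surface pattern matching:

```python
# Hypothetical commonsense multiple-choice item; the correct choice
# follows from world knowledge rather than lexical cues.
item = {
    "question": "If you leave ice cream in the sun, what happens?",
    "choices": ["It freezes harder", "It melts", "It turns blue"],
    "answer": 1,  # index of the correct choice
}

def score(predictions, items):
    """Accuracy: fraction of items where the predicted choice index matches the label."""
    correct = sum(p == it["answer"] for p, it in zip(predictions, items))
    return correct / len(items)

# A model that picks choice 1 for this single item scores 1.0.
print(score([1], [item]))  # -> 1.0
```

The `score` helper mirrors the Accuracy metric reported in the benchmark tables below, but the item and function names are illustrative only.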

Papers

Showing 451–500 of 939 papers

| Title | Status | Hype |
|---|---|---|
| Towards the Detection of a Semantic Gap in the Chain of Commonsense Knowledge Triples | | 0 |
| Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1 |
| An Informational Space Based Semantic Analysis for Scientific Texts | | 0 |
| Leveraging QA Datasets to Improve Generative Data Augmentation | Code | 0 |
| Large Language Models are Zero-Shot Reasoners | Code | 2 |
| A Survey on Semantics in Automated Data Science | | 0 |
| UL2: Unifying Language Learning Paradigms | Code | 1 |
| Irony Detection for Dutch: a Venture into the Implicit | | 0 |
| Trans-KBLSTM: An External Knowledge Enhanced Transformer BiLSTM Model for Tabular Reasoning | | 0 |
| Identifying relevant common sense information in knowledge graphs | Code | 0 |
| Detecting COVID-19 Conspiracy Theories with Transformers and TF-IDF | | 0 |
| On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations | | 0 |
| A very preliminary analysis of DALL-E 2 | | 0 |
| PaLM: Scaling Language Modeling with Pathways | Code | 2 |
| Training Compute-Optimal Large Language Models | Code | 6 |
| Learning to Detect Mobile Objects from LiDAR Scans Without Labels | Code | 1 |
| STaR: Bootstrapping Reasoning With Reasoning | Code | 2 |
| AbductionRules: Training Transformers to Explain Unexpected Inputs | Code | 1 |
| PACS: A Dataset for Physical Audiovisual CommonSense Reasoning | Code | 1 |
| SimAN: Exploring Self-Supervised Representation Learning of Scene Text via Similarity-Aware Normalization | Code | 1 |
| Deep Unsupervised Hashing with Latent Semantic Components | | 0 |
| K-VQG: Knowledge-aware Visual Question Generation for Common-sense Acquisition | | 0 |
| Efficient Language Modeling with Sparse all-MLP | | 0 |
| Resolving label uncertainty with implicit posterior models | Code | 1 |
| Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference | Code | 0 |
| ST-MoE: Designing Stable and Transferable Sparse Expert Models | Code | 3 |
| Integration of knowledge and data in machine learning | | 0 |
| Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP models | | 0 |
| Neural NID Rules | | 0 |
| NEWSKVQA: Knowledge-Aware News Video Question Answering | | 0 |
| A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility | Code | 1 |
| Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | Code | 6 |
| An Application of Pseudo-Log-Likelihoods to Natural Language Scoring | | 0 |
| Evaluating Machine Common Sense via Cloze Testing | | 0 |
| COPA-SSE: Semi-structured Explanations for Commonsense Reasoning | Code | 0 |
| Combining Fast and Slow Thinking for Human-like and Efficient Navigation in Constrained Environments | | 0 |
| Unsupervised Common Sense Relation Extraction | | 0 |
| CommonsenseQA 2.0: Exposing the Limits of AI through Gamification | | 0 |
| Towards Automated Error Analysis: Learning to Characterize Errors | | 0 |
| AI and the Sense of Self | | 0 |
| Building Human-like Communicative Intelligence: A Grounded Perspective | | 0 |
| Toward a New Science of Common Sense | | 0 |
| Reflash Dropout in Image Super-Resolution | Code | 1 |
| Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1 |
| The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus | Code | 1 |
| Is "My Favorite New Movie" My Favorite Movie? Probing the Understanding of Recursive Noun Phrases | Code | 0 |
| GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | | 0 |
| Native Chinese Reader: A Dataset Towards Native-Level Chinese Machine Reading Comprehension | | 0 |
| Improving and Diagnosing Knowledge-Based Visual Question Answering via Entity Enhanced Knowledge Injection | | 0 |
Page 10 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified |
| 2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified |
| 3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified |
| 4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified |
| 5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified |
| 6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified |
| 7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified |
| 8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified |
| 9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified |
| 10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified |
| 2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified |
| 3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified |
| 4 | StupidLLM | Accuracy | 91.03 | | Unverified |
| 5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified |
| 6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified |
| 7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified |
| 8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified |
| 9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified |
| 10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified |
| 2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified |
| 3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified |
| 4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified |
| 5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified |
| 6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified |
| 7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified |
| 8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified |
| 9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified |
| 10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified |
| 2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified |
| 3 | T5-11B | F1 | 94.1 | | Unverified |
| 4 | DeBERTa-1.5B | EM | 94.1 | | Unverified |
| 5 | PaLM 540B (fine-tuned) | EM | 94 | | Unverified |
| 6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified |
| 7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified |
| 9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified |
| 10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified |
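The last table reports EM (exact match) and F1 rather than accuracy. As a hedged sketch of how these are commonly computed in SQuAD-style span evaluation (function names are illustrative, and real harnesses also strip articles and punctuation during normalization):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings match exactly, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the red car", "The red car"))     # -> 1.0
print(round(token_f1("the red car", "red car"), 3))  # -> 0.8
```

EM is stricter than F1, which is why the same system can rank differently under the two metrics; mixing them in one leaderboard, as above, makes rows only loosely comparable.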