SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.

Papers

Showing 1–10 of 939 papers

| Title | Status | Hype |
| --- | --- | --- |
| Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes | — | 0 |
| LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0 |
| CheckManual: A New Challenge and Benchmark for Manual-based Appliance Manipulation | — | 0 |
| EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits | — | 0 |
| Prime the search: Using large language models for guiding geometric task and motion planning by warm-starting tree search | Code | 0 |
| AmbiK: Dataset of Ambiguous Tasks in Kitchen Environment | Code | 0 |
| ATLAS: Learning to Optimally Memorize the Context at Test Time | — | 0 |
| Spatial Knowledge Graph-Guided Multimodal Synthesis | — | 0 |
| CaseEdit: Enhancing Localized Commonsense Reasoning via Null-Space Constrained Knowledge Editing in Small Parameter Language Models | — | 0 |
| Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 78.8 | — | Unverified |
| 2 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 77.6 | — | Unverified |
| 3 | PaLM 540B (few-shot, k=3) | Accuracy | 60.8 | — | Unverified |
| 4 | Chinchilla-70B (few-shot, k=5) | Accuracy | 54.7 | — | Unverified |
| 5 | Gopher-280B (few-shot, k=5) | Accuracy | 45.5 | — | Unverified |
| 6 | GPT-NeoX 20B (few-shot, k=3) | Accuracy | 40.8 | — | Unverified |
| 7 | OPT 66B (few-shot, k=3) | Accuracy | 40.4 | — | Unverified |
| 8 | BLOOM 176B (few-shot, k=3) | Accuracy | 40.4 | — | Unverified |
| 9 | Bloomberg GPT 50B (few-shot, k=3) | Accuracy | 34.0 | — | Unverified |