
Common Sense Reasoning

Common sense reasoning tasks are intended to require more than pattern recognition: to answer them, a model must draw on "common sense", i.e. general world knowledge, to make inferences that are not stated explicitly in the input.
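
Most of the leaderboards below score these tasks as multiple-choice accuracy. As a minimal sketch of what that scoring looks like (the example item and the `pick_choice` heuristic are hypothetical placeholders, not any benchmark's real data or any listed paper's method):

```python
def pick_choice(question: str, choices: list[str]) -> int:
    # Hypothetical stand-in for a real model: pick the longest choice,
    # a deliberately weak surface-pattern baseline.
    return max(range(len(choices)), key=lambda i: len(choices[i]))

def accuracy(items: list[dict]) -> float:
    correct = sum(
        pick_choice(it["question"], it["choices"]) == it["answer"]
        for it in items
    )
    return correct / len(items)

items = [{
    # Winograd-style item: resolving "it" takes world knowledge
    # (trophies go inside suitcases), not surface patterns.
    "question": "The trophy didn't fit in the suitcase because it was "
                "too big. What was too big?",
    "choices": ["the trophy", "the suitcase"],
    "answer": 0,
}]
print(f"accuracy: {accuracy(items):.2f}")
```

The length heuristic gets this item wrong, which is the point: a model that keys on surface features rather than world knowledge scores poorly on these benchmarks.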

Papers

Showing 1–50 of 939 papers

Title | Status | Hype
Qwen2.5 Technical Report | Code | 13
LLaMA: Open and Efficient Foundation Language Models | Code | 7
Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Code | 6
Mistral 7B | Code | 6
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration | Code | 6
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | Code | 6
GPT-4 Technical Report | Code | 6
Training Compute-Optimal Large Language Models | Code | 6
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | Code | 6
Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning | Code | 4
WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation | Code | 4
Gated Delta Networks: Improving Mamba2 with Delta Rule | Code | 4
Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models | Code | 4
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Code | 4
Knowledge Fusion of Large Language Models | Code | 4
Mixtral of Experts | Code | 4
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | Code | 4
Galactica: A Large Language Model for Science | Code | 4
N-Grammer: Augmenting Transformers with latent n-grams | Code | 4
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models | Code | 4
AlphaDrive: Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning | Code | 3
CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos | Code | 3
MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | Code | 3
Common Sense Reasoning for Deepfake Detection | Code | 3
Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia | Code | 3
Reasoning with Language Model Prompting: A Survey | Code | 3
ST-MoE: Designing Stable and Transferable Sparse Expert Models | Code | 3
Finetuned Language Models Are Zero-Shot Learners | Code | 3
Language Models are Few-Shot Learners | Code | 3
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
PrefixQuant: Eliminating Outliers by Prefixed Tokens for Large Language Models Quantization | Code | 2
RegMix: Data Mixture as Regression for Language Model Pre-training | Code | 2
Extended Mind Transformers | Code | 2
Easy Problems That LLMs Get Wrong | Code | 2
Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models | Code | 2
OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models | Code | 2
Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Code | 2
Holodeck: Language Guided Generation of 3D Embodied AI Environments | Code | 2
On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving | Code | 2
LLM-FP4: 4-Bit Floating-Point Quantized Transformers | Code | 2
DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models | Code | 2
PointLLM: Empowering Large Language Models to Understand Point Clouds | Code | 2
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2
Drive Like a Human: Rethinking Autonomous Driving with Large Language Models | Code | 2
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | Code | 2
Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory | Code | 2
LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models | Code | 2
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | Code | 2
Causal Reasoning and Large Language Models: Opening a New Frontier for Causality | Code | 2
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | n/a | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | n/a | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | n/a | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | n/a | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | n/a | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | n/a | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | n/a | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | n/a | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | n/a | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | n/a | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | n/a | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | n/a | Unverified
4 | StupidLLM | Accuracy | 91.03 | n/a | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | n/a | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | n/a | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | n/a | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | n/a | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | n/a | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | n/a | Unverified
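
Several PaLM 540B rows above use self-consistency (SC): sample multiple chain-of-thought completions at nonzero temperature and keep the majority final answer. A minimal sketch of the voting step, where `sample_cot_answer` is a hypothetical stand-in for one stochastic model call:

```python
from collections import Counter
import random

def sample_cot_answer(question: str) -> str:
    # Hypothetical stand-in for one sampled chain-of-thought run;
    # a biased coin keeps the sketch executable.
    return random.choices(["(A)", "(B)"], weights=[0.7, 0.3])[0]

def self_consistent_answer(question: str, n_samples: int = 9) -> str:
    # Majority vote over the final answers of several sampled chains.
    votes = Counter(sample_cot_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("Which option follows from common sense?"))
```
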
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | n/a | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | n/a | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | n/a | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | n/a | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | n/a | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | n/a | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | n/a | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | n/a | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | n/a | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | n/a | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | n/a | Unverified
3 | T5-11B | F1 | 94.1 | n/a | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | n/a | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | n/a | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | n/a | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | n/a | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | n/a | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | n/a | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | n/a | Unverified
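
The last table mixes EM (exact match) and F1, the usual extractive-QA metrics: EM checks for an exact string match after normalization, while F1 measures token overlap between the prediction and the gold answer. Exact normalization rules vary by benchmark (many also strip articles and punctuation), so the light version below is an assumption:

```python
from collections import Counter

def normalize(s: str) -> str:
    # Assumed light normalization: lowercase and collapse whitespace.
    return " ".join(s.lower().split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    # Token-overlap F1: harmonic mean of precision and recall over tokens.
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "The Eiffel Tower"))        # 1.0
print(round(f1("Eiffel Tower in Paris", "the Eiffel Tower"), 2))  # 0.57
```

F1 gives partial credit for near-miss spans, which is why some entries above report it instead of the stricter EM.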