SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are intended to require models to go beyond pattern recognition: instead of exploiting surface regularities in the text, the model should draw on "common sense" or world knowledge to make inferences.
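As an illustration, a Winograd-style item forces a model to resolve a pronoun using world knowledge rather than surface cues. The item and baseline below are hypothetical, written only to make the distinction concrete; they are not drawn from any dataset listed on this page:

```python
# A hypothetical Winograd-style commonsense item (illustrative only):
item = {
    "premise": "The trophy didn't fit in the suitcase because it was too big.",
    "question": "What was too big?",
    "choices": ["the trophy", "the suitcase"],
    "answer": 0,  # resolving "it" requires knowing how containers work
}

def evaluate(predict, items):
    """Accuracy of a predictor that maps an item to a choice index."""
    return sum(predict(it) == it["answer"] for it in items) / len(items)

# A pattern-only baseline that always picks the noun nearest the pronoun
# fails here: "suitcase" sits nearer to "it" than "trophy" does.
nearest_noun_baseline = lambda it: 1
print(evaluate(nearest_noun_baseline, [item]))  # 0.0
```

The point of such items is that the lexical context around the pronoun is symmetric between the two choices, so only world knowledge disambiguates.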

Papers

Showing 301–350 of 939 papers

Title | Status | Hype
Addressing Image Hallucination in Text-to-Image Generation through Factual Image Retrieval | | 0
AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies | | 0
Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs | | 0
Improving Sample Efficiency of Reinforcement Learning with Background Knowledge from Large Language Models | Code | 0
Automatic Adaptation Rule Optimization via Large Language Models | | 0
Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models | | 0
Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving | | 0
Large Language Models are Zero-Shot Recognizers for Activities of Daily Living | | 0
Human-Object Interaction from Human-Level Instructions | | 0
MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | | 0
Human-AI collectives produce the most accurate differential diagnoses | Code | 0
P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models | | 0
Mixture-of-Subspaces in Low-Rank Adaptation | Code | 0
A Survey of Video Datasets for Grounded Event Understanding | Code | 0
LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions | | 0
BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense | Code | 0
Think out Loud: Emotion Deducing Explanation in Dialogues | | 0
RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | | 0
Generative AI-in-the-loop: Integrating LLMs and GPTs into the Next Generation Networks | | 0
mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation Strategy by Language Models and Humans | | 0
Every Answer Matters: Evaluating Commonsense with Probabilistic Measures | Code | 0
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models | | 0
Do Language Models Understand Morality? Towards a Robust Detection of Moral Content | Code | 0
ACCORD: Closing the Commonsense Measurability Gap | Code | 0
RAG-based Crowdsourcing Task Decomposition via Masked Contrastive Learning with Prompts | | 0
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems | | 0
Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search | | 0
Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge | | 0
iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers | Code | 0
Regressor-free Molecule Generation to Support Drug Response Prediction | | 0
Large Language Models are Effective Priors for Causal Graph Discovery | | 0
FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering | | 0
DaVinci at SemEval-2024 Task 9: Few-shot prompting GPT-3.5 for Unconventional Reasoning | | 0
Meta-Control: Automatic Model-based Control Synthesis for Heterogeneous Robot Skills | | 0
Soft Label PU Learning | | 0
The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights | | 0
FoundaBench: Evaluating Chinese Fundamental Knowledge Capabilities of Large Language Models | | 0
Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G | | 0
Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning | | 0
SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense | | 0
Concept Induction using LLMs: a user experiment for assessment | | 0
CorrespondentDream: Enhancing 3D Fidelity of Text-to-3D using Cross-View Correspondences | | 0
Deep Reinforcement Learning-Based Approach for a Single Vehicle Persistent Surveillance Problem with Fuel Constraints | | 0
DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models | | 0
Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph | Code | 0
Stereotype Detection in LLMs: A Multiclass, Explainable, and Benchmark-Driven Approach | | 0
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs | | 0
AILS-NTUA at SemEval-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles | Code | 0
ITCMA: A Generative Agent Based on a Computational Consciousness Structure | | 0
LC-LLM: Explainable Lane-Change Intention and Trajectory Predictions with Large Language Models | | 0
Page 7 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified
4 | StupidLLM | Accuracy | 91.03 | | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified
3 | T5-11B | F1 | 94.1 | | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified
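The tables above report three metrics, Accuracy, EM (exact match), and F1, without defining them. A minimal sketch of how these are conventionally computed for QA-style benchmarks follows; the normalization here is simplified (standard evaluators also strip punctuation and articles), and this is not this site's actual verification code:

```python
from collections import Counter

def normalize(s: str) -> str:
    # Lowercase and collapse whitespace; real evaluators normalize more aggressively.
    return " ".join(s.lower().split())

def accuracy(preds, golds):
    # Fraction of examples where the predicted label equals the gold label.
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def exact_match(pred: str, gold: str) -> float:
    # 1.0 iff the normalized prediction string equals the normalized gold string.
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    # SQuAD-style per-example F1 over the multiset of overlapping tokens.
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# Toy illustration with made-up predictions (not from any leaderboard run):
print(accuracy(["B", "A", "C"], ["B", "A", "D"]))        # 2 of 3 correct
print(exact_match("The Cat", "the cat"))                  # 1.0
print(round(token_f1("the black cat", "the cat"), 3))     # 0.8
```

Multiple-choice benchmarks use accuracy; extractive-answer benchmarks report EM and token F1 averaged over examples, which is why the fourth table mixes the two metrics.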