SOTAVerified

Common Sense Reasoning

Common sense reasoning tasks are designed to require models to go beyond pattern recognition: rather than exploiting surface statistics, the model should draw on "common sense" or world knowledge to make inferences.
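To make this concrete, here is a minimal, hypothetical sketch of how a common sense benchmark item is typically scored. The example question (a Winograd-style pronoun-resolution item) and the model prediction are illustrative placeholders, not drawn from this page; most leaderboards below report exactly this kind of multiple-choice accuracy.

```python
def accuracy(predictions, answers):
    """Fraction of items where the predicted choice index matches the gold answer."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# A typical common sense item: resolving an ambiguous pronoun requires
# world knowledge (big things don't fit inside small things), not just
# surface pattern matching.
item = {
    "question": "The trophy didn't fit in the suitcase because it was too big. "
                "What was too big?",
    "choices": ["the trophy", "the suitcase"],
    "answer": 0,
}

preds = [0]  # hypothetical model output: index of the chosen answer
print(accuracy(preds, [item["answer"]]))  # 1.0
```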

Papers

Showing 851–900 of 939 papers

Title | Status | Hype
Editing Common Sense in Transformers | Code | 0
DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Code | 0
Being Right for Whose Right Reasons? | Code | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
Incorporating Chinese Characters of Words for Lexical Sememe Prediction | Code | 0
AmbiK: Dataset of Ambiguous Tasks in Kitchen Environment | Code | 0
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0
Don't Fight Hallucinations, Use Them: Estimating Image Realism using NLI over Atomic Facts | Code | 0
AILS-NTUA at SemEval-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles | Code | 0
PredictaBoard: Benchmarking LLM Score Predictability | Code | 0
Titans: Learning to Memorize at Test Time | Code | 0
An Evaluation of PredPatt and Open IE via Stage 1 Semantic Role Labeling | Code | 0
Improving Sample Efficiency of Reinforcement Learning with Background Knowledge from Large Language Models | Code | 0
Do Machine Learning Models Learn Statistical Rules Inferred from Data? | Code | 0
Self-Refined Large Language Model as Automated Reward Function Designer for Deep Reinforcement Learning in Robotics | Code | 0
Do Language Models Understand Morality? Towards a Robust Detection of Moral Content | Code | 0
Improving Neural Story Generation by Targeted Common Sense Grounding | Code | 0
Telling Stories for Common Sense Zero-Shot Action Recognition | Code | 0
A Group-Specific Approach to NLP for Hate Speech Detection | Code | 0
Prime the search: Using large language models for guiding geometric task and motion planning by warm-starting tree search | Code | 0
A Neural Conversational Model | Code | 0
Temporal Relational Reasoning in Videos | Code | 0
MEMEX: Detecting Explanatory Evidence for Memes via Knowledge-Enriched Contextualization | Code | 0
Improved Word Representation Learning with Sememes | Code | 0
Identifying relevant common sense information in knowledge graphs | Code | 0
Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes | Code | 0
BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense | Code | 0
Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving | Code | 0
WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0
ACCORD: Closing the Commonsense Measurability Gap | Code | 0
A Survey of Video Datasets for Grounded Event Understanding | Code | 0
Human-AI collectives produce the most accurate differential diagnoses | Code | 0
Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates | Code | 0
DKN: Deep Knowledge-Aware Network for News Recommendation | Code | 0
HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales | Code | 0
Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation | Code | 0
Mixture-of-Subspaces in Low-Rank Adaptation | Code | 0
Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering | Code | 0
That and There: Judging the Intent of Pointing Actions with Robotic Arms | Code | 0
DiffG-RL: Leveraging Difference between State and Common Sense | Code | 0
Detecting Persuasive Atypicality by Modeling Contextual Compatibility | Code | 0
Modeling Event Plausibility with Consistent Conceptual Abstraction | Code | 0
QASC: A Dataset for Question Answering via Sentence Composition | Code | 0
CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense | Code | 0
SimpleMind adds thinking to deep neural networks | Code | 0
Modeling User Exposure in Recommendation | Code | 0
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity | Code | 0
QUENCH: Measuring the gap between Indic and Non-Indic Contextual General Reasoning in LLMs | Code | 0
Deliberative and Conceptual Inference in Service Robots | Code | 0
Morph Call: Probing Morphosyntactic Content of Multilingual Transformers | Code | 0
Page 18 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 96.1 | | Unverified
2 | Unicorn 11B (fine-tuned) | Accuracy | 91.3 | | Unverified
3 | CompassMTL 567M with Tailor | Accuracy | 90.5 | | Unverified
4 | CompassMTL 567M | Accuracy | 89.6 | | Unverified
5 | UnifiedQA 11B (fine-tuned) | Accuracy | 89.4 | | Unverified
6 | Claude 3 Opus (5-shot) | Accuracy | 88.5 | | Unverified
7 | GPT-4 (5-shot) | Accuracy | 87.5 | | Unverified
8 | ExDeBERTa 567M | Accuracy | 87 | | Unverified
9 | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified
10 | LLaMA3 8B+MoSLoRA | Accuracy | 85.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (few-shot, k=25) | Accuracy | 96.4 | | Unverified
2 | PaLM 2 (few-shot, CoT, SC) | Accuracy | 95.1 | | Unverified
3 | Shivaay (4B, few-shot, k=8) | Accuracy | 91.04 | | Unverified
4 | StupidLLM | Accuracy | 91.03 | | Unverified
5 | Claude 2 (few-shot, k=5) | Accuracy | 91 | | Unverified
6 | Claude 1.3 (few-shot, k=5) | Accuracy | 90 | | Unverified
7 | PaLM 540B (Self Improvement, Self Consistency) | Accuracy | 89.8 | | Unverified
8 | PaLM 540B (Self Consistency) | Accuracy | 88.7 | | Unverified
9 | PaLM 540B (Self Improvement, CoT Prompting) | Accuracy | 88.3 | | Unverified
10 | PaLM 540B (Self Improvement, Standard-Prompting) | Accuracy | 87.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 95.2 | | Unverified
2 | LLaMA 3 8B+MoSLoRA (fine-tuned) | Accuracy | 90.5 | | Unverified
3 | PaLM 2-L (1-shot) | Accuracy | 89.7 | | Unverified
4 | PaLM 2-M (1-shot) | Accuracy | 88 | | Unverified
5 | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified
6 | Camelidae-8×34B | Accuracy | 86.2 | | Unverified
7 | PaLM 2-S (1-shot) | Accuracy | 85.6 | | Unverified
8 | LLaMA 65B + CFG (0-shot) | Accuracy | 84.2 | | Unverified
9 | GAL 120B (0-shot) | Accuracy | 83.8 | | Unverified
10 | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Turing NLR v5 XXL 5.4B (fine-tuned) | EM | 95.9 | | Unverified
2 | ST-MoE-32B 269B (fine-tuned) | EM | 95.1 | | Unverified
3 | T5-11B | F1 | 94.1 | | Unverified
4 | DeBERTa-1.5B | EM | 94.1 | | Unverified
5 | PaLM 540B (fine-tuned) | EM | 94 | | Unverified
6 | Vega v2 6B (fine-tuned) | EM | 93.9 | | Unverified
7 | PaLM 2-L (one-shot) | F1 | 93.8 | | Unverified
8 | T5-XXL 11B (fine-tuned) | EM | 93.4 | | Unverified
9 | PaLM 2-M (one-shot) | F1 | 92.4 | | Unverified
10 | PaLM 2-S (one-shot) | F1 | 92.1 | | Unverified
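The last table mixes exact match (EM) and token-level F1, which score differently: EM gives credit only for an exact string match, while F1 gives partial credit for token overlap between prediction and reference. Below is a minimal sketch of both metrics following common QA-evaluation practice; the lowercasing/whitespace normalization here is a simplifying assumption, and real evaluation scripts typically also strip punctuation and articles.

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    """1.0 only if prediction and reference match exactly (after simple normalization)."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall against the reference."""
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())  # shared tokens, with multiplicity
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the suitcase", "The suitcase"))           # 1.0
print(round(token_f1("a big suitcase", "the suitcase"), 2))  # 0.4: partial credit
```

This is why F1 numbers for a model typically sit at or above its EM numbers on the same benchmark: every exact match also scores 1.0 under F1.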