SOTAVerified

Logical Reasoning

Papers

Showing 251–300 of 747 papers

| Title | Status | Hype |
| --- | --- | --- |
| Story3D-Agent: Exploring 3D Storytelling Visualization with Large Language Models | | 0 |
| SarcasmBench: Towards Evaluating Large Language Models on Sarcasm Understanding | | 0 |
| CHECKWHY: Causal Fact Verification via Argument Structure | Code | 1 |
| A Mechanistic Interpretation of Syllogistic Reasoning in Auto-Regressive Language Models | | 0 |
| Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | | 0 |
| LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image | | 0 |
| Can Large Language Models Reason? A Characterization via 3-SAT | | 0 |
| P3: A Policy-Driven, Pace-Adaptive, and Diversity-Promoted Framework for data pruning in LLM Training | | 0 |
| Exploring Reasoning Biases in Large Language Models Through Syllogism: Insights from the NeuBAROCO Dataset | Code | 0 |
| Automated Theorem Provers Help Improve Large Language Model Reasoning | | 0 |
| Lifelong Personalized Low-Rank Adaptation of Large Language Models for Recommendation | | 0 |
| Leveraging Large Language Models with Chain-of-Thought and Prompt Engineering for Traffic Crash Severity Analysis and Inference | | 0 |
| Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation | | 0 |
| CLR-Fact: Evaluating the Complex Logical Reasoning Capability of Large Language Models over Factual Knowledge | | 0 |
| Take A Step Back: Rethinking the Two Stages in Visual Reasoning | | 0 |
| Logic Distillation: Learning from Code Function by Function for Planning and Decision-making | | 0 |
| An Empirical Study of Retrieval Augmented Generation with Chain-of-Thought | | 0 |
| Step-by-Step Reasoning to Solve Grid Puzzles: Where do LLMs Falter? | Code | 0 |
| An Explainable Fast Deep Neural Network for Emotion Recognition | | 0 |
| NeedleBench: Can LLMs Do Retrieval and Reasoning in Information-Dense Context? | Code | 9 |
| Leveraging large language models for nano synthesis mechanism explanation: solid foundations or mere conjectures? | Code | 0 |
| Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding | Code | 1 |
| Analyzing Large language models chatbots: An experimental approach using a probability test | | 0 |
| Why should we ever automate moral decision making? | | 0 |
| R^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning | Code | 1 |
| ElecBench: a Power Dispatch Evaluation Benchmark for Large Language Models | Code | 1 |
| LogicVista: Multimodal LLM Logical Reasoning Benchmark in Visual Contexts | Code | 1 |
| Are Large Language Models Strategic Decision Makers? A Study of Performance and Bias in Two-Player Non-Zero-Sum Games | | 0 |
| Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring | | 0 |
| PUZZLES: A Benchmark for Neural Algorithmic Reasoning | Code | 1 |
| Scaling Synthetic Data Creation with 1,000,000,000 Personas | Code | 11 |
| FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | | 0 |
| Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogism | | 0 |
| LLM-ARC: Enhancing LLMs with an Automated Reasoning Critic | | 0 |
| Multi-LogiEval: Towards Evaluating Multi-Step Logical Reasoning Ability of Large Language Models | Code | 0 |
| Large Language Models Are Cross-Lingual Knowledge-Free Reasoners | Code | 0 |
| Imperative Learning: A Self-supervised Neuro-Symbolic Learning Framework for Robot Autonomy | | 0 |
| Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference | | 0 |
| Pathformer: Recursive Path Query Encoding for Complex Logical Query Answering | | 0 |
| The neural correlates of logical-mathematical symbol systems processing resemble that of spatial cognition more than natural language processing | | 0 |
| Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models | Code | 0 |
| VideoVista: A Versatile Benchmark for Video Understanding and Reasoning | Code | 1 |
| Program Synthesis Benchmark for Visual Programming in XLogoOnline Environment | | 0 |
| Scaling Synthetic Logical Reasoning Datasets with Context-Sensitive Declarative Grammars | Code | 0 |
| City-LEO: Toward Transparent City Management Using LLM with End-to-End Optimization | | 0 |
| Ontology Embedding: A Survey of Methods, Applications and Resources | Code | 2 |
| A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners | Code | 1 |
| Evaluating ChatGPT-4 Vision on Brazil's National Undergraduate Computer Science Exam | Code | 0 |
| Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs | Code | 2 |
| Large Language Models are Limited in Out-of-Context Knowledge Reasoning | Code | 0 |
Page 6 of 15

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Claude Opus | Delta_NoContext | 28.8 | | Unverified |
| 2 | GPT-4o | Delta_NoContext | 25.1 | | Unverified |
| 3 | Gemini 1.5 Pro | Delta_NoContext | 23.4 | | Unverified |
| 4 | GPT-4 | Delta_NoContext | 21.5 | | Unverified |
| 5 | Command R+ | Delta_NoContext | 11.6 | | Unverified |
| 6 | GPT-3.5 | Delta_NoContext | 11.2 | | Unverified |
| 7 | Mixtral 8x7B | Delta_NoContext | 6.4 | | Unverified |
| 8 | Llama 3 8B | Delta_NoContext | 4.9 | | Unverified |
| 9 | Llama 3 70B | Delta_NoContext | 2.9 | | Unverified |
| 10 | Gemma 7B | Delta_NoContext | 2.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 64.8 | | Unverified |
| 2 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 57.2 | | Unverified |
| 3 | OPT 66B (few-shot, k=3) | Accuracy | 54 | | Unverified |
| 4 | PaLM 540B (few-shot, k=3) | Accuracy | 53.6 | | Unverified |
| 5 | GPT-NeoX 20B (few-shot, k=3) | Accuracy | 52.8 | | Unverified |
| 6 | BLOOM 176B (few-shot, k=3) | Accuracy | 52.8 | | Unverified |
| 7 | Chinchilla-70B (few-shot, k=5) | Accuracy | 52.1 | | Unverified |
| 8 | Bloomberg GPT 50B (few-shot, k=3) | Accuracy | 50.8 | | Unverified |
| 9 | Gopher-280B (few-shot, k=5) | Accuracy | 50.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 84.9 | | Unverified |
| 2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 65.8 | | Unverified |
| 3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 48.7 | | Unverified |
| 4 | PaLM 540B (few-shot, k=3) | Accuracy | 44.5 | | Unverified |
| 5 | Gopher-280B (few-shot, k=5) | Accuracy | 40.6 | | Unverified |
| 6 | BLOOM 176B (few-shot, k=3) | Accuracy | 40.41 | | Unverified |
| 7 | Bloomberg GPT (few-shot, k=3) | Accuracy | 37.67 | | Unverified |
| 8 | GPT-NeoX (few-shot, k=3) | Accuracy | 33.56 | | Unverified |
| 9 | OPT 66B (few-shot, k=3) | Accuracy | 28.08 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 91.2 | | Unverified |
| 2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 61.2 | | Unverified |
| 3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 59.7 | | Unverified |
| 4 | Gopher-280B (few-shot, k=5) | Accuracy | 49.2 | | Unverified |
| 5 | PaLM 540B (few-shot, k=3) | Accuracy | 38 | | Unverified |
| 6 | BLOOM 176B (few-shot, k=3) | Accuracy | 36.8 | | Unverified |
| 7 | Bloomberg GPT (few-shot, k=3) | Accuracy | 34.8 | | Unverified |
| 8 | OPT 66B (few-shot, k=3) | Accuracy | 31.2 | | Unverified |
| 9 | GPT-NeoX (few-shot, k=3) | Accuracy | 26 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 100 | | Unverified |
| 2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 96.4 | | Unverified |
| 3 | PaLM 540B (few-shot, k=3) | Accuracy | 39.6 | | Unverified |
| 4 | BLOOM 176B (few-shot, k=3) | Accuracy | 36.8 | | Unverified |
| 5 | Chinchilla-70B (few-shot, k=5) | Accuracy | 32 | | Unverified |
| 6 | Bloomberg GPT (few-shot, k=3) | Accuracy | 29.2 | | Unverified |
| 7 | OPT 66B (few-shot, k=3) | Accuracy | 23.6 | | Unverified |
| 8 | GPT-NeoX (few-shot, k=3) | Accuracy | 21.2 | | Unverified |
| 9 | Gopher-280B (few-shot, k=5) | Accuracy | 19 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 44 | | Unverified |
| 2 | PaLM-540B (few-shot, k=5) | Accuracy | 42.4 | | Unverified |
| 3 | PaLM-62B (few-shot, k=5) | Accuracy | 36.5 | | Unverified |
| 4 | Gopher-280B (few-shot, k=5) | Accuracy | 35.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM-540B (few-shot, k=5) | Accuracy | 73.9 | | Unverified |
| 2 | Chinchilla-70B (few-shot, k=5) | Accuracy | 68.3 | | Unverified |
| 3 | PaLM-62B (few-shot, k=5) | Accuracy | 65.4 | | Unverified |
| 4 | Gopher-280B (few-shot, k=5) | Accuracy | 61 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Human benchmark | Accuracy | 83.7 | | Unverified |
| 2 | RuGPT-3 Large | Accuracy | 40.7 | | Unverified |
| 3 | RuGPT-3 Medium | Accuracy | 38 | | Unverified |
| 4 | RuGPT-3 Small | Accuracy | 34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Human benchmark | Accuracy | 87 | | Unverified |
| 2 | RuGPT-3 Small | Accuracy | 57.9 | | Unverified |
| 3 | RuGPT-3 Medium | Accuracy | 57.2 | | Unverified |
| 4 | RuGPT-3 Large | Accuracy | 55.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 72.1 | | Unverified |
| 2 | Gopher-280B (few-shot, k=5) | Accuracy | 58.9 | | Unverified |