SOTAVerified

Logical Fallacies

Papers

Showing 1–25 of 34 papers

Title | Status | Hype
OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems | Code | 2
Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2
Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments | Code | 1
Logical Fallacy Detection | Code | 1
Leveraging Context for Multimodal Fallacy Classification in Political Debates | Code | 0
Are Large Language Models Good at Detecting Propaganda? | - | 0
SLURG: Investigating the Feasibility of Generating Synthetic Online Fallacious Discourse | - | 0
Socrates or Smartypants: Testing Logic Reasoning Capabilities of Large Language Models with Logic Programming-based Test Oracles | Code | 0
Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation | Code | 0
RuozhiBench: Evaluating LLMs with Logical Fallacies and Misleading Premises | Code | 0
A Survey on Automatic Credibility Assessment of Textual Credibility Signals in the Era of Large Language Models | - | 0
Boosting Logical Fallacy Reasoning in LLMs via Logical Structure Tree | Code | 0
ConceptAgent: LLM-Driven Precondition Grounding and Tree Search for Robust Task Planning and Execution | - | 0
CoCoLoFa: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds | - | 0
Grounding Fallacies Misrepresenting Scientific Publications in Evidence | Code | 0
A Logical Fallacy-Informed Framework for Argument Generation | Code | 0
Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling | - | 0
Can a Hallucinating Model help in Reducing Human "Hallucination"? | - | 0
Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection | - | 0
Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research | - | 0
Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding | - | 0
A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning | Code | 0
Human Conditional Reasoning in Answer Set Programming | - | 0
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models | - | 0
How susceptible are LLMs to Logical Fallacies? | Code | 0
Page 1 of 2

No leaderboard results yet.