SOTAVerified

Hallucination Papers

Showing 1051–1100 of 1816 papers

Title | Status | Hype
Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean | Code | 5
Is There No Such Thing as a Bad Question? H4R: HalluciBot For Ratiocination, Rewriting, Ranking, and Routing | - | 0
Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation | - | 0
MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory | Code | 1
AI-Enhanced Cognitive Behavioral Therapy: Deep Learning and Large Language Models for Extracting Cognitive Pathways from Social Media Texts | Code | 0
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models | Code | 1
Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales | - | 0
Fewer Truncations Improve Language Modeling | - | 0
A computational account of the development and evolution of psychotic symptoms | - | 0
Prescribing the Right Remedy: Mitigating Hallucinations in Large Vision-Language Models via Targeted Instruction Tuning | - | 0
Reasoning on Efficient Knowledge Paths: Knowledge Graph Guides Large Language Model for Domain Question Answering | - | 0
Anatomy of Industrial Scale Multilingual ASR | - | 0
Constructing Benchmarks and Interventions for Combating Hallucinations in LLMs | Code | 1
Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | Code | 1
Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information | Code | 0
Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration | Code | 1
Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models | - | 0
Distilling Reasoning Ability from Large Language Models with Adaptive Thinking | - | 0
CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting | Code | 1
Reducing hallucination in structured outputs via Retrieval-Augmented Generation | - | 0
View Selection for 3D Captioning via Diffusion Ranking | Code | 3
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
An Audit on the Perspectives and Challenges of Hallucinations in NLP | - | 0
MetaCheckGPT -- A Multi-task Hallucination Detector Using LLM Uncertainty and Meta-models | - | 0
BRAVE: Broadening the visual encoding of vision-language models | - | 0
Tackling Structural Hallucination in Image Translation with Local Diffusion | Code | 1
Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports | Code | 0
SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection | Code | 0
Automating Research Synthesis with Domain-Specific Large Language Model Fine-Tuning | - | 0
Hyperbolic Learning with Synthetic Captions for Open-World Detection | - | 0
FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback | - | 0
HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models | - | 0
SLPL SHROOM at SemEval2024 Task 06: A comprehensive study on models ability to detect hallucination | Code | 0
On the Limitations of Large Language Models (LLMs): False Attribution | - | 0
PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics | Code | 0
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping | - | 0
Mitigating LLM Hallucinations via Conformal Abstention | - | 0
SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection | Code | 0
Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations | Code | 0
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation | - | 0
KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking | Code | 2
Scalable Model Editing via Customized Expert Networks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models | - | 0
Comparative Study of Domain Driven Terms Extraction Using Large Language Models | - | 0
Extracting Norms from Contracts Via ChatGPT: Opportunities and Challenges | - | 0
Hallucination Diversity-Aware Active Learning for Text Summarization | - | 0
AILS-NTUA at SemEval-2024 Task 6: Efficient model tuning for hallucination detection and analysis | Code | 0
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation | - | 0
Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning | Code | 0
On Large Language Models' Hallucination with Regard to Known Facts | Code | 0
Page 22 of 37

No leaderboard results yet.