SOTAVerified

Hallucination Papers

Showing 1726–1750 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Assessing the Reliability of Large Language Model Knowledge | Code | 0 |
| Are Large Language Models Good at Utility Judgments? | Code | 0 |
| Enhancing Hallucination Detection through Perturbation-Based Synthetic Data Generation in System Responses | Code | 0 |
| Pushing the Limits of Low-Resource Morphological Inflection | Code | 0 |
| Embedding Hallucination for Few-Shot Language Fine-tuning | Code | 0 |
| A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation | Code | 0 |
| AIGCs Confuse AI Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models | Code | 0 |
| Verbosity ≠ Veracity: Demystify Verbosity Compensation Behavior of Large Language Models | Code | 0 |
| CiteBART: Learning to Generate Citations for Local Citation Recommendation | Code | 0 |
| Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports | Code | 0 |
| Appraising the Potential Uses and Harms of LLMs for Medical Systematic Reviews | Code | 0 |
| Elevating Legal LLM Responses: Harnessing Trainable Logical Structures and Semantic Knowledge with Legal Reasoning | Code | 0 |
| Anticipation-Free Training for Simultaneous Machine Translation | Code | 0 |
| Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information | Code | 0 |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Code | 0 |
| Handwritten Code Recognition for Pen-and-Paper CS Education | Code | 0 |
| Handling Ontology Gaps in Semantic Parsing | Code | 0 |
| Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning | Code | 0 |
| Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization | Code | 0 |
| Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0 |
| Effectively Enhancing Vision Language Large Models by Prompt Augmentation and Caption Utilization | Code | 0 |
| HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild | Code | 0 |
| TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation | Code | 0 |
| ELOQ: Resources for Enhancing LLM Detection of Out-of-Scope Questions | Code | 0 |
| Visually Dehallucinative Instruction Generation | Code | 0 |
Page 70 of 73

No leaderboard results yet.