| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | Feb 6, 2025 | Hallucination, TruthfulQA | Unverified | 0 |
| Large Language Models for Multi-Robot Systems: A Survey | Feb 6, 2025 | Action Generation, Benchmarking | Code Available | 1 |
| Enhancing Hallucination Detection through Noise Injection | Feb 6, 2025 | Hallucination | Unverified | 0 |
| The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering | Feb 5, 2025 | Hallucination | Code Available | 2 |
| A Schema-Guided Reason-while-Retrieve Framework for Reasoning on Scene Graphs with Large Language Models (LLMs) | Feb 5, 2025 | Hallucination, Spatial Reasoning | Unverified | 0 |
| DAMO: Data- and Model-aware Alignment of Multi-modal LLMs | Feb 4, 2025 | Hallucination | Code Available | 1 |
| Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | Feb 4, 2025 | Attribute, Hallucination | Unverified | 0 |
| Eliciting Language Model Behaviors with Investigator Agents | Feb 3, 2025 | Bayesian Inference, Hallucination | Unverified | 0 |
| SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models | Feb 3, 2025 | Hallucination | Unverified | 0 |
| MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation | Feb 3, 2025 | Benchmarking, Fairness | Unverified | 0 |