| Title | Date | Tasks | Code |
| --- | --- | --- | --- |
| Learning with privileged information via adversarial discriminative modality distillation | Oct 19, 2018 | Action Recognition, Hallucination | Code Available |
| Confidence Estimation for LLM-Based Dialogue State Tracking | Sep 15, 2024 | Dialogue State Tracking, Hallucination | Code Available |
| Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Nov 13, 2024 | Adversarial Robustness, Denoising | Code Available |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Sep 17, 2024 | Hallucination, Instruction Following | Code Available |
| Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Mar 18, 2025 | Hallucination | Code Available |
| Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Apr 11, 2024 | Descriptive, Hallucination | Code Available |
| Large Language Models on Wikipedia-Style Survey Generation: an Evaluation in NLP Concepts | Aug 21, 2023 | Articles, Hallucination | Code Available |
| Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models | Feb 8, 2025 | Conformal Prediction, Decision Making | Code Available |
| Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Jul 1, 2024 | Hallucination, Language Modeling | Code Available |
| Language Models Hallucinate, but May Excel at Fact Verification | Oct 23, 2023 | Fact Verification, Hallucination | Code Available |