SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with exact match (EM) and F1 metrics. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
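The EM and F1 metrics mentioned above can be made concrete with a minimal sketch of SQuAD-style scoring: answers are normalized (lowercased, punctuation and the articles a/an/the removed), EM checks for an exact normalized string match, and F1 is computed over the overlapping tokens. This is an illustrative reimplementation, not the official SQuAD evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style answer normalization: lowercase, strip punctuation,
    drop articles (a/an/the), and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice, benchmarks such as SQuAD take the maximum score over all gold reference answers for each question, then average over the dataset.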

Papers

Showing 851–900 of 10817 papers

Title | Status | Hype
Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images | | 0
MQADet: A Plug-and-Play Paradigm for Enhancing Open-Vocabulary Object Detection via Multimodal Question Answering | | 0
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | | 0
Echo: A Large Language Model with Temporal Episodic Memory | | 0
EPERM: An Evidence Path Enhanced Reasoning Model for Knowledge Graph Question and Answering | | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
MHQA: A Diverse, Knowledge Intensive Mental Health Question Answering Challenge for Language Models | | 0
TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba | | 0
Chats-Grid: An Iterative Retrieval Q&A Optimization Scheme Leveraging Large Model and Retrieval Enhancement Generation in smart grid | | 0
Empowering LLMs with Logical Reasoning: A Comprehensive Survey | | 0
Improving Consistency in Large Language Models through Chain of Guidance | Code | 0
KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse | Code | 1
Mind the Gap! Static and Interactive Evaluations of Large Audio Models | | 0
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? | | 0
Directional Gradient Projection for Robust Fine-Tuning of Foundation Models | | 0
Is Relevance Propagated from Retriever to Generator in RAG? | | 0
On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems | Code | 0
How to Get Your LLM to Generate Challenging Problems for Evaluation | Code | 1
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | Code | 2
Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework | Code | 0
Measuring Faithfulness of Chains of Thought by Unlearning Reasoning Steps | Code | 1
Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information | Code | 1
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop Question Answering | | 0
NLP-AKG: Few-Shot Construction of NLP Academic Knowledge Graph Based on LLM | | 0
Effects of Prompt Length on Domain-specific Tasks for Large Language Models | | 0
EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts | | 0
Triangulating LLM Progress through Benchmarks, Games, and Cognitive Tests | | 0
Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | | 0
Argument-Based Comparative Question Answering Evaluation Benchmark | | 0
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | Code | 0
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | | 0
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation | | 0
Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning | | 0
PitVQA++: Vector Matrix-Low-Rank Adaptation for Open-Ended Visual Question Answering in Pituitary Surgery | Code | 0
Navigating Semantic Relations: Challenges for Language Models in Abstract Common-Sense Reasoning | | 0
Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above | | 0
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering | Code | 0
PRIV-QA: Privacy-Preserving Question Answering for Cloud Large Language Models | Code | 0
MuDAF: Long-Context Multi-Document Attention Focusing through Contrastive Learning on Attention Heads | Code | 0
Quantifying Memorization and Retriever Performance in Retrieval-Augmented Vision-Language Models | | 0
RGAR: Recurrence Generation-augmented Retrieval for Factual-aware Medical Question Answering | | 0
MCTS-KBQA: Monte Carlo Tree Search for Knowledge Base Question Answering | | 0
PeerQA: A Scientific Question Answering Dataset from Peer Reviews | Code | 1
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | | 0
TabSD: Large Free-Form Table Question Answering with SQL-Based Table Decomposition | | 0
DH-RAG: A Dynamic Historical Context-Powered Retrieval-Augmented Generation Method for Multi-Turn Dialogue | | 0
TrustRAG: An Information Assistant with Retrieval Augmented Generation | Code | 5
Multilingual European Language Models: Benchmarking Approaches and Challenges | | 0
Savaal: Scalable Concept-Driven Question Generation to Enhance Human Learning | | 0
Page 18 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified