SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
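For readers unfamiliar with these metrics, here is a minimal Python sketch of SQuAD-style scoring. The normalization mirrors the conventions of the official SQuAD evaluation script (lowercasing, stripping punctuation and articles, collapsing whitespace); the function names are illustrative, not part of any published API.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized prediction and gold answer are identical strings, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 over the bag of normalized tokens shared by prediction and gold."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))              # 1.0 (article "the" is stripped)
print(round(f1_score("Eiffel Tower in Paris", "Eiffel Tower"), 2))  # 0.67
```

In full benchmark evaluation, EM and F1 are averaged over all question-answer pairs, taking the maximum score across the gold answers for each question.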

Papers

Showing 2951–2975 of 10817 papers (page 119 of 433)

Title | Status | Hype
MHQA: A Diverse, Knowledge Intensive Mental Health Question Answering Challenge for Language Models | — | 0
Argument-Based Comparative Question Answering Evaluation Benchmark | — | 0
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | — | 0
Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework | Code | 0
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | Code | 0
On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems | Code | 0
NLP-AKG: Few-Shot Construction of NLP Academic Knowledge Graph Based on LLM | — | 0
Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | — | 0
Effects of Prompt Length on Domain-specific Tasks for Large Language Models | — | 0
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop Question Answering | — | 0
Is Relevance Propagated from Retriever to Generator in RAG? | — | 0
Triangulating LLM Progress through Benchmarks, Games, and Cognitive Tests | — | 0
EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts | — | 0
MuDAF: Long-Context Multi-Document Attention Focusing through Contrastive Learning on Attention Heads | Code | 0
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | — | 0
MCTS-KBQA: Monte Carlo Tree Search for Knowledge Base Question Answering | — | 0
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation | — | 0
PRIV-QA: Privacy-Preserving Question Answering for Cloud Large Language Models | Code | 0
DH-RAG: A Dynamic Historical Context-Powered Retrieval-Augmented Generation Method for Multi-Turn Dialogue | — | 0
Navigating Semantic Relations: Challenges for Language Models in Abstract Common-Sense Reasoning | — | 0
Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning | — | 0
Quantifying Memorization and Retriever Performance in Retrieval-Augmented Vision-Language Models | — | 0
RGAR: Recurrence Generation-augmented Retrieval for Factual-aware Medical Question Answering | — | 0
PitVQA++: Vector Matrix-Low-Rank Adaptation for Open-Ended Visual Question Answering in Pituitary Surgery | Code | 0
Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified