SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
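As a rough illustration of the EM and F1 metrics mentioned above, the sketch below follows SQuAD-style answer normalization (lowercasing, stripping punctuation and the articles a/an/the) before comparing predicted and gold answer spans; the function names are illustrative, not from any particular library.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between predicted and gold answer spans."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset overlap
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark leaderboards like the one below, these per-example scores are averaged over the dataset (SQuAD additionally takes the maximum over multiple gold answers per question).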

(Image credit: SQuAD)

Papers

Showing 1841–1850 of 10817 papers

Title | Code | Hype
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation | — | 0
GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation | — | 0
Boosting Healthcare LLMs Through Retrieved Context | Code | 1
Using Similarity to Evaluate Factual Consistency in Summaries | — | 0
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models | Code | 0
A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? | — | 0
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | — | 0
LINKAGE: Listwise Ranking among Varied-Quality References for Non-Factoid QA Evaluation via LLMs | — | 0
Scene-Text Grounding for Text-Based Video Question Answering | Code | 1
Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified