SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1; a sketch of both metrics follows below. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
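Both metrics are computed per question and averaged over the dataset: EM checks whether the normalized prediction string matches a gold answer exactly, while F1 measures token overlap between prediction and gold answer. Below is a minimal sketch following the normalization conventions used by SQuAD-style evaluation scripts (lowercasing, stripping punctuation and articles); the function names are illustrative, not from any particular library.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace
    (the normalization applied by SQuAD-style evaluation scripts)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0 (article removed)
print(round(f1_score("in Paris, France", "Paris"), 2))  # 0.5 (1 of 3 tokens match)
```

In practice, each prediction is scored against every gold answer for a question and the maximum is taken, since SQuAD-style datasets provide multiple reference answers.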

Papers

Showing 2151–2160 of 10817 papers (page 216 of 1082)

Title | Status | Hype
CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1
Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs | Code | 1
CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting | Code | 1
Consistency-preserving Visual Question Answering in Medical Imaging | Code | 1
Compositional Semantic Parsing on Semi-Structured Tables | Code | 1
D2S: Document-to-Slide Generation Via Query-Based Text Summarization | Code | 1
CypherBench: Towards Precise Retrieval over Full-scale Modern Knowledge Graphs in the LLM Era | Code | 1
CXR-LLAVA: a multimodal large language model for interpreting chest X-ray images | Code | 1
A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering | Code | 1
LongHealth: A Question Answering Benchmark with Long Clinical Documents | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified