SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated with exact match (EM) and token-level F1; some recent top-performing models are T5 and XLNet.
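The EM and F1 metrics mentioned above are usually computed over normalized answer strings. A minimal sketch of SQuAD-style scoring (lowercasing, stripping punctuation and articles, then comparing token bags) might look like this; the helper names are illustrative, not from any particular library:

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation,
    remove articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 over the multiset intersection of normalized tokens."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1.0 after normalization, while a longer prediction such as "the eiffel tower in paris" against the reference "eiffel tower" gets partial credit from F1 but zero EM. Benchmark scores are then averaged over all questions, taking the maximum over multiple gold answers where available.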

(Image credit: SQuAD)

Papers

Showing 4591–4600 of 10,817 papers

Title | Status | Hype
Integrating SPARQL and LLMs for Question Answering over Scholarly Data Sources | — | 0
Biomedical Evidence Generation Engine | — | 0
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | — | 0
Joint Semantics and Data-Driven Path Representation for Knowledge Graph Inference | — | 0
DataFrame QA: A Universal LLM Framework on DataFrame Question Answering Without Data Exposure | — | 0
Data-efficient Meta-models for Evaluation of Context-based Questions and Answers in LLMs | — | 0
Data-Efficient French Language Modeling with CamemBERTa | — | 0
Data-Efficient Autoregressive Document Retrieval for Fact Verification | — | 0
Automatic Dataset Generation for Knowledge Intensive Question Answering Tasks | — | 0
Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction | — | 0
Page 460 of 1,082

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified