SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated with exact match (EM) and F1 metrics. Recent top-performing models include T5 and XLNet.
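The EM and F1 metrics mentioned above are usually computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and reference. A minimal sketch of that convention (function names here are illustrative, not from any particular library):

```python
import re
import string
from collections import Counter


def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def f1_score(prediction, reference):
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` scores 1.0 after normalization, while a prediction with extra tokens still earns partial F1 credit. On multi-reference datasets, the score against the best-matching reference is usually taken.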

(Image credit: SQuAD)

Papers

Showing 25 of 10817 papers

Title | Status | Hype
TABi: Type-Aware Bi-Encoders for Open-Domain Entity Retrieval | Code | 1
ArcaneQA: Dynamic Program Induction and Contextualized Encoding for Knowledge Base Question Answering | Code | 1
Attention Mechanism based Cognition-level Scene Understanding | - | 0
WikiOmnia: generative QA corpus on the whole Russian Wikipedia | - | 0
Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes | - | 0
Semantic Structure based Query Graph Prediction for Question Answering over Knowledge Graph | - | 0
Characterizing the Efficiency vs. Accuracy Trade-off for Long-Context NLP Models | Code | 0
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | Code | 1
Improving Passage Retrieval with Zero-Shot Question Generation | Code | 1
Mixture of Experts for Biomedical Question Answering | - | 0
Towards Fine-grained Causal Reasoning and QA | Code | 1
Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning | - | 0
Exploring Dual Encoder Architectures for Question Answering | Code | 1
Measuring Compositional Consistency for Video Question Answering | - | 0
XLMRQA: Open-Domain Question Answering on Vietnamese Wikipedia-based Textual Knowledge Source | - | 0
Can Question Rewriting Help Conversational Question Answering? | Code | 1
AGQA 2.0: An Updated Benchmark for Compositional Spatio-Temporal Reasoning | - | 0
ASQA: Factoid Questions Meet Long-Form Answers | Code | 0
Solving Price Per Unit Problem Around the World: Formulating Fact Extraction as Question Answering | - | 0
XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking | Code | 0
MuCoT: Multilingual Contrastive Training for Question-Answering in Low-resource Languages | Code | 0
Answering Count Queries with Explanatory Evidence | Code | 0
Uniform Complexity for Text Generation | Code | 0
Metaethical Perspectives on 'Benchmarking' AI Ethics | - | 0
Breaking Character: Are Subwords Good Enough for MRLs After All? | - | 0
Page 228 of 433

Benchmark Results

#  | Model                                           | Metric | Claimed | Verified | Status
1  | IE-Net (ensemble)                               | EM     | 90.94   | -        | Unverified
2  | FPNet (ensemble)                                | EM     | 90.87   | -        | Unverified
3  | IE-NetV2 (ensemble)                             | EM     | 90.86   | -        | Unverified
4  | SA-Net on Albert (ensemble)                     | EM     | 90.72   | -        | Unverified
5  | SA-Net-V2 (ensemble)                            | EM     | 90.68   | -        | Unverified
6  | FPNet (ensemble)                                | EM     | 90.6    | -        | Unverified
7  | Retro-Reader (ensemble)                         | EM     | 90.58   | -        | Unverified
8  | EntitySpanFocusV2 (ensemble)                    | EM     | 90.52   | -        | Unverified
9  | TransNets + SFVerifier + SFEnsembler (ensemble) | EM     | 90.49   | -        | Unverified
10 | EntitySpanFocus+AT (ensemble)                   | EM     | 90.45   | -        | Unverified