SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as Exact Match (EM) and F1. Some recent top-performing models are T5 and XLNet.
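The EM and F1 metrics mentioned above can be sketched in a few lines. This is a simplified version of the SQuAD-style evaluation: answers are normalized (lowercased, punctuation and articles stripped) before comparison, EM checks for an exact string match, and F1 measures token overlap between prediction and reference. The helper names here are illustrative, not part of any particular benchmark's official script.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Official evaluation scripts additionally take the maximum score over multiple gold answers per question; this sketch compares against a single reference.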

(Image credit: SQuAD)

Papers

Showing 6326–6350 of 10817 papers

Title | Status | Hype
SD-QA: Spoken Dialectal Question Answering for the Real World | Code | 1
Investigating Post-pretraining Representation Alignment for Cross-Lingual Question Answering | Code | 0
How to find a good image-text embedding for remote sensing visual question answering? | - | 0
Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap | Code | 0
BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles | - | 0
ParaShoot: A Hebrew Question Answering Dataset | Code | 0
Towards Universal Dense Retrieval for Open-domain Question Answering | - | 0
A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the OBQA Context | - | 0
NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset | Code | 0
Recursively Summarizing Books with Human Feedback | - | 0
Salience-Aware Event Chain Modeling for Narrative Understanding | - | 0
K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for Question Answering | - | 0
RETRONLU: Retrieval Augmented Task-Oriented Semantic Parsing | - | 0
What Would it Take to get Biomedical QA Systems into Practice? | - | 0
Blindness to Modality Helps Entailment Graph Mining | Code | 0
Does Vision-and-Language Pretraining Improve Lexical Grounding? | Code | 1
Relation-Guided Pre-Training for Open-Domain Question Answering | - | 0
Modality and Negation in Event Extraction | Code | 0
Emily: Developing An Emotion-affective Open-Domain Chatbot with Knowledge Graph-based Persona | - | 0
Complex Temporal Question Answering on Knowledge Graphs | Code | 1
Supervised Relation Classification as Two-way Span-Prediction | - | 0
Machine Reading Comprehension: Generative or Extractive Reader? | - | 0
Transformers Can Compose Skills To Solve Novel Problems Without Finetuning | - | 0
Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics | - | 0
CodeQA: A Question Answering Dataset for Source Code Comprehension | Code | 1
Page 254 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified