SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated with Exact Match (EM) and F1 scores. Recent top-performing models include T5 and XLNet.
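The EM and F1 metrics mentioned above can be illustrated with a minimal sketch of SQuAD-style scoring: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and reference. The normalization rules here follow the common SQuAD convention but are an assumption about any particular leaderboard's exact script.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 between a predicted and a reference answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1 (case/articles ignored)
print(round(f1_score("in Paris, France", "Paris"), 2))  # 0.5
```

In benchmark tables, both metrics are averaged over all questions and reported as percentages (e.g. EM 90.94 means 90.94% of predictions matched a reference exactly after normalization).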

(Image credit: SQuAD)

Papers

Showing 6851–6875 of 10817 papers

Title | Status | Hype
`Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | | 0
Structural Encoding and Pre-training Matter: Adapting BERT for Table-Based Fact Verification | | 0
Complex Question Answering on knowledge graphs using machine translation and multi-task learning | | 0
FeTaQA: Free-form Table Question Answering | Code | 1
Towards General Purpose Vision Systems | Code | 1
CUPID: Adaptive Curation of Pre-training Data for Video-and-Language Representation Learning | | 0
Integrating Subgraph-aware Relation and Direction Reasoning for Question Answering | | 0
Are Bias Mitigation Techniques for Deep Learning Effective? | Code | 1
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training | | 0
Analysis on Image Set Visual Question Answering | | 0
AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning | | 0
Domain-robust VQA with diverse datasets and methods but no target labels | | 0
SUTD-TrafficQA: A Question Answering Benchmark and an Efficient Network for Video Reasoning over Traffic Events | Code | 1
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1
'Just because you are right, doesn't mean I am wrong': Overcoming a Bottleneck in the Development and Evaluation of Open-Ended Visual Question Answering (VQA) Tasks | Code | 0
InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? | | 0
You Can Do Better! If You Elaborate the Reason When Making Prediction | | 0
A Comprehensive Review of the Video-to-Text Problem | Code | 1
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | | 0
On the hidden treasure of dialog in video question answering | Code | 1
Visual Grounding Strategies for Text-Only Natural Language Processing | | 0
UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark | Code | 1
Fabula Entropy Indexing: Objective Measures of Story Coherence | | 0
Multi-Modal Answer Validation for Knowledge-Based VQA | Code | 1
QuestEval: Summarization Asks for Fact-based Evaluation | Code | 1
Page 275 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified