SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
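
Both metrics are computed per question and averaged over the dataset. Below is a minimal sketch of EM and F1 in the style of the official SQuAD evaluation script: EM checks for an exact string match after normalization (lowercasing, stripping punctuation and articles), while F1 measures token-level overlap between prediction and reference. The function names and normalization details are illustrative assumptions, not taken from this page.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between a predicted span and a reference answer."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

# A partial span overlap scores 0 on EM but partial credit on F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 (articles removed)
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

When a question has multiple reference answers, the SQuAD script takes the maximum of each metric over the references before averaging across questions.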

Papers

Showing 3031–3040 of 10817 papers

| Title | Status | Hype |
| --- | --- | --- |
| A General and Flexible Multi-concept Parsing Framework for Multilingual Semantic Matching | — | 0 |
| Enhancing Generalization in Medical Visual Question Answering Tasks via Gradient-Guided Model Perturbation | — | 0 |
| Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering | Code | 0 |
| MOKA: Open-World Robotic Manipulation through Mark-Based Visual Prompting | — | 0 |
| Reliable, Adaptable, and Attributable Language Models with Retrieval | — | 0 |
| Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use | — | 0 |
| An Improved Traditional Chinese Evaluation Suite for Foundation Model | — | 0 |
| Brilla AI: AI Contestant for the National Science and Maths Quiz | Code | 1 |
| To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering | Code | 1 |
| Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review | Code | 3 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | — | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | — | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified |