SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1 score. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
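EM scores a prediction 1 if it matches a reference answer exactly after light normalization, while F1 measures token overlap between prediction and reference. Below is a minimal sketch of this scoring, following the normalization conventions of the official SQuAD evaluation script (lowercasing, stripping punctuation and articles, collapsing whitespace); the function names are illustrative, not any particular library's API.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 iff the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0 (case and articles ignored)
    print(f1_score("in the city of Paris", "Paris"))        # 0.4 (1 shared token; 4 pred vs 1 gold)
```

Stripping articles and punctuation keeps both metrics from penalizing superficial span differences such as "the Eiffel Tower" versus "Eiffel Tower".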

Papers

Showing 3781–3790 of 10817 papers

Title | Status | Hype
Systematic Assessment of Factual Knowledge in Large Language Models | — | 0
Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications | — | 0
Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection | Code | 0
Open-ended Commonsense Reasoning with Unrestricted Answer Scope | — | 0
Alexpaca: Learning Factual Clarification Question Generation Without Examples | — | 0
QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for Zero-Shot Commonsense Question Answering | Code | 0
Emulating Human Cognitive Processes for Expert-Level Medical Question-Answering with Large Language Models | — | 0
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection | Code | 4
UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0
If the Sources Could Talk: Evaluating Large Language Models for Research Assistance in History | Code | 0
Page 379 of 1082

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified
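The claimed EM values above are corpus-level percentages. In SQuAD-style evaluation a question may have several acceptable gold answers, and the per-example score is the maximum over those references before averaging. A minimal sketch of that aggregation, assuming the exact_match helper from the earlier example:

```python
def metric_max_over_ground_truths(metric_fn, prediction, references):
    """Per-example score: best match of the prediction against any reference answer."""
    return max(metric_fn(prediction, ref) for ref in references)

def corpus_em(predictions, references):
    """Corpus-level EM as a percentage.

    `references` is a list of answer lists, one list per question.
    Assumes the `exact_match` helper from the sketch above.
    """
    scores = [
        metric_max_over_ground_truths(exact_match, pred, refs)
        for pred, refs in zip(predictions, references)
    ]
    return 100.0 * sum(scores) / len(scores)

# e.g. corpus_em(["eiffel tower"], [["The Eiffel Tower", "Tour Eiffel"]]) -> 100.0
```

The same max-over-references convention applies to F1, which is why leaderboard EM and F1 are reported on the same 0–100 scale.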