SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated with metrics like exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.
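To make the two metrics concrete, here is a minimal sketch of SQuAD-style EM and token-level F1 between a predicted answer and a gold answer. The normalization steps (lowercasing, stripping punctuation and articles, collapsing whitespace) follow the convention popularized by the SQuAD evaluation script; function names here are illustrative.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1.0 after normalization, while `f1_score("the eiffel tower in paris", "eiffel tower")` gives partial credit (precision 0.5, recall 1.0, F1 ≈ 0.67). Benchmark leaderboards usually report both averaged over the full evaluation set, taking the maximum score over multiple gold answers per question.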

(Image credit: SQuAD)

Papers

Showing 1311–1320 of 10,817 papers

Title | Status | Hype
H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks | Code | 1
Conversational Question Answering over Knowledge Graphs with Transformer and Graph Attention Networks | Code | 1
ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences | Code | 1
ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering | Code | 1
AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web | Code | 1
KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs | Code | 1
HiTab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation | Code | 1
ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining | Code | 1
ChineseEcomQA: A Scalable E-commerce Concept Evaluation Benchmark for Large Language Models | Code | 1
HittER: Hierarchical Transformers for Knowledge Graph Embeddings | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified