SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
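The EM and F1 metrics mentioned above follow a standard convention popularized by the SQuAD evaluation script: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and gold answer. A minimal sketch of that computation (illustrative, not the official script):

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD convention)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction, gold):
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    # Multiset intersection counts each shared token at most as often as it
    # appears in both answers.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
print(f1_score("in the Eiffel Tower", "the Eiffel Tower, Paris"))
```

In practice, benchmarks with multiple reference answers take the maximum EM and F1 over all references for each question, then average over the dataset.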

(Image credit: SQuAD)

Papers

Showing 25 of 10817 papers

Title | Status | Hype
MoleculeQA: A Dataset to Evaluate Factual Accuracy in Molecular Comprehension | Code | 0
Jack the Reader -- A Machine Reading Framework | Code | 0
Hierarchical Graph Network for Multi-hop Question Answering | Code | 0
A Case Study of Cross-Lingual Zero-Shot Generalization for Classical Languages in LLMs | Code | 0
An Evaluation Framework for Attributed Information Retrieval using Large Language Models | Code | 0
Rethinking Label Smoothing on Multi-hop Question Answering | Code | 0
Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0
Biomedical Event Extraction as Multi-turn Question Answering | Code | 0
Monitoring Decomposition Attacks in LLMs with Lightweight Sequential Monitors | Code | 0
Cross-lingual Information Retrieval with BERT | Code | 0
Biomedical Entity Linking as Multiple Choice Question Answering | Code | 0
Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | Code | 0
A Bias-Variance-Covariance Decomposition of Kernel Scores for Generative Models | Code | 0
Functorial Question Answering | Code | 0
HeySQuAD: A Spoken Question Answering Dataset | Code | 0
JMLR: Joint Medical LLM and Retrieval Training for Enhancing Reasoning and Professional Question Answering Capability | Code | 0
JNLP Team: Deep Learning for Legal Processing in COLIEE 2020 | Code | 0
Cross-lingual Inference with A Chinese Entailment Graph | Code | 0
"John is 50 years old, can his son be 65?" Evaluating NLP Models' Understanding of Feasibility | Code | 0
Joint Answering and Explanation for Visual Commonsense Reasoning | Code | 0
CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts | Code | 0
More Accurate Question Answering on Freebase | Code | 0
CRiskEval: A Chinese Multi-Level Risk Evaluation Benchmark Dataset for Large Language Models | Code | 0
A Neuro-Symbolic ASP Pipeline for Visual Question Answering | Code | 0
Page 370 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified