SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
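The EM and F1 metrics mentioned above are string-level scores computed per question and averaged over the dataset. A minimal sketch of SQuAD-style answer normalization, exact match, and token-level F1 (following the conventions of the official SQuAD evaluation script, but not a copy of it):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

With multiple reference answers per question (as in SQuAD), each metric is taken as the maximum over the references before averaging.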

Papers

Showing 10476–10500 of 10817 papers

Title | Status | Hype
Temporal Reasoning via Audio Question Answering | Code | 0
Towards More Equitable Question Answering Systems: How Much More Data Do You Need? | Code | 0
Using link and content over time for embedding generation in Dynamic Attributed Networks | Code | 0
Towards Language-guided Visual Recognition via Dynamic Convolutions | Code | 0
Semantics-aware BERT for Language Understanding | Code | 0
SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks | Code | 0
When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively | Code | 0
Towards Knowledge-Augmented Visual Question Answering | Code | 0
TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness | Code | 0
TrustUQA: A Trustful Framework for Unified Structured Data Question Answering | Code | 0
YaleNLP @ PerAnsSumm 2025: Multi-Perspective Integration via Mixture-of-Agents for Enhanced Healthcare QA Summarization | Code | 0
Towards Interpreting BERT for Reading Comprehension Based QA | Code | 0
Template-based Question Answering using Recursive Neural Networks | Code | 0
Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space | Code | 0
Towards Interpretable Reinforcement Learning Using Attention Augmented Agents | Code | 0
Zero-shot Visual Question Answering with Language Model Feedback | Code | 0
Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering | Code | 0
TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering | Code | 0
Towards End-to-End Open Conversational Machine Reading | Code | 0
Sentence Similarity Learning by Lexical Decomposition and Composition | Code | 0
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering | Code | 0
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models | Code | 0
Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base | Code | 0
VQA Therapy: Exploring Answer Differences by Visually Grounding Answers | Code | 0
Page 420 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified