SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
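The EM and F1 metrics mentioned above are computed per answer after text normalization. As a rough sketch (function names are my own, but the normalization steps mirror the common SQuAD-style evaluation: lowercasing, stripping punctuation and articles, collapsing whitespace):

```python
import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation, drop articles, collapse whitespace,
    # as in common SQuAD-style answer preprocessing.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    # EM: 1 if the normalized strings are identical, else 0.
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    # Token-level F1 between predicted and gold answer spans.
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))           # 1
print(round(f1_score("in the city of Paris", "Paris"), 2))       # 0.4
```

In benchmark tables, these per-example scores are averaged over the dataset (and, when a question has several reference answers, the maximum over references is usually taken first).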

Papers

10817 papers

Title | Status | Hype
Lifting the Curse of Multilinguality by Pre-training Modular Transformers | — | 0
Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models | — | 0
Lighter And Better: Towards Flexible Context Adaptation For Retrieval Augmented Generation | — | 0
LightPAL: Lightweight Passage Retrieval for Open Domain Multi-Document Summarization | — | 0
Lightweight Convolutional Approaches to Reading Comprehension on SQuAD | — | 0
Lightweight In-Context Tuning for Multimodal Unified Models | — | 0
Does Similarity Matter? The Case of Answer Extraction from Technical Discussion Forums | — | 0
A Three-Step Transition-Based System for Non-Projective Dependency Parsing | — | 0
Machine Reading Comprehension: Generative or Extractive Reader? | — | 0
Conversational Question Answering with Reformulations over Knowledge Graph | — | 0
Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer | — | 0
Exploring Entities in Event Detection as Question Answering | — | 0
LIMSIILES: Basic English Substitution for Student Answer Assessment at SemEval 2013 | — | 0
Idest: Learning a Distributed Representation for Event Patterns | — | 0
Lingke: A Fine-grained Multi-turn Chatbot for Customer Service | — | 0
Machine Comprehension with Syntax, Frames, and Semantics | — | 0
Identifying Various Kinds of Event Mentions in K-Parser Output | — | 0
A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering | — | 0
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | — | 0
Linguistically-Based Deep Unstructured Question Answering | — | 0
Linguistically Driven Graph Capsule Network for Visual Question Reasoning | — | 0
Linguistically-Informed Neural Architectures for Lexical, Syntactic and Semantic Tasks in Sanskrit | — | 0
Linguistically Motivated Question Classification | — | 0
Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases | — | 0
Identifying the Provision of Choices in Privacy Policy Text | — | 0
Page 232 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified
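The Status column presumably reflects whether a claimed score has been independently reproduced. A minimal sketch of such a check (the function, the tolerance, and the "Disputed" outcome are my own assumptions, not the site's actual rule):

```python
def verification_status(claimed, verified=None, tol=0.05):
    # Hypothetical rule: a result counts as verified when an
    # independently reproduced score exists and falls within `tol`
    # EM points of the claimed score; a reproduced score outside
    # that tolerance would mark the claim as disputed.
    if verified is None:
        return "Unverified"
    return "Verified" if abs(claimed - verified) <= tol else "Disputed"

print(verification_status(90.94))          # Unverified (no reproduction yet)
print(verification_status(90.94, 90.91))   # Verified (within tolerance)
print(verification_status(90.94, 89.50))   # Disputed (outside tolerance)
```

Under a rule like this, every row above is Unverified simply because no reproduced score has been recorded in the Verified column.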