SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics such as Exact Match (EM) and F1. Some recent top-performing models are T5 and XLNet.
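The EM and F1 metrics mentioned above can be sketched in a few lines. This is a minimal illustration in the style of SQuAD-family evaluation (lowercase, strip punctuation and articles, then compare); the helper names are my own, not an official API.

```python
# Sketch of SQuAD-style answer scoring (illustrative, not the official script).
# Exact Match (EM): 1 if the normalized prediction equals the normalized gold
# answer. F1: token-level overlap between prediction and gold tokens.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, remove articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    """EM is all-or-nothing on the normalized strings."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Harmonic mean of token precision and recall against the gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` scores 1 after normalization, while `f1_score("eiffel tower in paris", "eiffel tower")` gives partial credit (precision 0.5, recall 1.0, F1 about 0.67). Benchmark leaderboards usually take the maximum score over all gold answers per question and average over the dataset.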

(Image credit: SQuAD)

Papers

Showing 7901–7925 of 10817 papers

Title | Status | Hype
Structured Knowledge Grounding for Question Answering | | 0
Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | | 0
Gpt-4: A Review on Advancements and Opportunities in Natural Language Processing | | 0
GPT-3 Models are Few-Shot Financial Reasoners | | 0
GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation | | 0
Flexible Frame Selection for Efficient Video Reasoning | | 0
Complex Question Answering: Unsupervised Learning Approaches and Experiments | | 0
QUINT: Interpretable Question Answering over Knowledge Bases | | 0
AssistPDA: An Online Video Surveillance Assistant for Video Anomaly Prediction, Detection, and Analysis | | 0
QU-IR at SemEval 2016 Task 3: Learning to Rank on Arabic Community Question Answering Forums with Word Embedding | | 0
Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review | | 0
GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for Chart Question Answering | | 0
QuOTE: Question-Oriented Text Embeddings | | 0
QurAna: Corpus of the Quran annotated with Pronominal Anaphora | | 0
Goodwill Hunting: Analyzing and Repurposing Off-the-Shelf Named Entity Linking Systems | | 0
Complex Question Answering on knowledge graphs using machine translation and multi-task learning | | 0
Good, Great, Excellent: Global Inference of Semantic Intensities | | 0
Complex QA and language models hybrid architectures, Survey | | 0
R2GQA: Retriever-Reader-Generator Question Answering System to Support Students Understanding Legal Regulations in Higher Education | | 0
R3: A Reading Comprehension Benchmark Requiring Reasoning Processes | | 0
R3: Refined Retriever-Reader pipeline for Multidoc2dial | | 0
FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | | 0
R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models | | 0
RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training | | 0
ReAgent: Reversible Multi-Agent Reasoning for Knowledge-Enhanced Multi-Hop QA | | 0
Page 317 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified