SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated on exact match (EM) and F1 scores. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
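The EM and F1 metrics mentioned above follow the SQuAD evaluation convention: predicted and gold answers are normalized (lowercased, punctuation and articles stripped, whitespace collapsed) before comparison. A minimal sketch of that computation, assuming the standard SQuAD normalization steps (function names here are illustrative, not from any official evaluation script):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, drop articles, collapse whitespace
    (the normalization convention used by SQuAD-style evaluation)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over
    the multiset overlap of normalized answer tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmarks with multiple gold answers per question (as in SQuAD), each metric is typically taken as the maximum over the gold answers, then averaged over the dataset.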

Papers

Showing 7901–7925 of 10817 papers

Title | Status | Hype
Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation | – | 0
Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | – | 0
Inverse Visual Question Answering with Multi-Level Attentions | – | 0
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering | – | 0
Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool | – | 0
Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation | – | 0
Automated Answer Validation using Text Similarity | – | 0
QUINT: Interpretable Question Answering over Knowledge Bases | – | 0
CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking | – | 0
Introduction to Neural Network based Approaches for Question Answering over Knowledge Graphs | – | 0
Introduction of a Probabilistic Language Model to Non-Factoid Question Answering Using Example Q&A Pairs | – | 0
CSReader at SemEval-2018 Task 11: Multiple Choice Question Answering as Textual Entailment | – | 0
AutoKnow: Self-Driving Knowledge Collection for Products of Thousands of Types | – | 0
QurAna: Corpus of the Quran annotated with Pronominal Anaphora | – | 0
An Audio-enriched BERT-based Framework for Spoken Multiple-choice Question Answering | – | 0
Introduction method for argumentative dialogue using paired question-answering interchange about personality | – | 0
Introducing Semantics into Speech Encoders | – | 0
Introducing RezoJDM16k: a French KnowledgeGraph DataSet for Link Prediction | – | 0
CS-NLP team at SemEval-2020 Task 4: Evaluation of State-of-the-art NLP Deep Learning Architectures on Commonsense Reasoning Task | – | 0
R3: A Reading Comprehension Benchmark Requiring Reasoning Processes | – | 0
R3: Refined Retriever-Reader pipeline for Multidoc2dial | – | 0
Introducing "Forecast Utterance" for Conversational Data Science | – | 0
R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models | – | 0
RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training | – | 0
CSE-SFP: Enabling Unsupervised Sentence Representation Learning via a Single Forward Pass | – | 0
Page 317 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | – | Unverified
2 | FPNet (ensemble) | EM | 90.87 | – | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | – | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | – | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | – | Unverified
6 | FPNet (ensemble) | EM | 90.6 | – | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | – | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | – | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | – | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | – | Unverified
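Every entry above is marked Unverified because no independently reproduced score is recorded alongside the claimed one. A hypothetical sketch of how such a status could be assigned, assuming a simple tolerance rule (the function name, labels, and tolerance are illustrative assumptions, not the site's actual logic):

```python
from typing import Optional

def verification_status(claimed: float, verified: Optional[float],
                        tol: float = 0.1) -> str:
    """Compare a claimed metric against an independently reproduced one.
    Hypothetical logic for illustration, not SOTAVerified's real rule."""
    if verified is None:
        return "Unverified"   # no reproduction has been recorded yet
    if abs(claimed - verified) <= tol:
        return "Verified"     # reproduction matches within tolerance
    return "Mismatch"         # reproduction disagrees with the claim
```

Under such a rule, a leaderboard row with an empty Verified column always stays Unverified regardless of the claimed score.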