SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated with metrics such as Exact Match (EM) and token-level F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
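To make the EM and F1 metrics above concrete, here is a minimal sketch of SQuAD-style scoring of a predicted answer against a gold answer. The normalization steps (lowercasing, stripping punctuation and articles, collapsing whitespace) follow the common SQuAD convention; function names here are illustrative, not from any particular library.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    """Token-level F1 between predicted and gold answer strings."""
    pred_tokens = normalize(pred).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1 after normalization, while a prediction with an extra token still gets partial F1 credit. Benchmark leaderboards typically report the average of these per-question scores over the evaluation set (taking the max over multiple gold answers where available).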

Papers

Showing 9276–9300 of 10817 papers

Title | Status | Hype
COV19IR : COVID-19 Domain Literature Information Retrieval | Code | 0
JPAVE: A Generation and Classification-based Model for Joint Product Attribute Prediction and Value Extraction | Code | 0
Rethinking the Objectives of Extractive Question Answering | Code | 0
RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models | Code | 0
MphayaNER: Named Entity Recognition for Tshivenda | Code | 0
Judging the Judges: Can Large Vision-Language Models Fairly Evaluate Chart Comprehension and Reasoning? | Code | 0
ANetQA: A Large-scale Benchmark for Fine-grained Compositional Reasoning over Untrimmed Videos | Code | 0
Coupling Context Modeling with Zero Pronoun Recovering for Document-Level Natural Language Generation | Code | 0
Counting Everyday Objects in Everyday Scenes | Code | 0
HeroNet: A Hybrid Retrieval-Generation Network for Conversational Bots | Code | 0
Just ASR + LLM? A Study on Speech Large Language Models' Ability to Identify and Understand Speaker in Spoken Dialogue | Code | 0
'Just because you are right, doesn't mean I am wrong': Overcoming a Bottleneck in the Development and Evaluation of Open-Ended Visual Question Answering (VQA) Tasks | Code | 0
Quasar: Datasets for Question Answering by Search and Reading | Code | 0
Just ClozE! A Novel Framework for Evaluating the Factual Consistency Faster in Abstractive Summarization | Code | 0
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0
KaFSP: Knowledge-Aware Fuzzy Semantic Parsing for Conversational Question Answering over a Large-Scale Knowledge Base | Code | 0
MQA: Answering the Question via Robotic Manipulation | Code | 0
KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers | Code | 0
A Better Way to Attend: Attention with Trees for Video Question Answering | Code | 0
Counterfactual Learning from Human Proofreading Feedback for Semantic Parsing | Code | 0
Helmsman of the Masses? Evaluate the Opinion Leadership of Large Language Models in the Werewolf Game | Code | 0
HCqa: Hybrid and Complex Question Answering on Textual Corpus and Knowledge Graph | Code | 0
ParaQA: A Question Answering Dataset with Paraphrase Responses for Single-Turn Conversation | Code | 0
M-QALM: A Benchmark to Assess Clinical Reading Comprehension and Knowledge Recall in Large Language Models via Question Answering | Code | 0
MQDD: Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain | Code | 0
Page 372 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified