SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated with exact match (EM) and F1 metrics. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
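The EM and F1 metrics mentioned above can be computed per answer and averaged over a dataset. Below is a minimal sketch of SQuAD-style scoring, assuming the standard answer normalization (lowercasing, stripping punctuation and articles, collapsing whitespace); the function names are illustrative, not taken from any official evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over
    the bag-of-tokens overlap between prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice, SQuAD-style evaluation takes the maximum score over all gold answers for a question, then averages across the dataset; for example, `token_f1("the tower in Paris", "Eiffel Tower")` gives 0.4 (one shared token, precision 1/3, recall 1/2).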

Papers

Showing 4001–4025 of 10817 papers

| Title | Status | Hype |
|---|---|---|
| Handling Ontology Gaps in Semantic Parsing | Code | 0 |
| Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0 |
| GYM at Qur’an QA 2023 Shared Task: Multi-Task Transfer Learning for Quranic Passage Retrieval and Question Answering with Large Language Models | Code | 0 |
| Frame- and Entity-Based Knowledge for Common-Sense Argumentative Reasoning | Code | 0 |
| GW-MoE: Resolving Uncertainty in MoE Router with Global Workspace Theory | Code | 0 |
| Guiding Extractive Summarization with Question-Answering Rewards | Code | 0 |
| A Memory-Network Based Solution for Multivariate Time-Series Forecasting | Code | 0 |
| Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types | Code | 0 |
| Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph | Code | 0 |
| GUIDEQ: Framework for Guided Questioning for progressive informational collection and classification | Code | 0 |
| Grounding Answers for Visual Questions Asked by Visually Impaired People | Code | 0 |
| Grounded Graph Decoding Improves Compositional Generalization in Question Answering | Code | 0 |
| Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0 |
| Faithful Embeddings for Knowledge Base Queries | Code | 0 |
| A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering | Code | 0 |
| Scaling Reasoning can Improve Factuality in Large Language Models | Code | 0 |
| Graph Learning in the Era of LLMs: A Survey from the Perspective of Data, Models, and Tasks | Code | 0 |
| Fine-Grained Stateful Knowledge Exploration: A Novel Paradigm for Integrating Knowledge Graphs with Large Language Models | Code | 0 |
| Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution | Code | 0 |
| Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension | Code | 0 |
| AmazonQA: A Review-Based Question Answering Task | Code | 0 |
| Comparative Study of Machine Learning Models and BERT on SQuAD | Code | 0 |
| Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering | Code | 0 |
| A mathematical model for universal semantics | Code | 0 |
| GraphextQA: A Benchmark for Evaluating Graph-Enhanced Large Language Models | Code | 0 |
Page 161 of 433

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |