SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.
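The EM and F1 metrics mentioned above are conventionally computed after SQuAD-style answer normalization (lowercasing, stripping punctuation and articles, collapsing whitespace). A minimal sketch of that convention, written from scratch here rather than taken from any official evaluation script:

```python
import collections
import re
import string

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
print(round(f1_score("Eiffel Tower in Paris", "the Eiffel Tower"), 2))  # 0.67
```

When a question has several acceptable gold answers, benchmarks typically take the maximum EM and F1 over the gold set for each example, then average over the dataset.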

(Image credit: SQuAD)

Papers

Showing 6101–6150 of 10817 papers

Title | Status | Hype
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? | | 0
MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension | | 0
A Survey on Why-Type Question Answering Systems | | 0
Human-Adversarial Visual Question Answering | | 0
Human Adversarial QA: Did the Model Understand the Paragraph? | | 0
Huge Automatically Extracted Training-Sets for Multilingual Word Sense Disambiguation | | 0
MCQA: Multimodal Co-attention Based Network for Question Answering | | 0
MCR-Net: A Multi-Step Co-Interactive Relation Network for Unanswerable Questions on Machine Reading Comprehension | | 0
MCSFF: Multi-modal Consistency and Specificity Fusion Framework for Entity Alignment | | 0
MCTS-KBQA: Monte Carlo Tree Search for Knowledge Base Question Answering | | 0
E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model | | 0
Contrastive Data and Learning for Natural Language Processing | | 0
A survey on VQA_Datasets and Approaches | | 0
Mitigating Knowledge Conflicts in Language Model-Driven Question Answering | | 0
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | | 0
EACO: Enhancing Alignment in Multimodal LLMs via Critical Observation | | 0
Meaningful Answer Generation of E-Commerce Question-Answering | | 0
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering | | 0
Measuring an Artificial Intelligence System's Performance on a Verbal IQ Test For Young Children | | 0
Contrastive Cross-Modal Knowledge Sharing Pre-training for Vision-Language Representation Learning and Retrieval | | 0
Biomedical Question Answering: A Survey of Approaches and Challenges | | 0
Mitigating Clickbait: An Approach to Spoiler Generation Using Multitask Learning | | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | | 0
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models | | 0
Measuring Compositional Consistency for Video Question Answering | | 0
HRCA+: Advanced Multiple-choice Machine Reading Comprehension Method | | 0
HPI Question Answering System in BioASQ 2016 | | 0
Measuring Domain Portability and Error Propagation in Biomedical QA | | 0
Biomedical Question Answering via Weighted Neural Network Passage Retrieval | | 0
A Survey on Table Question Answering: Recent Advances | | 0
How You Ask Matters: The Effect of Paraphrastic Questions to BERT Performance on a Clinical SQuAD Dataset | | 0
Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction | | 0
Measuring Popularity of Machine-Generated Sentences Using Term Count, Document Frequency, and Dependency Language Model | | 0
Mitigating Bias for Question Answering Models by Tracking Bias Influence | | 0
Measuring Retrieval Complexity in Question Answering Systems | | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | | 0
Measuring Sentences Similarity: A Survey | | 0
Mitigating Large Language Model Hallucination with Faithful Finetuning | | 0
Measuring the Limit of Semantic Divergence for English Tweets. | | 0
MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering | | 0
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop Question Answering | | 0
Continuous Training and Fine-tuning for Domain-Specific Language Models in Medical Question Answering | | 0
How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation | | 0
A Survey on Table-and-Text HybridQA: Concepts, Methods, Challenges and Future Directions | | 0
How Well can We Learn Interpretable Entity Types from Text? | | 0
How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | | 0
How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey | | 0
Echo-Attention: Attend Once and Get N Attentions for Free | | 0
How Transferable are Reasoning Patterns in VQA? | | 0
Page 123 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified