SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
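The EM and F1 metrics mentioned above are usually computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped) before comparison, EM checks for an exact string match, and F1 is the token-overlap harmonic mean of precision and recall. A minimal sketch, closely following the normalization used by the official SQuAD evaluation script:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 over the normalized answers."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    # Multiset intersection counts each shared token at most
    # as often as it appears in both answers.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))      # 1.0
print(round(f1_score("in Paris, France", "Paris"), 2))      # 0.5
```

On benchmarks with multiple reference answers, both metrics are typically taken as the maximum score over all references for each question, then averaged over the dataset.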

(Image credit: SQuAD)

Papers

Showing 6001–6025 of 10817 papers

Title | Hype
M3SciQA: A Multi-Modal Multi-Document Scientific QA Benchmark for Evaluating Foundation Models | 0
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | 0
Learning-to-Defer for Extractive Question Answering | 0
Disentangling Online Chats with DAG-Structured LSTMs | 0
Learning to Decompose Compound Questions with Reinforcement Learning | 0
Learning to Coordinate Multiple Reinforcement Learning Agents for Diverse Query Reformulation | 0
Machine Comprehension with Syntax, Frames, and Semantics | 0
Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases | 0
Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | 0
Machine Reading Comprehension: Generative or Extractive Reader? | 0
Machine Translation Evaluation for Arabic using Morphologically-enriched Embeddings | 0
Machine Translation Evaluation Meets Community Question Answering | 0
ADVISE: Symbolism and External Knowledge for Decoding Advertisements | 0
Macquarie University at BioASQ 5b -- Query-based Summarisation Techniques for Selecting the Ideal Answers | 0
Learning to Compute Word Embeddings On the Fly | 0
Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering | 0
Learning to Compose Diversified Prompts for Image Emotion Classification | 0
Disease Identification From Unstructured User Input | 0
Learning to Collaborate for Question Answering and Asking | 0
Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training | 0
Learning to Clarify by Reinforcement Learning Through Reward-Weighted Fine-Tuning | 0
Discriminating between Similar Languages with Word-level Convolutional Neural Networks | 0
Benchmarking Large Multimodal Models for Ophthalmic Visual Question Answering with OphthalWeChat | 0
Make Every Example Count: On the Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets | 0
DiscreteSLU: A Large Language Model with Self-Supervised Discrete Speech Units for Spoken Language Understanding | 0
Page 241 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified