SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated with exact match (EM) and F1 metrics. Some recent top-performing models are T5 and XLNet.
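The EM and F1 metrics mentioned above are computed on normalized answer strings. A minimal sketch of this evaluation, following the SQuAD-style convention of lowercasing and stripping punctuation and articles before comparing (function names here are illustrative, not from any official evaluation script):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between the predicted and reference answers."""
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))          # → 1
print(f1_score("the tall Eiffel Tower", "Eiffel Tower"))        # → 0.8
```

In benchmark reporting, both scores are averaged over the dataset (taking the maximum over multiple reference answers per question, where available) and expressed as percentages.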

(Image credit: SQuAD)

Papers

Showing 1801–1825 of 10817 papers

Title | Status | Hype
AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge | Code | 1
Large Language Models are Temporal and Causal Reasoners for Video Question Answering | Code | 1
Large Language Models Reflect the Ideology of their Creators | Code | 1
Language Models are Unsupervised Multitask Learners | Code | 1
Language Models as Science Tutors | Code | 1
CBench: Towards Better Evaluation of Question Answering Over Knowledge Graphs | Code | 1
Language-Informed Visual Concept Learning | Code | 1
CBR-RAG: Case-Based Reasoning for Retrieval Augmented Generation in LLMs for Legal Question Answering | Code | 1
Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering | Code | 1
I2I: Initializing Adapters with Improvised Knowledge | Code | 1
CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training | Code | 1
Callee: Recovering Call Graphs for Binaries with Transfer and Contrastive Learning | Code | 1
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge | Code | 1
Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1
IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model | Code | 1
Cerbero-7B: A Leap Forward in Language-Specific LLMs Through Enhanced Chat Corpus Generation and Evaluation | Code | 1
RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering | Code | 1
RuBioRoBERTa: a pre-trained biomedical language model for Russian language biomedical text mining | Code | 1
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
RuMedBench: A Russian Medical Language Understanding Benchmark | Code | 1
Are Deep Neural Networks SMARTer than Second Graders? | Code | 1
R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering | Code | 1
Language Models Learn to Mislead Humans via RLHF | Code | 1
LaMPP: Language Models as Probabilistic Priors for Perception and Action | Code | 1
Combo of Thinking and Observing for Outside-Knowledge VQA | Code | 1
Page 73 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified