SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models are typically evaluated on metrics such as exact match (EM) and F1, and some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
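The EM and F1 metrics mentioned above compare a predicted answer string against a gold answer. A minimal sketch of SQuAD-style scoring, following the conventions of the official SQuAD evaluation script (answer normalization is assumed here: lowercasing, stripping punctuation and English articles, collapsing whitespace):

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> int:
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark practice each prediction is scored against every gold reference and the maximum is taken, then scores are averaged over the dataset; for example, `exact_match("The Eiffel Tower!", "eiffel tower")` is 1 because normalization removes the article and punctuation.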

Papers

Showing 10751–10800 of 10817 papers

| Title | Status | Hype |
|---|---|---|
| TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering | Code | 0 |
| What Ingredients Make for an Effective Crowdsourcing Protocol for Difficult NLU Data Collection Tasks? | Code | 0 |
| XLTime: A Cross-Lingual Knowledge Transfer Framework for Temporal Expression Extraction | Code | 0 |
| Self-Consistency of Large Language Models under Ambiguity | Code | 0 |
| SemEval-2017 Task 3: Community Question Answering | Code | 0 |
| Visual Question Answering: Datasets, Algorithms, and Future Challenges | Code | 0 |
| Short Text Conversation Based on Deep Neural Network and Analysis on Evaluation Measures | Code | 0 |
| Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | Code | 0 |
| Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models | Code | 0 |
| TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank | Code | 0 |
| What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning | Code | 0 |
| Unveiling Divergent Inductive Biases of LLMs on Temporal Data | Code | 0 |
| SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models | Code | 0 |
| Select, Substitute, Search: A New Benchmark for Knowledge-Augmented Visual Question Answering | Code | 0 |
| Unveiling Uncertainty: A Deep Dive into Calibration and Performance of Multimodal Large Language Models | Code | 0 |
| Shortcomings of Question Answering Based Factuality Frameworks for Error Localization | Code | 0 |
| Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0 |
| WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0 |
| SG-Net: Syntax-Guided Machine Reading Comprehension | Code | 0 |
| SemanticZ at SemEval-2016 Task 3: Ranking Relevant Answers in Community Question Answering Using Semantic Similarity Based on Fine-tuned Word Embeddings | Code | 0 |
| Visual Question Answering using Deep Learning: A Survey and Performance Analysis | Code | 0 |
| Scent of Knowledge: Optimizing Search-Enhanced Reasoning with Information Foraging | Code | 0 |
| WiSeBE: Window-based Sentence Boundary Evaluation | Code | 0 |
| UProp: Investigating the Uncertainty Propagation of LLMs in Multi-Step Agentic Decision-Making | Code | 0 |
| Text Understanding with the Attention Sum Reader Network | Code | 0 |
| UQA: Corpus for Urdu Question Answering | Code | 0 |
| Visual Question Answering: which investigated applications? | Code | 0 |
| Selective Token Generation for Few-shot Natural Language Generation | Code | 0 |
| UrduFactCheck: An Agentic Fact-Checking Framework for Urdu with Evidence Boosting and Benchmarking | Code | 0 |
| What's Different between Visual Question Answering for Machine "Understanding" Versus for Accessibility? | Code | 0 |
| Selective Question Answering under Domain Shift | Code | 0 |
| What's in a Name? Answer Equivalence For Open-Domain Question Answering | Code | 0 |
| Selection-based Question Answering of an MOOC | Code | 0 |
| Visual-RAG: Benchmarking Text-to-Image Retrieval Augmented Generation for Visual Knowledge Intensive Queries | Code | 0 |
| Visual Reasoning with Multi-hop Feature Modulation | Code | 0 |
| SEER : A Knapsack approach to Exemplar Selection for In-Context HybridQA | Code | 0 |
| Speech-Based Visual Question Answering | Code | 0 |
| Semantic Search as Extractive Paraphrase Span Detection | Code | 0 |
| What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering | Code | 0 |
| Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0 |
| Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant | Code | 0 |
| SEVEN: Pruning Transformer Model by Reserving Sentinels | Code | 0 |
| What value do explicit high level concepts have in vision to language problems? | Code | 0 |
| AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations | Code | 0 |
| ACCORD: Closing the Commonsense Measurability Gap | Code | 0 |
| Word2Bits - Quantized Word Vectors | Code | 0 |
| ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0 |
| Speak It Out: Solving Symbol-Related Problems with Symbol-to-Language Conversion for Language Models | Code | 0 |
| Seeing the wood for the trees: a contrastive regularization method for the low-resource Knowledge Base Question Answering | Code | 0 |
Page 216 of 217

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | — | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | — | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified |