SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated with metrics such as Exact Match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
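Since EM and F1 are the headline metrics on this page, below is a minimal sketch of how they are commonly computed for extractive QA, following SQuAD-style answer normalization. This is an illustrative reimplementation for clarity, not the official evaluation script.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized prediction equals the normalized gold answer, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: EM requires an exact (normalized) match, while F1 rewards partial overlap.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after article removal
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

In practice, datasets such as SQuAD provide multiple gold answers per question; the usual convention is to take the maximum EM and F1 over the gold answers for each question and then average across the dataset.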

Papers

Showing 4076–4100 of 10817 papers

| Title | Status | Hype |
|-------|--------|------|
| The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0 |
| Changing Answer Order Can Decrease MMLU Accuracy | | 0 |
| TrustUQA: A Trustful Framework for Unified Structured Data Question Answering | Code | 0 |
| Length Optimization in Conformal Prediction | Code | 0 |
| Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0 |
| Handling Ontology Gaps in Semantic Parsing | Code | 0 |
| Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | | 0 |
| Context Matters: An Empirical Study of the Impact of Contextual Information in Temporal Question Answering Systems | | 0 |
| Explicit Diversity Conditions for Effective Question Answer Generation with Large Language Models | | 0 |
| Geode: A Zero-shot Geospatial Question-Answering Agent with Explicit Reasoning and Precise Spatio-Temporal Retrieval | | 0 |
| Sanskrit Knowledge-based Systems: Annotation and Computational Tools | | 0 |
| Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0 |
| Entropy-Based Decoding for Retrieval-Augmented Large Language Models | Code | 0 |
| CaLMQA: Exploring culturally specific long-form question answering across 23 languages | Code | 0 |
| Advancing Question Answering on Handwritten Documents: A State-of-the-Art Recognition-Based Model for HW-SQuAD | | 0 |
| Zero-Shot Long-Form Video Understanding through Screenplay | | 0 |
| Claude 3.5 Sonnet Model Card Addendum | | 0 |
| UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding | Code | 0 |
| Is your benchmark truly adversarial? AdvScore: Evaluating Human-Grounded Adversarialness | | 0 |
| GPT-4V Explorations: Mining Autonomous Driving | | 0 |
| Directed Domain Fine-Tuning: Tailoring Separate Modalities for Specific Training Tasks | | 0 |
| Modulating Language Model Experiences through Frictions | | 0 |
| MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs | | 0 |
| Attention Instruction: Amplifying Attention in the Middle via Prompting | Code | 0 |
| Training-Free Exponential Context Extension via Cascading KV Cache | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |