SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated with exact match (EM) and F1 metrics; recent top-performing models include T5 and XLNet.
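The EM and F1 metrics mentioned above can be sketched as follows. This is a minimal illustration loosely following SQuAD-style answer normalization (lowercasing, stripping punctuation, articles, and extra whitespace); the function names are illustrative, not part of any official evaluation script.

```python
import string


def normalize(text: str) -> str:
    """Normalize an answer string: lowercase, drop punctuation,
    remove the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    tokens = [t for t in text.split() if t not in ("a", "an", "the")]
    return " ".join(tokens)


def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over
    the multiset of normalized tokens shared by both answers."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Count overlapping tokens, respecting multiplicity.
    ref_counts: dict[str, int] = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower!")` returns 1.0 because normalization removes the article and punctuation, while `f1_score("the tower in Paris", "Eiffel Tower")` returns 0.4 (one shared token, precision 1/3, recall 1/2).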

(Image credit: SQuAD)

Papers

Showing 4251–4275 of 10,817 papers

Titles (code availability noted where listed):

- Wav2Prompt: End-to-End Speech Prompt Generation and Tuning For LLM in Zero and Few-shot Learning
- Are Large Vision Language Models up to the Challenge of Chart Comprehension and Reasoning? An Extensive Investigation into the Capabilities and Limitations of LVLMs
- SPAGHETTI: Open-Domain Question Answering from Heterogeneous Data Sources with Retrieval and Semantic Parsing
- Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models (code available)
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation
- Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training
- Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models (code available)
- Video Question Answering for People with Visual Impairments Using an Egocentric 360-Degree Camera
- Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals
- VQA Training Sets are Self-play Environments for Generating Few-shot Pools
- Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks
- MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification
- PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering
- A Multi-Source Retrieval Question Answering Framework Based on RAG
- MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection
- Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study
- Bridging the Gap: Dynamic Learning Strategies for Improving Multilingual Performance in LLMs
- RealitySummary: Exploring On-Demand Mixed Reality Text Summarization and Question Answering using Large Language Models
- Data-augmented phrase-level alignment for mitigating object hallucination
- ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator (code available)
- Conv-CoA: Improving Open-domain Question Answering in Large Language Models via Conversational Chain-of-Action
- Peering into the Mind of Language Models: An Approach for Attribution in Contextual Question Answering (code available)
- Aligning LLMs through Multi-perspective User Preference Ranking-based Feedback for Programming Question Answering
- Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
- Do Vision-Language Transformers Exhibit Visual Commonsense? An Empirical Study of VCR
Page 171 of 433

Benchmark Results

| #  | Model                                          | Metric | Claimed | Verified | Status     |
|----|------------------------------------------------|--------|---------|----------|------------|
| 1  | IE-Net (ensemble)                              | EM     | 90.94   |          | Unverified |
| 2  | FPNet (ensemble)                               | EM     | 90.87   |          | Unverified |
| 3  | IE-NetV2 (ensemble)                            | EM     | 90.86   |          | Unverified |
| 4  | SA-Net on Albert (ensemble)                    | EM     | 90.72   |          | Unverified |
| 5  | SA-Net-V2 (ensemble)                           | EM     | 90.68   |          | Unverified |
| 6  | FPNet (ensemble)                               | EM     | 90.6    |          | Unverified |
| 7  | Retro-Reader (ensemble)                        | EM     | 90.58   |          | Unverified |
| 8  | EntitySpanFocusV2 (ensemble)                   | EM     | 90.52   |          | Unverified |
| 9  | TransNets + SFVerifier + SFEnsembler (ensemble)| EM     | 90.49   |          | Unverified |
| 10 | EntitySpanFocus+AT (ensemble)                  | EM     | 90.45   |          | Unverified |