SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
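The EM and F1 metrics mentioned above can be sketched in a few lines. This is a minimal illustration following SQuAD-style answer normalization (lowercasing, stripping punctuation and articles, collapsing whitespace); the function names are illustrative, not taken from any particular library.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction, gold):
    """Token-level F1 over the normalized answers."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))      # 1.0
print(round(f1_score("in the city of Paris", "Paris"), 2))  # 0.4
```

On a full dataset, both scores are averaged over all question–answer pairs, and when multiple gold answers exist the maximum score over them is usually taken.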

(Image credit: SQuAD)

Papers

Showing 4251–4300 of 10817 papers

Title | Status | Hype
SPAGHETTI: Open-Domain Question Answering from Heterogeneous Data Sources with Retrieval and Semantic Parsing | - | 0
Are Large Vision Language Models up to the Challenge of Chart Comprehension and Reasoning? An Extensive Investigation into the Capabilities and Limitations of LVLMs | - | 0
Wav2Prompt: End-to-End Speech Prompt Generation and Tuning For LLM in Zero and Few-shot Learning | - | 0
Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training | - | 0
Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models | Code | 0
Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models | Code | 0
Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation | - | 0
Video Question Answering for People with Visual Impairments Using an Egocentric 360-Degree Camera | - | 0
VQA Training Sets are Self-play Environments for Generating Few-shot Pools | - | 0
Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals | - | 0
Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks | - | 0
MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection | - | 0
PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering | - | 0
Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | - | 0
A Multi-Source Retrieval Question Answering Framework Based on RAG | - | 0
MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification | - | 0
Peering into the Mind of Language Models: An Approach for Attribution in Contextual Question Answering | Code | 0
Bridging the Gap: Dynamic Learning Strategies for Improving Multilingual Performance in LLMs | - | 0
RealitySummary: Exploring On-Demand Mixed Reality Text Summarization and Question Answering using Large Language Models | - | 0
Data-augmented phrase-level alignment for mitigating object hallucination | - | 0
ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator | Code | 0
Conv-CoA: Improving Open-domain Question Answering in Large Language Models via Conversational Chain-of-Action | - | 0
Aligning LLMs through Multi-perspective User Preference Ranking-based Feedback for Programming Question Answering | - | 0
Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words? | - | 0
Do Vision-Language Transformers Exhibit Visual Commonsense? An Empirical Study of VCR | - | 0
Cost-efficient Knowledge-based Question Answering with Large Language Models | - | 0
On Bits and Bandits: Quantifying the Regret-Information Trade-off | Code | 0
Accurate and Nuanced Open-QA Evaluation Through Textual Entailment | Code | 0
iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers | Code | 0
Streaming Long Video Understanding with Large Language Models | - | 0
Comparative Analysis of Open-Source Language Models in Summarizing Medical Text Data | - | 0
Generating clickbait spoilers with an ensemble of large language models | - | 0
Incremental Comprehension of Garden-Path Sentences by Large Language Models: Semantic Interpretation, Syntactic Re-Analysis, and Attention | - | 0
Leveraging Logical Rules in Knowledge Editing: A Cherry on the Top | - | 0
Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges | Code | 0
Prompt-Aware Adapter: Towards Learning Adaptive Visual Tokens for Multimodal Large Language Models | - | 0
OptLLM: Optimal Assignment of Queries to Large Language Models | Code | 0
Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering | - | 0
AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings | - | 0
Efficient Medical Question Answering with Knowledge-Augmented Question Generation | Code | 0
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models | - | 0
Large Language Models Can Self-Correct with Key Condition Verification | - | 0
SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge | - | 0
FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering | - | 0
CrossCheckGPT: Universal Hallucination Ranking for Multimodal Foundation Models | - | 0
MentalQA: An Annotated Arabic Corpus for Questions and Answers of Mental Healthcare | - | 0
Efficient and Interpretable Information Retrieval for Product Question Answering with Heterogeneous Data | Code | 0
Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs | - | 0
Backpropagation-Free Multi-modal On-Device Model Adaptation via Cloud-Device Collaboration | - | 0
Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering | Code | 0
Page 86 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified