SOTAVerified

Question Answering

Question answering spans domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models are typically evaluated with metrics such as Exact Match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
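
For reference, here is a minimal sketch of how SQuAD-style EM and token-level F1 are typically computed. The function names and the simplified single-gold-answer setup are illustrative; the official SQuAD evaluation script additionally takes the maximum score over multiple gold answers.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lower-case, strip punctuation,
    drop articles (a/an/the), and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    """EM is 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1(prediction: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall
    over tokens shared between prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct span scores 0 on EM but gets partial credit on F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1 (articles are stripped)
print(f1("Eiffel Tower in Paris", "Eiffel Tower"))      # ~0.67
```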

Papers

Showing 4701–4750 of 10817 papers

Title | Status | Hype
Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering | - | 0
Prompt-based Personalized Federated Learning for Medical Visual Question Answering | - | 0
LAPDoc: Layout-Aware Prompting for Documents | - | 0
A Dataset of Open-Domain Question Answering with Multiple-Span Answers | - | 0
Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays | Code | 0
Reasoning over Uncertain Text by Generative Large Language Models | Code | 0
Multi-Query Focused Disaster Summarization via Instruction-Based Prompting | - | 0
Learning How To Ask: Cycle-Consistency Refines Prompts in Multimodal Foundation Models | - | 0
Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering | Code | 0
Visual Question Answering Instruction: Unlocking Multimodal Large Language Model To Domain-Specific Visual Multitasks | - | 0
Plausible Extractive Rationalization through Semi-Supervised Entailment Signal | Code | 0
Visually Dehallucinative Instruction Generation | Code | 0
T-RAG: Lessons from the LLM Trenches | - | 0
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs | - | 0
Lumos : Empowering Multimodal LLMs with Scene Text Recognition | - | 0
BDIQA: A New Dataset for Video Question Answering to Explore Cognitive Reasoning through Theory of Mind | - | 0
Synthesizing Sentiment-Controlled Feedback For Multimodal Text and Image Data | Code | 0
Retrieval Augmented Thought Process for Private Data Handling in Healthcare | - | 0
CPSDBench: A Large Language Model Evaluation Benchmark and Baseline for Chinese Public Security Domain | - | 0
FaBERT: Pre-training BERT on Persian Blogs | - | 0
EntGPT: Linking Generative Large Language Models with Knowledge Bases | Code | 0
The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate | - | 0
CIC: A Framework for Culturally-Aware Image Captioning | - | 0
FAQ-Gen: An automated system to generate domain-specific FAQs to aid content comprehension | - | 0
In-Context Principle Learning from Mistakes | Code | 0
Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images | Code | 0
Efficient Models for the Detection of Hate, Abuse and Profanity | - | 0
SubGen: Token Generation in Sublinear Time and Memory | - | 0
NORMY: Non-Uniform History Modeling for Open Retrieval Conversational Question Answering | Code | 0
VerAs: Verify then Assess STEM Lab Reports | Code | 0
Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification | - | 0
Empowering Language Models with Active Inquiry for Deeper Understanding | - | 0
SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark | - | 0
Convincing Rationales for Visual Question Answering Reasoning | Code | 0
Enhancing textual textbook question answering with large language models and retrieval augmented generation | Code | 0
LB-KBQA: Large-language-model and BERT based Knowledge-Based Question and Answering System | - | 0
A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications | - | 0
eXplainable Bayesian Multi-Perspective Generative Retrieval | - | 0
Large Language Model for Table Processing: A Survey | - | 0
Knowledge Generation for Zero-shot Knowledge-based VQA | Code | 0
PuzzleBench: Can LLMs Solve Challenging First-Order Combinatorial Reasoning Problems? | - | 0
SemPool: Simple, robust, and interpretable KG pooling for enhancing language models | - | 0
Efficient Prompt Caching via Embedding Similarity | - | 0
LLMs May Perform MCQA by Selecting the Least Incorrect Option | - | 0
BAT: Learning to Reason about Spatial Sounds with Large Language Models | - | 0
SPARQL Generation with Entity Pre-trained GPT for KG Question Answering | Code | 0
An Exam-based Evaluation Approach Beyond Traditional Relevance Judgments | - | 0
A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains | - | 0
Instruction Makes a Difference | Code | 0
HiQA: A Hierarchical Contextual Augmentation RAG for Multi-Documents QA | - | 0
Page 95 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified