SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
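As a brief illustration of the EM and F1 metrics mentioned above, here is a minimal Python sketch of SQuAD-style answer scoring. The normalization rules (lowercasing, stripping punctuation and English articles) follow the common convention of the official SQuAD evaluation script, but exact details vary across benchmarks, so treat this as an approximation rather than a drop-in replacement for any leaderboard's scorer.

```python
import re
import string
from collections import Counter


def normalize_answer(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, reference: str) -> float:
    """EM is 1.0 when the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(reference))


def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between the normalized prediction and reference."""
    pred_tokens = normalize_answer(prediction).split()
    ref_tokens = normalize_answer(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


# Scores are usually averaged over the whole dataset, taking the maximum over
# all reference answers for each question.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))              # 1.0 after normalization
print(round(f1_score("Eiffel Tower in Paris", "Eiffel Tower"), 2))  # 0.67
```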

Papers

Showing 3101–3150 of 10817 papers

Title | Status | Hype
VISREAS: Complex Visual Reasoning with Unanswerable Questions | - | 0
Evaluating the Performance of ChatGPT for Spam Email Detection | - | 0
ArabianGPT: Native Arabic GPT-based Large Language Model | - | 0
Multimodal Transformer With a Low-Computational-Cost Guarantee | - | 0
Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models | Code | 1
Biomedical Entity Linking as Multiple Choice Question Answering | Code | 0
Cost-Adaptive Recourse Recommendation by Adaptive Preference Elicitation | - | 0
Faithful Temporal Question Answering over Heterogeneous Sources | - | 0
SIMPLOT: Enhancing Chart Question Answering by Distilling Essentials | Code | 1
Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education | Code | 1
CommVQA: Situating Visual Question Answering in Communicative Contexts | Code | 0
Visual Hallucinations of Multi-modal Large Language Models | Code | 1
Do LLMs Implicitly Determine the Suitable Text Difficulty for Users? | Code | 0
Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer | - | 0
Uncertainty-Aware Evaluation for Vision-Language Models | Code | 1
Data Science with LLMs and Interpretable Models | Code | 2
Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond | - | 0
Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering | Code | 1
ActiveRAG: Autonomously Knowledge Assimilation and Accommodation through Retrieval-Augmented Agents | Code | 2
Learning to Poison Large Language Models for Downstream Manipulation | Code | 1
FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models | Code | 2
Retrieval Helps or Hurts? A Deeper Dive into the Efficacy of Retrieval Augmentation to Language Models | Code | 0
RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models | Code | 0
Towards Building Multilingual Language Model for Medicine | Code | 3
PQA: Zero-shot Protein Question Answering for Free-form Scientific Enquiry with Large Language Models | Code | 0
Self-DC: When to Reason and When to Act? Self Divide-and-Conquer for Compositional Unknown Questions | - | 0
LLMs Meet Long Video: Advancing Long Video Question Answering with An Interactive Visual Adapter in LLMs | - | 0
Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment | Code | 1
Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions | - | 0
DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain | Code | 1
Question Calibration and Multi-Hop Modeling for Temporal Question Answering | - | 0
BiMediX: Bilingual Medical Mixture of Experts LLM | Code | 1
Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data | - | 0
Slot-VLM: SlowFast Slots for Video-Language Modeling | - | 0
FormulaReasoning: A Dataset for Formula-Based Numerical Reasoning | Code | 0
FinBen: A Holistic Financial Benchmark for Large Language Models | Code | 4
Modality-Aware Integration with Large Language Models for Knowledge-based Visual Question Answering | - | 0
VideoPrism: A Foundational Visual Encoder for Video Understanding | - | 0
Benchmarking Retrieval-Augmented Generation for Medicine | Code | 4
RJUA-MedDQA: A Multimodal Benchmark for Medical Document Question Answering and Clinical Reasoning | - | 0
Training Table Question Answering via SQL Query Decomposition | - | 0
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? | Code | 0
TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness | Code | 0
Tables as Texts or Images: Evaluating the Table Reasoning Ability of LLMs and MLLMs | - | 0
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs | Code | 2
Cofca: A Step-Wise Counterfactual Multi-hop QA benchmark | - | 0
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models | - | 0
BIDER: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented LLMs via Key Supporting Evidence | - | 0
Graph-Based Retriever Captures the Long Tail of Biomedical Knowledge | - | 0
MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs | Code | 1
Page 63 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified