SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated with exact match (EM) and F1 scores. Recent top-performing models include T5 and XLNet.
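As a sketch of how EM and F1 are computed, the snippet below follows the SQuAD-style convention (lowercasing, stripping punctuation and articles, then comparing token overlap); the helper names are illustrative, not from any particular evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style answer normalization: lowercase, strip punctuation,
    drop the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))        # 1.0
print(f1_score("in the city of Paris", "Paris"))              # 0.4
```

On benchmarks with multiple reference answers, both metrics are usually taken as the maximum over the references, then averaged across questions.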

(Image credit: SQuAD)

Papers

Showing 851-875 of 10817 papers

Title | Status | Hype
MQADet: A Plug-and-Play Paradigm for Enhancing Open-Vocabulary Object Detection via Multimodal Question Answering | - | 0
Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images | - | 0
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | - | 0
EPERM: An Evidence Path Enhanced Reasoning Model for Knowledge Graph Question and Answering | - | 0
Echo: A Large Language Model with Temporal Episodic Memory | - | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
Empowering LLMs with Logical Reasoning: A Comprehensive Survey | - | 0
MHQA: A Diverse, Knowledge Intensive Mental Health Question Answering Challenge for Language Models | - | 0
TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba | - | 0
Chats-Grid: An Iterative Retrieval Q&A Optimization Scheme Leveraging Large Model and Retrieval Enhancement Generation in smart grid | - | 0
Improving Consistency in Large Language Models through Chain of Guidance | Code | 0
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? | - | 0
KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse | Code | 1
Mind the Gap! Static and Interactive Evaluations of Large Audio Models | - | 0
Directional Gradient Projection for Robust Fine-Tuning of Foundation Models | - | 0
Is Relevance Propagated from Retriever to Generator in RAG? | - | 0
Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework | Code | 0
Measuring Faithfulness of Chains of Thought by Unlearning Reasoning Steps | Code | 1
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | Code | 2
How to Get Your LLM to Generate Challenging Problems for Evaluation | Code | 1
On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems | Code | 0
Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | - | 0
EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts | - | 0
NLP-AKG: Few-Shot Construction of NLP Academic Knowledge Graph Based on LLM | - | 0
Argument-Based Comparative Question Answering Evaluation Benchmark | - | 0
Page 35 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified