SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1; a sketch of both metrics appears below. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
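Since EM and F1 recur throughout the benchmark results below, here is a minimal sketch of SQuAD-style answer scoring, assuming the normalization used by the official SQuAD evaluation script (lower-casing, punctuation and article removal, whitespace collapsing). The function names are illustrative, not from any particular library, and the official script additionally takes the maximum score over all reference answers for a question.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lower-case, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # per-token overlap counts
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap scores 0 on EM but earns partial F1 credit.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 (article "the" is stripped)
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67 (precision 0.5, recall 1.0)
```

Reported leaderboard numbers such as the EM scores in the table below are these per-question scores averaged over the evaluation set and expressed as percentages.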

Papers

Showing 801–850 of 10,817 papers

Title | Status | Hype
Learning Trimodal Relation for AVQA with Missing Modality | Code | 1
Enhancing LLM's Cognition via Structurization | Code | 1
HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1
Evaluating language models as risk scores | Code | 1
Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark | Code | 1
TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish | Code | 1
Video-Language Alignment via Spatio-Temporal Graph Transformer | Code | 1
MixGR: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity | Code | 1
Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education | Code | 1
Lost and Found: Overcoming Detector Failures in Online Multi-Object Tracking | Code | 1
IoT-LM: Large Multisensory Language Models for the Internet of Things | Code | 1
CompAct: Compressing Retrieved Documents Actively for Question Answering | Code | 1
Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing | Code | 1
AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models | Code | 1
IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model | Code | 1
3D Vision and Language Pretraining with Large-Scale Synthetic Data | Code | 1
Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs | Code | 1
Referring Atomic Video Action Recognition | Code | 1
LogEval: A Comprehensive Benchmark Suite for Large Language Models In Log Analysis | Code | 1
Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | Code | 1
Eliminating Position Bias of Language Models: A Mechanistic Approach | Code | 1
CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1
PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph | Code | 1
H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables | Code | 1
STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering | Code | 1
The SIFo Benchmark: Investigating the Sequential Instruction Following Ability of Large Language Models | Code | 1
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment | Code | 1
SeaKR: Self-aware Knowledge Retrieval for Adaptive Retrieval Augmented Generation | Code | 1
Knowledge graph enhanced retrieval-augmented generation for failure mode and effects analysis | Code | 1
CogMG: Collaborative Augmentation Between Large Language Model and Knowledge Graph | Code | 1
DEXTER: A Benchmark for open-domain Complex Question Answering using LLMs | Code | 1
LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing | Code | 1
HCQA @ Ego4D EgoSchema Challenge 2024 | Code | 1
UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis | Code | 1
Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs | Code | 1
Timo: Towards Better Temporal Reasoning for Language Models | Code | 1
SuperGLEBer: German Language Understanding Evaluation Benchmark | Code | 1
LLaSA: A Multimodal LLM for Human Activity Analysis Through Wearable and Smartphone Sensors | Code | 1
AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding | Code | 1
MoreHopQA: More Than Multi-hop Reasoning | Code | 1
DialSim: A Real-Time Simulator for Evaluating Long-Term Multi-Party Dialogue Understanding of Conversational Agents | Code | 1
Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation | Code | 1
LIVE: Learnable In-Context Vector for Visual Question Answering | Code | 1
Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators | Code | 1
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1
TRACE the Evidence: Constructing Knowledge-Grounded Reasoning Chains for Retrieval-Augmented Generation | Code | 1
Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations | Code | 1
MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1
MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model | Code | 1
Soft Prompting for Unlearning in Large Language Models | Code | 1
Page 17 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified