SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated on metrics such as Exact Match (EM) and F1. Some recent top-performing models are T5 and XLNet.
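The EM and F1 metrics mentioned above can be sketched as follows — a minimal Python version of the SQuAD-style convention, where answers are normalized (lowercased, punctuation and articles stripped) before comparing: EM checks for an exact string match, while F1 measures token overlap between prediction and reference.

```python
import re
import string
from collections import Counter


def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation,
    remove articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction, reference):
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))


def f1_score(prediction, reference):
    """Token-level F1: harmonic mean of precision and recall
    over the multiset of shared tokens."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

On a benchmark, both scores are averaged over all question-answer pairs (taking the maximum over references when a question has several gold answers), which yields the percentage-style EM and F1 numbers reported on leaderboards.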

(Image credit: SQuAD)

Papers

Showing papers 776-800 of 10817

Title | Status | Hype
Task-Agnostic Attacks Against Vision Foundation Models | Code | 0
FANS -- Formal Answer Selection for Natural Language Math Reasoning Using Lean4 | - | 0
Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations | Code | 0
Structured Outputs Enable General-Purpose LLMs to be Medical Experts | - | 0
Towards Understanding Multi-Round Large Language Model Reasoning: Approximability, Learnability and Generalizability | - | 0
AttackSeqBench: Benchmarking Large Language Models' Understanding of Sequential Patterns in Cyber Attacks | Code | 0
DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering | Code | 1
Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks | Code | 0
OWLViz: An Open-World Benchmark for Visual Question Answering | - | 0
Towards Robust Expert Finding in Community Question Answering Platforms | Code | 0
Zero-Shot Complex Question-Answering on Long Scientific Documents | Code | 0
EchoQA: A Large Collection of Instruction Tuning Data for Echocardiogram Reports | - | 0
BioD2C: A Dual-level Semantic Consistency Constraint Framework for Biomedical VQA | Code | 0
Optimizing open-domain question answering with graph-based retrieval augmented generation | - | 0
SAGE: A Framework of Precise Retrieval for RAG | - | 0
Beyond Prompting: An Efficient Embedding Framework for Open-Domain Question Answering | - | 0
Q-NL Verifier: Leveraging Synthetic Data for Robust Knowledge Graph Question Answering | Code | 0
Causal Tree Extraction from Medical Case Reports: A Novel Task for Experts-like Text Comprehension | - | 0
When an LLM is apprehensive about its answers -- and when its uncertainty is justified | Code | 0
Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models | Code | 0
Parameter-free Video Segmentation for Vision and Language Understanding | - | 0
Generate, Discriminate, Evolve: Enhancing Context Faithfulness via Fine-Grained Sentence-Level Self-Evolution | - | 0
SRAG: Structured Retrieval-Augmented Generation for Multi-Entity Question Answering over Wikipedia Graph | - | 0
ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources | - | 0
Optimizing Multi-Hop Document Retrieval Through Intermediate Representations | - | 0
Page 32 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified