SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
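The EM and F1 metrics mentioned above can be computed as in the standard SQuAD-style evaluation: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and gold answer. A minimal sketch:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 over the bag of normalized tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark reporting, both scores are averaged over the dataset (taking the maximum over multiple gold answers per question when available).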

(Image credit: SQuAD)

Papers

Showing 10301–10325 of 10817 papers

Title | Status | Hype
Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering | Code | 0
A Technical Question Answering System with Transfer Learning | Code | 0
Do LLMs Implicitly Determine the Suitable Text Difficulty for Users? | Code | 0
MedG-KRP: Medical Graph Knowledge Representation Probing | Code | 0
On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems | Code | 0
Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations | Code | 0
Prosody Modifications for Question-Answering in Voice-Only Settings | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
On the Multilingual Capabilities of Very Large-Scale English Language Models | Code | 0
Answering Naturally: Factoid to Full length Answer Generation | Code | 0
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress? | Code | 0
Protecting multimodal large language models against misleading visualizations | Code | 0
CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering | Code | 0
Medical Large Vision Language Models with Multi-Image Visual Ability | Code | 0
Medical Question Summarization with Entity-driven Contrastive Learning | Code | 0
Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision | Code | 0
On the Robustness of Dialogue History Representation in Conversational Question Answering: A Comprehensive Study and a New Prompt-based Method | Code | 0
A Survey on Recent Advances in Named Entity Recognition from Deep Learning models | Code | 0
On the Robustness of Question Rewriting Systems to Questions of Varying Hardness | Code | 0
A Survey on Deep Learning for Named Entity Recognition | Code | 0
On the Structural Memory of LLM Agents | Code | 0
Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models | Code | 0
On the Summarization of Consumer Health Questions | Code | 0
MediFact at MEDIQA-CORR 2024: Why AI Needs a Human Touch | Code | 0
MediFact at MEDIQA-M3G 2024: Medical Question Answering in Dermatology with Multimodal Learning | Code | 0
Page 413 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified