SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated with exact match (EM) and F1 metrics. Recent top-performing models include T5 and XLNet.
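As a reference for the EM and F1 metrics mentioned above, here is a minimal sketch of SQuAD-style answer scoring. It assumes the standard normalization convention (lowercasing, removing punctuation and English articles, collapsing whitespace); the function names are illustrative, not part of any official evaluation script.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark reporting, both scores are usually averaged over the dataset (taking the maximum over multiple gold answers per question) and expressed as percentages, which is how the 90+ EM figures in the results table below arise.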

(Image credit: SQuAD)

Papers

Showing 1801–1850 of 10817 papers

Title | Status | Hype
MedViLaM: A multimodal large language model with advanced generalizability and explainability for medical data understanding and generation | Code | 0
See then Tell: Enhancing Key Information Extraction with Vision Grounding | - | 0
CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering | Code | 1
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition | Code | 1
HealthQ: Unveiling Questioning Capabilities of LLM Chains in Healthcare Conversations | - | 0
Zero-Shot Multi-Hop Question Answering via Monte-Carlo Tree Search with Large Language Models | - | 0
TrojVLM: Backdoor Attack Against Vision Language Models | - | 0
3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models | - | 0
Revisiting the Superficial Alignment Hypothesis | - | 0
Rehearsing Answers to Probable Questions with Perspective-Taking | - | 0
AIPatient: Simulating Patients with EHRs and LLM Powered Agentic Workflow | - | 0
Exploring Language Model Generalization in Low-Resource Extractive QA | Code | 0
Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations | - | 0
Enhancing Explainability in Multimodal Large Language Models Using Ontological Context | - | 0
DisGeM: Distractor Generation for Multiple Choice Questions with Span Masking | Code | 0
Efficient In-Domain Question Answering for Resource-Constrained Environments | - | 0
Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation | - | 0
Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization | - | 0
Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience | - | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | - | 0
T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task | - | 0
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding | Code | 2
DARE: Diverse Visual Question Answering with Robustness Evaluation | - | 0
Enhancing Post-Hoc Attributions in Long Document Comprehension via Coarse Grained Answer Decomposition | - | 0
SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA | Code | 0
Detecting Temporal Ambiguity in Questions | Code | 0
Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering | - | 0
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering | Code | 0
Konstruktor: A Strong Baseline for Simple Knowledge Graph Question Answering | Code | 0
Lighter And Better: Towards Flexible Context Adaptation For Retrieval Augmented Generation | - | 0
Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation | - | 0
A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Code | 0
Exploring Hint Generation Approaches in Open-Domain Question Answering | Code | 1
From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing | - | 0
A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding | Code | 0
60 Data Points are Sufficient to Fine-Tune LLMs for Question-Answering | - | 0
AsthmaBot: Multi-modal, Multi-Lingual Retrieval Augmented Generation For Asthma Patient Support | - | 0
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA | - | 0
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation | - | 0
GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation | - | 0
Boosting Healthcare LLMs Through Retrieved Context | Code | 1
Using Similarity to Evaluate Factual Consistency in Summaries | - | 0
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models | Code | 0
A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? | - | 0
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | - | 0
LINKAGE: Listwise Ranking among Varied-Quality References for Non-Factoid QA Evaluation via LLMs | - | 0
Scene-Text Grounding for Text-Based Video Question Answering | Code | 1
Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions | - | 0
Page 37 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified