
Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
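As a concrete illustration of the two metrics, below is a minimal Python sketch of SQuAD-style EM and F1, simplified from the official evaluation script (which additionally takes the maximum score over multiple reference answers per question):

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the normalized prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# A partially correct span earns partial F1 credit but no EM credit.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 (articles are stripped)
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

In practice, both metrics are averaged over all questions in the evaluation set; F1 rewards overlapping but inexact answer spans, which is why leaderboard F1 scores typically sit a few points above EM.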

Papers

Showing 1-10 of 10,817 papers

| Title | Status | Hype |
| --- | --- | --- |
| Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering | | 0 |
| Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It | | 0 |
| City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning | | 0 |
| From Roots to Rewards: Dynamic Tree Reasoning with RL | Code | 0 |
| Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility | | 0 |
| Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1 |
| Warehouse Spatial Question Answering with LLM Agent | Code | 1 |
| Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights | | 0 |
| MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning | | 0 |
| LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LUKE 483M | F1 | 95.4 | | Unverified |
| 2 | BART (TextBox 2.0) | F1 | 93.04 | | Unverified |
| 3 | BERT-LARGE (Single+TriviaQA) | F1 | 91.8 | | Unverified |
| 4 | BERT-Large 32k batch size with AdamW | F1 | 91.58 | | Unverified |
| 5 | DyREX | F1 | 91.01 | | Unverified |
| 6 | {ANNA} (single model) | EM | 90.62 | | Unverified |
| 7 | LUKE (single model) | EM | 90.2 | | Unverified |
| 8 | LUKE | EM | 90.2 | | Unverified |
| 9 | XLNet (single model) | EM | 89.9 | | Unverified |
| 10 | XLNET-123++ (single model) | EM | 89.86 | | Unverified |