SOTAVerified

Question Answering

Question answering can be divided into sub-tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

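The EM and F1 numbers reported on leaderboards like the one below are computed per example after normalizing answers (lowercasing, stripping punctuation and articles). The following is a minimal sketch of that scoring logic, not the official SQuAD evaluation script; the function names are illustrative.

```python
import re
import string
from collections import Counter


def normalize_answer(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 only if the normalized strings are identical."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # A partially overlapping span scores 0 on EM but still earns F1 credit.
    print(exact_match("the Eiffel Tower", "Eiffel Tower"))    # 1.0 (article stripped)
    print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

Over a full dataset these per-example scores are averaged; in the official SQuAD setup the maximum over all gold answers for a question is taken first.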

Papers

Showing 4221–4230 of 10817 papers

Title | Status | Hype
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications | - | 0
Distantly Supervised Transformers For E-Commerce Product QA | - | 0
Comparison of Open-Source and Proprietary LLMs for Machine Reading Comprehension: A Practical Analysis for Industrial Applications | - | 0
An Improved Traditional Chinese Evaluation Suite for Foundation Model | - | 0
How You Ask Matters: The Effect of Paraphrastic Questions to BERT Performance on a Clinical SQuAD Dataset | - | 0
Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment | - | 0
ImF: Implicit Fingerprint for Large Language Models | - | 0
DISLOG: A logic-based language for processing discourse structures | - | 0
ADVISE: Symbolism and External Knowledge for Decoding Advertisements | - | 0
Benchmarking Multimodal LLMs on Recognition and Understanding over Chemical Tables | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified