SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics like EM (exact match) and F1. Some recent top-performing models are T5 and XLNet.
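EM and F1 are computed per question by comparing a predicted answer string against the gold answer(s). A minimal sketch of both metrics, following the normalization conventions of the SQuAD evaluation script (lowercasing, stripping punctuation and the articles "a/an/the"); function names here are illustrative, not the official script's API:

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style answer normalization: lowercase, drop punctuation,
    remove articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over
    the bag of normalized tokens shared by prediction and gold."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice each prediction is scored against every gold reference answer and the maximum per-metric score is kept, then averaged over the dataset.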

(Image credit: SQuAD)

Papers

Showing 7476–7500 of 10817 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Structured Distributional Semantic Model: Integrating Structure with Semantics | | 0 |
| Amharic Question Answering for Biography, Definition, and Description Questions | | 0 |
| Post-training an LLM for RAG? Train on Self-Generated Demonstrations | | 0 |
| GRILLBot: A multi-modal conversational agent for complex real-world tasks | | 0 |
| Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | | 0 |
| Power in Numbers: Robust reading comprehension by finetuning with four adversarial sentences per example | | 0 |
| Poze: Sports Technique Feedback under Data Constraints | | 0 |
| Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding | | 0 |
| Extractive Question Answering on Queries in Hindi and Tamil | | 0 |
| ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual Attribution | | 0 |
| PQuAD: A Persian Question Answering Dataset | | 0 |
| Grid Search Hyperparameter Benchmarking of BERT, ALBERT, and LongFormer on DuoRC | | 0 |
| Grid-LOGAT: Grid Based Local and Global Area Transcription for Video Question Answering | | 0 |
| Practical Semantic Parsing for Spoken Language Understanding | | 0 |
| Practice in Synonym Extraction at Large Scale | | 0 |
| Extrinsic Evaluation of Machine Translation Metrics | | 0 |
| PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization | | 0 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | | 0 |
| A Structured Distributional Semantic Model for Event Co-reference | | 0 |
| Precise Length Control in Large Language Models | | 0 |
| Precise Model Benchmarking with Only a Few Observations | | 0 |
| Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models | | 0 |
| Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute | | 0 |
| A dataset of clinically generated visual questions and answers about radiology images | | 0 |
| Proceedings of the Workshop on Human-Computer Question Answering | | 0 |
Page 300 of 433

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |