SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
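The EM and F1 metrics mentioned above follow the SQuAD evaluation convention: prediction and gold answer are normalized (lowercased, punctuation and English articles removed, whitespace collapsed), then EM checks string equality while F1 measures token overlap. A minimal sketch of that convention, not the official evaluation script:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))        # 1
print(round(f1_score("in Paris, France", "Paris"), 2))        # 0.5
```

In benchmark tables, both metrics are usually reported as percentages averaged over the dataset; when a question has multiple gold answers, the maximum score over the gold set is taken.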

(Image credit: SQuAD)

Papers

Showing 10301-10350 of 10817 papers

Syntactically Aware Neural Architectures for Definition Extraction
Syntactic Dependencies and Distributed Word Representations for Analogy Detection and Mining
Syntactic Parsing of Web Queries with Question Intent
Syntactic Semantic Correspondence in Dependency Grammar
Syntax-based Deep Matching of Short Texts
Syntax-informed Question Answering with Heterogeneous Graph Transformer
Syntax Tree Constrained Graph Network for Visual Question Answering
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA
Synthesizing Conversations from Unlabeled Documents using Automatic Response Segmentation
Synthetic Clarification and Correction Dialogues about Data-Centric Tasks -- A Teacher-Student Approach
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering
Synthetic Data Generation for Multilingual Domain-Adaptable Question Answering Systems
Synthetic Data Generation & Multi-Step RL for Reasoning & Tool Use
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Synthetic Function Demonstrations Improve Generation in Low-Resource Programming Languages
Synthetic Multimodal Question Generation
Synthetic Question Value Estimation for Domain Adaptation of Question Answering
Synthetic Target Domain Supervision for Open Retrieval QA
Systematic Assessment of Factual Knowledge in Large Language Models
Systematic Error Analysis of the Stanford Question Answering Dataset
Systems' Agreements and Disagreements in Temporal Processing: An Extensive Error Analysis of the TempEval-3 Task
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts
T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task
TABi: Type-Aware Bi-encoders for End-to-End Entity Retrieval
TableBench: A Comprehensive and Complex Benchmark for Table Question Answering
TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT
TableQAKit: A Comprehensive and Practical Toolkit for Table-based Question Answering
TableQA: Question Answering on Tabular Data
Table-R1: Region-based Reinforcement Learning for Table Understanding
Table Retrieval Does Not Necessitate Table-specific Model Design
Tables as Texts or Images: Evaluating the Table Reasoning Ability of LLMs and MLLMs
Tables as Semi-structured Knowledge for Question Answering
TabMCQ: A Dataset of General Knowledge Tables and Multiple-choice Questions
TabSD: Large Free-Form Table Question Answering with SQL-Based Table Decomposition
Tabular-TX: Theme-Explanation Structure-based Table Summarization via In-Context Learning
Tackling Adversarial Examples in QA via Answer Sentence Selection
Tackling Biomedical Text Summarization: OAQA at BioASQ 5B
Tackling Code-Switched NER: Participation of CMU
Tackling VQA with Pretrained Foundation Models without Further Training
TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning
TACR: A Table-alignment-based Cell-selection and Reasoning Model for Hybrid Question-Answering
Take A Step Back: Rethinking the Two Stages in Visual Reasoning
TakeLab-QA at SemEval-2017 Task 3: Classification Experiments for Answer Retrieval in Community QA
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded
Taking Notes Brings Focus? Towards Multi-Turn Multimodal Dialogue Learning
TALE: A Tool-Augmented Framework for Reference-Free Evaluation of Large Language Models
Talking to GDELT Through Knowledge Graphs
Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation
Talk to Papers: Bringing Neural Question Answering to Academic Search
Page 207 of 217

Benchmark Results

#  | Model                                           | Metric | Claimed | Verified | Status
1  | IE-Net (ensemble)                               | EM     | 90.94   |          | Unverified
2  | FPNet (ensemble)                                | EM     | 90.87   |          | Unverified
3  | IE-NetV2 (ensemble)                             | EM     | 90.86   |          | Unverified
4  | SA-Net on Albert (ensemble)                     | EM     | 90.72   |          | Unverified
5  | SA-Net-V2 (ensemble)                            | EM     | 90.68   |          | Unverified
6  | FPNet (ensemble)                                | EM     | 90.6    |          | Unverified
7  | Retro-Reader (ensemble)                         | EM     | 90.58   |          | Unverified
8  | EntitySpanFocusV2 (ensemble)                    | EM     | 90.52   |          | Unverified
9  | TransNets + SFVerifier + SFEnsembler (ensemble) | EM     | 90.49   |          | Unverified
10 | EntitySpanFocus+AT (ensemble)                   | EM     | 90.45   |          | Unverified