SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
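The EM and F1 metrics mentioned above can be sketched in a few lines. The following is a minimal reimplementation of SQuAD-style scoring (lowercasing, stripping punctuation and articles, then exact string match for EM and token-overlap F1); the function names are illustrative, not from any official evaluation script:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between the normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice, benchmarks like SQuAD provide several reference answers per question and take the maximum score over them; the leaderboard EM numbers below are averages of such per-question scores.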

(Image credit: SQuAD)

Papers

Showing 4951–5000 of 10817 papers

| Title | Status | Hype |
| --- | --- | --- |
| How Generative-AI can be Effectively used in Government Chatbots | | 0 |
| E-ViLM: Efficient Video-Language Model via Masked Video Modeling with Semantic Vector-Quantized Tokenizer | | 0 |
| The curse of language biases in remote sensing VQA: the role of spatial attributes, language diversity, and the need for clear evaluation | | 0 |
| Fully Authentic Visual Question Answering Dataset from Online Communities | Code | 0 |
| Characterizing Video Question Answering with Sparsified Inputs | | 0 |
| Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models | | 0 |
| A Comparative and Experimental Study on Automatic Question Answering Systems and its Robustness against Word Jumbling | | 0 |
| Optimizing and Fine-tuning Large Language Model for Urban Renewal | | 0 |
| Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges | | 0 |
| Uncertainty-aware Language Modeling for Selective Question Answering | | 0 |
| Local Convergence of Approximate Newton Method for Two Layer Nonlinear Regression | | 0 |
| See and Think: Embodied Agent in Virtual Environment | | 0 |
| Walking a Tightrope -- Evaluating Large Language Models in High-Risk Domains | | 0 |
| GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation | | 0 |
| Question Answering in Natural Language: the Special Case of Temporal Expressions | | 0 |
| Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models | | 0 |
| Vamos: Versatile Action Models for Video Understanding | Code | 0 |
| Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering | | 0 |
| AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations | Code | 0 |
| AcademicGPT: Empowering Academic Research | | 0 |
| Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation? | | 0 |
| ATLANTIC: Structure-Aware Retrieval-Augmented Language Model for Interdisciplinary Science | | 0 |
| Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0 |
| Unifying Corroborative and Contributive Attributions in Large Language Models | | 0 |
| Towards Robust Text Retrieval with Progressive Learning | Code | 0 |
| Zero-Shot Question Answering over Financial Documents using Large Language Models | | 0 |
| LLM aided semi-supervision for Extractive Dialog Summarization | | 0 |
| Journey of Hallucination-minimized Generative AI Solutions for Financial Decision Makers | | 0 |
| Orca 2: Teaching Small Language Models How to Reason | | 0 |
| PEFT-MedAware: Large Language Model for Medical Awareness | | 0 |
| Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs | | 0 |
| StorySparkQA: Expert-Annotated QA Pairs with Real-World Knowledge for Children's Story-Based Learning | Code | 0 |
| Downstream Trade-offs of a Family of Text Watermarks | Code | 0 |
| Investigating Data Contamination in Modern Benchmarks for Large Language Models | | 0 |
| Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning | Code | 0 |
| Leveraging LLMs in Scholarly Knowledge Graph Question Answering | Code | 0 |
| Online Continual Knowledge Learning for Language Models | | 0 |
| What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception | Code | 0 |
| SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU | Code | 0 |
| Crafting In-context Examples according to LMs' Parametric Knowledge | Code | 0 |
| On Evaluating the Integration of Reasoning and Action in LLM Agents with Database Question Answering | | 0 |
| Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering | | 0 |
| You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments | Code | 0 |
| Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models | | 0 |
| On the Calibration of Multilingual Question Answering LLMs | | 0 |
| Towards Verifiable Text Generation with Symbolic References | | 0 |
| X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects | | 0 |
| LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback | | 0 |
| Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach | | 0 |
| Transformers in the Service of Description Logic-based Contexts | Code | 0 |
Page 100 of 217

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |