SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
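The EM and F1 metrics mentioned above can be made concrete with a small sketch, modeled on the SQuAD-style evaluation procedure (lowercase, strip punctuation and articles, then compare normalized strings for EM and overlapping tokens for F1); the function names here are illustrative, not part of any official script.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and the articles a/an/the, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, ground_truth: str) -> bool:
    """EM: normalized prediction must equal the normalized reference exactly."""
    return normalize_answer(prediction) == normalize_answer(ground_truth)


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 over the bag of normalized tokens."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark evaluations each prediction is typically scored against several reference answers and the maximum EM/F1 is taken, then averaged over the dataset.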

Papers

Showing 25 of 10817 papers

Title | Status | Hype
Modular Visual Question Answering via Code Generation | Code | 1
Improving Vietnamese Legal Question--Answering System based on Automatic Data Enrichment | | 0
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | | 0
Mapping the Challenges of HCI: An Application and Evaluation of ChatGPT and GPT-4 for Mining Insights at Scale | | 0
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance | Code | 2
Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning | Code | 0
When to Read Documents or QA History: On Unified and Selective Open-domain QA | | 0
Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers | | 0
Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering | Code | 1
Benchmarking Foundation Models with Language-Model-as-an-Examiner | | 0
Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering | | 0
Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data Augmentation | Code | 0
LogiQA 2.0—An Improved Dataset for Logical Reasoning in Natural Language Understanding | Code | 0
Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models | Code | 0
Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images! | Code | 1
Diversifying Joint Vision-Language Tokenization Learning | | 0
Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks | | 0
CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models | Code | 0
An Approach to Solving the Abstraction and Reasoning Corpus (ARC) Challenge | Code | 1
SamToNe: Improving Contrastive Loss for Dual Encoder Retrieval Models with Same Tower Negatives | | 0
Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models | Code | 0
PokemonChat: Auditing ChatGPT for Pokémon Universe Knowledge | | 0
Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese Medical Exam Dataset | Code | 1
Evaluation of AI Chatbots for Patient-Specific EHR Questions | | 0
Page 172 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified