SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document.
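As a concrete example, here is a minimal sketch of span-extraction QA with the Hugging Face transformers pipeline; the checkpoint name is one illustrative choice, not an endorsement of any entry below:

```python
from transformers import pipeline

# Extractive QA: the model scores start/end positions and returns the
# highest-scoring span from the context, not freely generated text.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Where is the answer taken from?",
    context="In span-prediction reading comprehension, the answer is a "
            "contiguous span copied directly from the passage.",
)
# result contains the span text plus its character offsets and a confidence score
print(result["answer"], result["start"], result["end"], result["score"])
```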

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; a sketch of each format follows below.
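To make the four formats concrete, here is a minimal sketch of one toy instance per category; all field names and texts are illustrative, not drawn from any specific dataset:

```python
# What the model must produce differs per category; the passage can be shared.
passage = "The capital of France is Paris."

examples = {
    # Cloze style: recover a masked token or entity.
    "cloze": {"query": "The capital of France is ___.", "answer": "Paris"},
    # Multiple choice: select one of the given options.
    "multiple_choice": {
        "question": "What is the capital of France?",
        "options": ["London", "Paris", "Rome", "Berlin"],
        "answer": "B",
    },
    # Span prediction: return character offsets of the answer in the passage.
    "span_prediction": {
        "question": "What is the capital of France?",
        "answer_span": (25, 30),  # passage[25:30] == "Paris"
    },
    # Free-form answer: generate text not constrained to a passage span.
    "free_form": {
        "question": "What is the capital of France?",
        "answer": "The capital of France is Paris.",
    },
}
```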

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
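Most of these benchmarks can be loaded through the Hugging Face datasets library; a minimal sketch with RACE, assuming the dataset is still hosted on the Hub under the name "race":

```python
from datasets import load_dataset

# RACE: multiple-choice reading comprehension from English exams.
race = load_dataset("race", "all", split="validation")

sample = race[0]
print(sample["article"][:200])  # the passage
print(sample["question"])       # the question about the passage
print(sample["options"])        # four candidate answers
print(sample["answer"])         # gold label, one of "A".."D"
```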

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1–50 of 1,760 papers (page 1 of 36)

| Title | Status | Hype |
| --- | --- | --- |
| Sailor: Open Language Models for South-East Asia | Code | 4 |
| Knowledge Fusion of Large Language Models | Code | 4 |
| Benchmarking Large Language Models on CFLUE -- A Chinese Financial Language Understanding Evaluation Dataset | Code | 3 |
| Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering | Code | 3 |
| Pre-Training with Whole Word Masking for Chinese BERT | Code | 3 |
| Language Models are Few-Shot Learners | Code | 3 |
| Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment | Code | 3 |
| Scaling Rectified Flow Transformers for High-Resolution Image Synthesis | Code | 3 |
| Harmonizing Visual Text Comprehension and Generation | Code | 2 |
| CLUE: A Chinese Language Understanding Evaluation Benchmark | Code | 2 |
| MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens | Code | 2 |
| Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | Code | 2 |
| TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing | Code | 2 |
| PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development | Code | 2 |
| TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy | Code | 2 |
| MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model | Code | 2 |
| Learning Dense Representations of Phrases at Scale | Code | 2 |
| MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering | Code | 2 |
| GPT4Point: A Unified Framework for Point-Language Understanding and Generation | Code | 2 |
| ST-LLM: Large Language Models Are Effective Temporal Learners | Code | 2 |
| What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams | Code | 2 |
| The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | Code | 2 |
| DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems | Code | 2 |
| DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2 |
| PaLM: Scaling Language Modeling with Pathways | Code | 2 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation | Code | 1 |
| Context-Aware Answer Extraction in Question Answering | Code | 1 |
| ArabicaQA: A Comprehensive Dataset for Arabic Question Answering | Code | 1 |
| ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers | Code | 1 |
| AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding | Code | 1 |
| ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks | Code | 1 |
| Context-faithful Prompting for Large Language Models | Code | 1 |
| CodeQA: A Question Answering Dataset for Source Code Comprehension | Code | 1 |
| CoHS-CQG: Context and History Selection for Conversational Question Generation | Code | 1 |
| Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset | Code | 1 |
| ChroniclingAmericaQA: A Large-scale Question Answering Dataset based on Historical American Newspaper Pages | Code | 1 |
| CL-ReLKT: Cross-lingual Language Knowledge Transfer for Multilingual Retrieval Question Answering | Code | 1 |
| Compresso: Structured Pruning with Collaborative Prompting Learns Compact Large Language Models | Code | 1 |
| Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1 |
| Break It Down: A Question Understanding Benchmark | Code | 1 |
| Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction | Code | 1 |
| Analyzing Multi-Task Learning for Abstractive Text Summarization | Code | 1 |
| Biomedical named entity recognition using BERT in the machine reading comprehension framework | Code | 1 |
| BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | Code | 1 |
| An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning | Code | 1 |
| AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents | Code | 1 |
| A Robustly Optimized BMRC for Aspect Sentiment Triplet Extraction | Code | 1 |
| CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding | Code | 1 |
| Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | – | Unverified |
| 3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified |
| 4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified |
| 5 | Knowledge model | Test | 79.2 | – | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | – | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | – | Unverified |
| 9 | WWZ | Test | 69.7 | – | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ALBERT (ensemble) | Accuracy | 91.4 | – | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | – | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | – | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Golden Transformer | Average F1 | 0.94 | – | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | – | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | – | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT | MSE | 0.05 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified |
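
Several of the leaderboards above report span-level F1. For reference, here is a minimal sketch of SQuAD-style token-overlap F1; real evaluation scripts additionally normalize articles, punctuation, and casing before comparing:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the Eiffel Tower", "Eiffel Tower"))  # 0.8
```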