Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.
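
The snippet below is a minimal sketch of this span-extraction framing, assuming the Hugging Face transformers library and a SQuAD-finetuned checkpoint; neither is prescribed by this page, they are illustrative choices.

```python
# Hedged sketch: extractive (span-prediction) QA with a pretrained pipeline.
# The model name is an assumption for illustration, not this page's method.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension asks a model to answer a question "
    "about a given paragraph, often by selecting a span of the text."
)
result = qa(question="What does the model select as the answer?", context=context)

# The pipeline returns the predicted span, its character offsets, and a score.
print(result["answer"], result["start"], result["end"], result["score"])
```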

Specific variants of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; each format is sketched below.
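
To make the four categories concrete, here is an illustrative sketch (not from this page) of what one example looks like in each format, written as plain dictionaries:

```python
# Illustrative data samples for the four MRC categories named above.
cloze = {
    "context": "Paris is the capital of ____.",
    "answer": "France",                        # fill in the blanked-out token
}
multiple_choice = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "options": ["Berlin", "Paris", "Rome"],
    "answer": "Paris",                         # select one of the options
}
span_prediction = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answer_span": (0, 5),                     # character offsets of "Paris"
}
free_form = {
    "context": "Paris is the capital of France.",
    "question": "Why is Paris important?",
    "answer": "It is France's capital city.",  # generated, not extracted
}
```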

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
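
As a hedged sketch of working with one of these benchmarks, the following loads RACE with the Hugging Face datasets library; the library choice is an assumption, and the field names below are those of the RACE dataset on that hub.

```python
# Load the RACE reading comprehension benchmark (multiple-choice format).
from datasets import load_dataset

race = load_dataset("race", "all", split="test")
sample = race[0]
print(sample["article"][:100])   # the passage the question is about
print(sample["question"])        # the question
print(sample["options"])         # four candidate answers
print(sample["answer"])          # gold label: "A"/"B"/"C"/"D"
```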

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 26–50 of 1760 papers (page 2 of 71)

Title | Status | Hype
PaLM: Scaling Language Modeling with Pathways | Code | 2
A Robustly Optimized BMRC for Aspect Sentiment Triplet Extraction | Code | 1
Context-Aware Answer Extraction in Question Answering | Code | 1
CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation | Code | 1
A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction | Code | 1
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension | Code | 1
ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers | Code | 1
Context-faithful Prompting for Large Language Models | Code | 1
CoHS-CQG: Context and History Selection for Conversational Question Generation | Code | 1
ArabicaQA: A Comprehensive Dataset for Arabic Question Answering | Code | 1
Compresso: Structured Pruning with Collaborative Prompting Learns Compact Large Language Models | Code | 1
CL-ReLKT: Cross-lingual Language Knowledge Transfer for Multilingual Retrieval Question Answering | Code | 1
Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset | Code | 1
CodeQA: A Question Answering Dataset for Source Code Comprehension | Code | 1
ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks | Code | 1
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1
Break It Down: A Question Understanding Benchmark | Code | 1
Can large language models reason about medical questions? | Code | 1
An MRC Framework for Semantic Role Labeling | Code | 1
An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning | Code | 1
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition | Code | 1
AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding | Code | 1
AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents | Code | 1
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | Code | 1
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified