SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.
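To make the span-extraction framing concrete, here is a minimal sketch using the Hugging Face transformers question-answering pipeline with a SQuAD-fine-tuned checkpoint (the library and the checkpoint name are assumptions of this sketch, not something prescribed by any benchmark): the model takes a question and a context document and returns the answer as a character span within that document.

```python
# Minimal span-extraction QA sketch. Assumes the Hugging Face
# `transformers` library and the public SQuAD-fine-tuned checkpoint
# "distilbert-base-cased-distilled-squad"; any extractive QA model would do.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

document = ("Machine reading comprehension tasks give a model a passage "
            "and a question, and ask it to locate the answer in the text.")
question = "What does the model receive besides the passage?"

result = qa(question=question, context=document)
# `result` is a dict holding the predicted answer string plus its
# character offsets in the document:
# {'answer': ..., 'start': ..., 'end': ..., 'score': ...}
print(result["answer"], result["start"], result["end"])
```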

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; each category is described in the survey cited below.
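As an illustrative sketch, one instance under each of the four categories might look like the following. The passage, questions, and field names are invented for illustration and are not drawn from any real dataset.

```python
# Invented examples of the four machine reading comprehension formats;
# the schema below is an illustrative assumption, not a real dataset's.
passage = "Marie Curie won the Nobel Prize in Physics in 1903."

cloze = {
    "passage": passage,
    "query":   "Marie Curie won the Nobel Prize in ___ in 1903.",
    "answer":  "Physics",            # fill in the blank
}
multiple_choice = {
    "passage":  passage,
    "question": "Which prize did Marie Curie win in 1903?",
    "options":  ["Chemistry", "Physics", "Peace", "Literature"],
    "answer":   "B",                 # select one option
}
span_prediction = {
    "passage":  passage,
    "question": "When did Marie Curie win the Nobel Prize in Physics?",
    "answer":   (46, 50),            # passage[46:50] == "1903"
}
free_form = {
    "passage":  passage,
    "question": "What is Marie Curie known for?",
    "answer":   "Winning the 1903 Nobel Prize in Physics.",  # generated text
}
```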

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
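As a hedged sketch of what one of these benchmarks looks like in practice, the following assumes the Hugging Face datasets library and its hosted copy of RACE (dataset name "race", configuration "high"); the field names reflect that hosted version.

```python
# Sketch of inspecting one benchmark example. Assumes the Hugging Face
# `datasets` library hosts RACE as "race" with "high"/"middle" configs.
from datasets import load_dataset

race = load_dataset("race", "high", split="validation")
example = race[0]

# Each RACE example pairs a passage with a multiple-choice question.
print(example["article"][:200])   # the passage
print(example["question"])        # the question
print(example["options"])         # four candidate answers
print(example["answer"])          # gold option letter, e.g. "B"
```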

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1101–1125 of 1760 papers

Title | Status | Hype
JEC-QA: A Legal-Domain Question Answering Dataset | - | 0
Evaluating Commonsense in Pre-trained Language Models | Code | 0
Label Dependent Deep Variational Paraphrase Generation | - | 0
Unsupervised Domain Adaptation of Language Models for Reading Comprehension | - | 0
Temporal Reasoning via Audio Question Answering | Code | 0
Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets | - | 0
Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension | - | 0
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering | - | 0
Robust Reading Comprehension with Linguistic Constraints via Posterior Regularization | - | 0
Contextual Recurrent Units for Cloze-style Reading Comprehension | - | 0
Unsupervised Domain Adaptation on Reading Comprehension | Code | 0
Meta Answering for Machine Reading | - | 0
Improving Machine Reading Comprehension via Adversarial Training | - | 0
An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension | - | 0
Ask to Learn: A Study on Curiosity-driven Question Generation | - | 0
Dice Loss for Data-imbalanced NLP Tasks | Code | 0
How to Pre-Train Your Model? Comparison of Different Pre-Training Models for Biomedical Question Answering | - | 0
Design and Challenges of Cloze-Style Reading Comprehension Tasks on Multiparty Dialogue | - | 0
Proceedings of the 2nd Workshop on Machine Reading for Question Answering | - | 0
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension | - | 0
Machine Comprehension Improves Domain-Specific Japanese Predicate-Argument Structure Analysis | - | 0
Commonsense Inference in Natural Language Processing (COIN) - Shared Task Report | - | 0
Improving Pre-Trained Multilingual Model with Vocabulary Expansion | - | 0
Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior | - | 0
D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERTxxlarge+DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified