SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is asked about a paragraph or document, and the answer is often a span in that document.

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
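As a concrete illustration of the span-prediction category, here is a minimal sketch using the Hugging Face transformers question-answering pipeline. The checkpoint named below is one publicly available SQuAD-finetuned model chosen for illustration; any extractive QA checkpoint would work.

```python
# Minimal span-prediction sketch with the Hugging Face `transformers` pipeline.
# The checkpoint is one public SQuAD-finetuned model (an illustrative choice).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension asks a model to answer questions about "
    "a given document. In span prediction, the answer is a contiguous "
    "substring of that document."
)
result = qa(question="What is the answer in span prediction?", context=context)

# The pipeline returns the extracted span, its character offsets, and a score.
print(result["answer"], result["start"], result["end"], round(result["score"], 3))
```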

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
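For example, RACE (a multiple-choice benchmark) can be loaded with the Hugging Face datasets library. The dataset id, config names, and field names below follow the Hub release at the time of writing and may change:

```python
# Loading the RACE benchmark (multiple-choice reading comprehension) with
# the Hugging Face `datasets` library. Field names follow the Hub release.
from datasets import load_dataset

race = load_dataset("race", "high", split="validation")

example = race[0]
print(example["article"][:200])   # the passage to read
print(example["question"])        # the question about the passage
print(example["options"])         # the four answer candidates
print(example["answer"])          # gold label: "A", "B", "C", or "D"
```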

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1201–1250 of 1760 papers

Title | Status | Hype
XCMRC: Evaluating Cross-lingual Machine Reading Comprehension | - | 0
X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension | Code | 0
SG-Net: Syntax-Guided Machine Reading Comprehension | Code | 0
Reasoning-Driven Question-Answering for Natural Language Understanding | - | 0
FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension | Code | 0
Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning | - | 0
AmazonQA: A Review-Based Question Answering Task | Code | 0
Dialog State Tracking: A Neural Reading Comprehension Approach | - | 0
Beyond English-Only Reading Comprehension: Experiments in Zero-Shot Multilingual Transfer for Bulgarian | Code | 0
On Understanding the Relation between Expert Annotations of Text Readability and Target Reader Comprehension | Code | 0
Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation | - | 0
Measuring text readability with machine comprehension: a pilot study | - | 0
GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for Conversational Machine Comprehension | Code | 0
MacNet: Transferring Knowledge from Machine Comprehension to Sequence-to-Sequence Models | - | 0
Tackling Graphical NLP problems with Graph Recurrent Networks | Code | 0
An Effective Multi-Stage Approach For Question Answering | - | 0
Neural Machine Reading Comprehension: Methods and Trends | - | 0
Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension | - | 0
MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension | - | 0
CALOR-QUEST: un corpus d'entraînement et d'évaluation pour la compréhension automatique de textes (a training and evaluation corpus for machine reading comprehension, where questions are not generic in scope but relate to a particular document) | - | 0
Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text | - | 0
A Spreading Activation Framework for Tracking Conceptual Complexity of Texts | - | 0
Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension | Code | 0
Katecheo: A Portable and Modular System for Multi-Topic Question Answering | Code | 0
Active Reading Comprehension: A Dataset for Learning the Question-Answer Relationship Strategy | - | 0
Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension | - | 0
Machine Reading Comprehension: a Literature Review | - | 0
EQuANt (Enhanced Question Answer Network) | Code | 0
Be Consistent! Improving Procedural Text Comprehension using Label Consistency | Code | 0
Automatic learner summary assessment for reading comprehension | - | 0
Structured Pruning of Recurrent Neural Networks through Neuron Selection | - | 0
Augmenting Neural Networks with First-order Logic | Code | 0
Learning to Ask Unanswerable Questions for Machine Reading Comprehension | - | 0
Neural Arabic Question Answering | Code | 0
Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension | Code | 0
Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension | Code | 0
A Survey on Neural Machine Reading Comprehension | - | 0
Multi-hop Reading Comprehension through Question Decomposition and Rescoring | Code | 0
RankQA: Neural Question Answering with Answer Re-Ranking | Code | 0
Compositional Questions Do Not Necessitate Multi-hop Reasoning | Code | 0
Generating Question-Answer Hierarchies | Code | 0
Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading | Code | 0
ChID: A Large-scale Chinese IDiom Dataset for Cloze Test | Code | 0
Question Answering as an Automatic Evaluation Metric for News Article Summarization | Code | 0
Document-Level N-ary Relation Extraction with Multiscale Representation Learning | - | 0
Enhancing Key-Value Memory Neural Networks for Knowledge Based Question Answering | - | 0
Yimmon at SemEval-2019 Task 9: Suggestion Mining with Hybrid Augmented Approaches | - | 0
Online Distilling from Checkpoints for Neural Machine Translation | - | 0
Eidos, INDRA, & Delphi: From Free Text to Executable Causal Models | Code | 0
Is It Dish Washer Safe? Automatically Answering "Yes/No" Questions Using Customer Reviews | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall F1 | 62.7 | - | Unverified
3 | BiDAF | Overall F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified