SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is posed about a paragraph or document, and the answer is often a span of that document.

Specific variants of the task include multi-modal machine reading comprehension and purely textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
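To make the span-prediction setting concrete, here is a minimal sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name and example text are illustrative assumptions, not anything prescribed by this page:

```python
# Minimal span-prediction sketch (assumes `transformers` is installed;
# the checkpoint below is an illustrative choice, not this site's method).
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("Machine reading comprehension asks a model to answer a question "
           "about a passage, often by extracting a span of the passage.")

result = qa(question="How is the answer typically produced?", context=context)
# The pipeline returns the extracted span, a confidence score, and the
# character offsets of the span within the context.
print(result["answer"], result["score"], result["start"], result["end"])
```

The other three categories change only the output side of the model: cloze style fills a blank, multiple choice scores candidate options, and free-form answer generates text rather than pointing at a span.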

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
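RACE, for example, is a multiple-choice benchmark that can usually be pulled from the Hugging Face hub; a hedged sketch, assuming the `datasets` package and the hub id "race" (hosting and config names may differ across `datasets` versions):

```python
# Hedged sketch: loading the RACE benchmark with the `datasets` library.
# The hub id "race" and its "all"/"high"/"middle" configs are assumptions
# about how the dataset is currently hosted.
from datasets import load_dataset

race = load_dataset("race", "all")
example = race["train"][0]
print(example["article"][:200])                 # the passage
print(example["question"], example["options"])  # 4-way multiple choice
print(example["answer"])                        # gold option label, e.g. "B"
```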

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1151–1200 of 1760 papers

Title | Status | Hype
FriendsQA: Open-Domain Question Answering on TV Show Transcripts | - | 0
Cross-Lingual Machine Reading Comprehension | Code | 0
Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension | - | 0
QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization | Code | 0
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning | - | 0
DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension | Code | 0
Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension | Code | 0
Interactive Language Learning by Question Answering | Code | 1
SpatialNLI: A Spatial Domain Natural Language Interface to Databases Using Spatial Comprehension | - | 0
Interactive Machine Comprehension with Information Seeking Agents | Code | 0
Ensemble approach for natural language question answering problem | - | 0
Query-Based Named Entity Recognition | - | 0
Adversarial Domain Adaptation for Machine Reading Comprehension | - | 0
Universal Adversarial Triggers for Attacking and Analyzing NLP | Code | 0
GeoSQA: A Benchmark for Scenario-based Question Answering in the Geography Domain at High School Level | - | 0
CFO: A Framework for Building Production NLP Systems | - | 0
Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning | Code | 1
Reasoning Over Paragraph Effects in Situations | - | 0
XCMRC: Evaluating Cross-lingual Machine Reading Comprehension | - | 0
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning | Code | 0
FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension | Code | 0
SG-Net: Syntax-Guided Machine Reading Comprehension | Code | 0
Reasoning-Driven Question-Answering for Natural Language Understanding | - | 0
X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension | Code | 0
Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning | - | 0
AmazonQA: A Review-Based Question Answering Task | Code | 0
Dialog State Tracking: A Neural Reading Comprehension Approach | - | 0
Beyond English-Only Reading Comprehension: Experiments in Zero-Shot Multilingual Transfer for Bulgarian | Code | 0
On Understanding the Relation between Expert Annotations of Text Readability and Target Reader Comprehension | Code | 0
Measuring text readability with machine comprehension: a pilot study | - | 0
Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation | - | 0
GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for Conversational Machine Comprehension | Code | 0
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1
MacNet: Transferring Knowledge from Machine Comprehension to Sequence-to-Sequence Models | - | 0
Tackling Graphical NLP problems with Graph Recurrent Networks | Code | 0
An Effective Multi-Stage Approach For Question Answering | - | 0
Neural Machine Reading Comprehension: Methods and Trends | - | 0
CALOR-QUEST: A Training and Evaluation Corpus for Machine Reading Comprehension of Texts | - | 0
A Spreading Activation Framework for Tracking Conceptual Complexity of Texts | - | 0
MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension | - | 0
Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension | Code | 0
Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension | - | 0
Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension | - | 0
Active Reading Comprehension: A Dataset for Learning the Question-Answer Relationship Strategy | - | 0
XQA: A Cross-lingual Open-domain Question Answering Dataset | Code | 1
Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text | - | 0
Katecheo: A Portable and Modular System for Multi-Topic Question Answering | Code | 0
Machine Reading Comprehension: a Literature Review | - | 0
EQuANt (Enhanced Question Answer Network) | Code | 0
Be Consistent! Improving Procedural Text Comprehension using Label Consistency | Code | 0
Page 24 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified
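Several of the leaderboards above report span-level F1 ("Overall: F1", "Answer F1"). Below is a minimal sketch of the SQuAD-style token-overlap F1 that such scores are typically built on; whether each benchmark uses exactly this variant, including its answer-normalization details, is an assumption:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection: each shared token counts at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a span of text", "span of its text"))  # 0.75
```

Benchmarks with multiple gold answers usually take the maximum F1 over the references for each question and then average over questions.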