SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.
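
As a concrete illustration of the span-style setup, here is a minimal sketch that runs an off-the-shelf extractive question answering model over a short passage. It assumes the Hugging Face transformers library is installed; the checkpoint name is purely illustrative.

```python
# Minimal sketch of span-based (extractive) question answering, assuming the
# Hugging Face `transformers` library is installed. The checkpoint name is
# illustrative; any SQuAD-style extractive QA model behaves similarly.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension asks a model to answer a question about a "
    "given passage. In extractive settings the answer is a span of that passage."
)
result = qa(question="What is the answer in extractive settings?", context=context)

# The pipeline returns the predicted span text along with its character
# offsets into the context, e.g. {'answer': ..., 'start': ..., 'end': ..., 'score': ...}.
print(result["answer"], result["start"], result["end"])
```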

Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
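
To make the four categories concrete, the toy records below pose the same passage in each format. The field names are illustrative only and are not tied to any particular dataset schema.

```python
# Toy examples of the four machine reading comprehension formats.
# Field names are illustrative and not taken from any specific dataset.
passage = "The Amazon River flows through Brazil before reaching the Atlantic Ocean."

# 1. Cloze style: recover the word or entity removed from a statement.
cloze = {
    "passage": passage,
    "query": "The Amazon River reaches the ___ Ocean.",
    "answer": "Atlantic",
}

# 2. Multiple choice: pick the correct option for a question.
multiple_choice = {
    "passage": passage,
    "question": "Which ocean does the Amazon River reach?",
    "options": ["Pacific", "Atlantic", "Indian", "Arctic"],
    "answer_index": 1,
}

# 3. Span prediction: the answer is a contiguous span of the passage,
#    usually given as character (or token) offsets.
start = passage.find("Brazil")
span_prediction = {
    "passage": passage,
    "question": "Which country does the Amazon River flow through?",
    "answer_span": (start, start + len("Brazil")),
}

# 4. Free-form answer: the answer is generated text, not constrained
#    to appear verbatim in the passage.
free_form = {
    "passage": passage,
    "question": "Describe the course of the Amazon River.",
    "answer": "It flows through Brazil and empties into the Atlantic Ocean.",
}
```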

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 651–700 of 1,760 papers

Title | Status | Hype
Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models | Code | 0
EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension | Code | 0
A New Entity Extraction Method Based on Machine Reading Comprehension | – | 0
How Optimal is Greedy Decoding for Extractive Question Answering? | Code | 1
An Intelligent Recommendation-cum-Reminder System | – | 0
BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset | Code | 0
Decoupled Transformer for Scalable Inference in Open-domain Question Answering | – | 0
Towards a Better Understanding Human Reading Comprehension with Brain Signals | Code | 0
From LSAT: The Progress and Challenges of Complex Reasoning | Code | 1
Benchmarking: Past, Present and Future | Code | 1
Multi-Strategy Knowledge Distillation Based Teacher-Student Framework for Machine Reading Comprehension | – | 0
面向机器阅读理解的高质量藏语数据集构建 (Construction of High-quality Tibetan Dataset for Machine Reading Comprehension) | – | 0
Topic Knowledge Acquisition and Utilization for Machine Reading Comprehension in Social Media Domain | – | 0
基于小句复合体的中文机器阅读理解研究 (Machine Reading Comprehension Based on Clause Complex) | – | 0
A Chinese Machine Reading Comprehension Dataset Automatic Generated Based on Knowledge Graph | – | 0
基于篇章结构攻击的阅读理解任务探究 (Analysis of Reading Comprehension Tasks based on passage structure attacks) | – | 0
基于阅读理解的汉越跨语言新闻事件要素抽取方法 (News Events Element Extraction of Chinese-Vietnamese Cross-language Using Reading Comprehension) | – | 0
Ti-Reader: 基于注意力机制的藏文机器阅读理解端到端网络模型 (Ti-Reader: An End-to-End Network Model Based on Attention Mechanisms for Tibetan Machine Reading Comprehension) | – | 0
Incorporating Compositionality and Morphology into End-to-End Models | – | 0
Leveraging Type Descriptions for Zero-shot Named Entity Recognition and Classification | – | 0
Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction | – | 0
Towards a more Robust Evaluation for Conversational Question Answering | – | 0
Learning Event Graph Knowledge for Abductive Reasoning | Code | 1
DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications | – | 0
Stanford MLab at SemEval-2021 Task 1: Tree-Based Modelling of Lexical Complexity using Word Embeddings | – | 0
Noobs at Semeval-2021 Task 4: Masked Language Modeling for abstract answer prediction | – | 0
DeepBlueAI at SemEval-2021 Task 1: Lexical Complexity Prediction with A Deep Ensemble Approach | – | 0
UoR at SemEval-2021 Task 4: Using Pre-trained BERT Token Embeddings for Question Answering of Abstract Meaning | – | 0
TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension | – | 0
BiQuAD: Towards QA based on deeper text understanding | – | 0
PINGAN Omini-Sinitic at SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning | – | 0
ECNU_ICA_1 SemEval-2021 Task 4: Leveraging Knowledge-enhanced Graph Attention Networks for Reading Comprehension of Abstract Meaning | – | 0
Attention-based Aspect Reasoning for Knowledge Base Question Answering on Clinical Notes | – | 0
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition | Code | 1
QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension | – | 0
Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning | – | 0
Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy | – | 0
Sequence Model with Self-Adaptive Sliding Window for Efficient Spoken Document Segmentation | – | 0
Bridging the Gap between Language Model and Reading Comprehension: Unsupervised MRC via Self-Supervision | – | 0
Automatic Task Requirements Writing Evaluation via Machine Reading Comprehension | Code | 0
Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills | Code | 1
FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark | Code | 1
Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization | Code | 0
Improving Low-resource Reading Comprehension via Cross-lingual Transposition Rethinking | – | 0
An Initial Investigation of Non-Native Spoken Question-Answering | – | 0
Keep it Simple: Unsupervised Simplification of Multi-Paragraph Text | Code | 1
Audio-Oriented Multimodal Machine Comprehension: Task, Dataset and Model | – | 0
ClueReader: Heterogeneous Graph Attention Network for Multi-hop Machine Reading Comprehension | – | 0
What Makes a Concept Complex? Measuring Conceptual Complexity as a Precursor for Text Simplification | – | 0
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding | Code | 1

Page 14 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified