SOTAVerified

Reading Comprehension

Most current question-answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document.

Specific variants of the task include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
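Since span prediction is the framing behind most of the benchmarks on this page, a minimal sketch of extractive QA may help make it concrete. It assumes the Hugging Face transformers library and the distilbert-base-cased-distilled-squad checkpoint; neither is prescribed by this page, and any extractive-QA model would do:

```python
# Hedged sketch: span-prediction QA with a pretrained extractive model.
# The model checkpoint below is an assumption, not something this page mandates.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension asks a model to answer a question "
    "about a passage, often by selecting a span from the passage itself."
)
result = qa(
    question="What does the model select from the passage?",
    context=context,
)

# The pipeline returns the answer text plus the character offsets of the
# predicted span within the context, which is exactly the span-prediction format.
print(result["answer"], result["start"], result["end"])
```

Cloze, multiple-choice, and free-form variants change the output format, but the passage-plus-question input framing stays the same.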

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
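For concreteness, such benchmarks are usually loaded programmatically. The sketch below pulls RACE via the Hugging Face datasets library; the Hub dataset ID ("race") and its field names are assumptions rather than anything this page specifies:

```python
# Hedged sketch: inspecting one RACE example with the `datasets` library.
from datasets import load_dataset

# RACE is a multiple-choice reading-comprehension benchmark; the "high"
# config holds the high-school portion.
race = load_dataset("race", "high", split="validation")
example = race[0]

print(example["article"][:200])               # the passage
print(example["question"])                    # the question about the passage
print(example["options"], example["answer"])  # four options and the gold letter
```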

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 751–800 of 1,760 papers

End-to-End QA on COVID-19: Domain Adaptation with Synthetic Training
A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach
Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model
Comparative Analysis of Neural QA models on SQuAD
Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy
Graphical Schemes May Improve Readability but Not Understandability for People with Dyslexia
Graph Sequential Network for Reasoning over Sequences
Grounding Gradable Adjectives through Crowdsourcing
Complex Factoid Question Answering with a Free-Text Knowledge Graph
A Survey on Explainability in Machine Reading Comprehension
Adapting Large Language Models to Domains via Reading Comprehension
End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension
HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering
Complex Word Identification Based on Frequency in a Learner Corpus
Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks
Have You Seen That Number? Investigating Extrapolation in Question Answering Models
ClueReader: Heterogeneous Graph Attention Network for Multi-hop Machine Reading Comprehension
HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for Commonsense Reading Comprehension
HIBOU: an eBook to improve Text Comprehension and Reading Fluency for Beginning Readers of French
Composing RNNs and FSTs for Small Data: Recovering Missing Characters in Old Hawaiian Text
Hierarchical Attention Model for Improved Machine Comprehension of Spoken Content
A Survey on Measuring and Mitigating Reasoning Shortcuts in Machine Reading Comprehension
Hierarchical Evaluation Framework: Best Practices for Human Evaluation
Hierarchical Learning for Generation with Long Source Sequences
Bridging Information-Seeking Human Gaze and Machine Reading Comprehension
High-throughput Biomedical Relation Extraction for Semi-Structured Web Articles Empowered by Large Language Models
HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs
How Context Affects Language Models' Factual Predictions
Accurate Supervised and Semi-Supervised Machine Reading for Long Documents
How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks
emrQA-msquad: A Medical Dataset Structured with the SQuAD V2.0 Framework, Enriched with emrQA Medical Information
Computational Approaches to Sentence Completion
How to Pre-Train Your Model? Comparison of Different Pre-Training Models for Biomedical Question Answering
How Well Do Multi-hop Reading Comprehension Models Understand Date Information?
Empirical Methods for the Study of Denotation in Nominalizations in Spanish
How You Ask Matters: The Effect of Paraphrastic Questions to BERT Performance on a Clinical SQuAD Dataset
HRCA+: Advanced Multiple-choice Machine Reading Comprehension Method
Human Needs Categorization of Affective Events Using Labeled and Unlabeled Data
Empirical Evaluation of Post-Training Quantization Methods for Language Tasks
IdeaReader: A Machine Reading System for Understanding the Idea Flow of Scientific Publications
A Knowledge Regularized Hierarchical Approach for Emotion Cause Analysis
Identifying Where to Focus in Reading Comprehension for Neural Question Generation
Interpretable Semantic Role Relation Table for Supporting Facts Recognition of Reading Comprehension
Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension
I Do Not Understand What I Cannot Define: Automatic Question Generation With Pedagogically-Driven Content Selection
IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation
Investigating the importance of linguistic complexity features across different datasets related to language learning
IIT-KGP at COIN 2019: Using pre-trained Language Models for modeling Machine Comprehension
News Events Element Extraction of Chinese-Vietnamese Cross-language Using Reading Comprehension
Emergent Predication Structure in Hidden State Vectors of Neural Readers

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | – | Unverified |
| 3 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified |
| 4 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified |
| 5 | Knowledge model | Test | 79.2 | – | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | – | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | – | Unverified |
| 9 | WWZ | Test | 69.7 | – | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | – | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | – | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Golden Transformer | Average F1 | 0.94 | – | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | – | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | – | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BERT | MSE | 0.05 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified |
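The tables above report F1 and accuracy numbers without defining them. A common reading of "Answer F1" and "Overall: F1" on reading-comprehension leaderboards is SQuAD-style token-overlap F1 between the predicted and gold answer strings; the sketch below implements that interpretation. This is an assumption about these particular leaderboards, and the official SQuAD scorer additionally strips punctuation and articles before comparing:

```python
# Hedged sketch: SQuAD-style token-overlap F1 between two answer strings.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # If either answer is empty, F1 is 1.0 only when both are empty.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a span in the document", "span in the document"))  # ≈ 0.889
```

Dataset-level scores are then typically the mean of per-question F1, taking the maximum over multiple gold answers where a question has more than one.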