Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span of that document.
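
As a minimal sketch of this span-extraction framing, the snippet below runs an off-the-shelf extractive QA model through the Hugging Face `transformers` pipeline. The checkpoint name is one public example, and the passage and question are invented for illustration.

```python
# A minimal sketch of span-style extractive QA with the Hugging Face
# `transformers` pipeline. The checkpoint name is one public example;
# the passage and question are invented for illustration.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("Machine reading comprehension asks a model to answer a question "
           "about a given passage, often by selecting a span of the passage.")
result = qa(question="What does the model select from the passage?",
            context=context)

# The pipeline returns the predicted span text plus its character offsets
# into the context and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```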

Specific variants of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
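
The toy instances below illustrate the shape of each of the four formats; all passages, questions, and field names are invented for illustration and are not drawn from any benchmark.

```python
# Toy instances showing the shape of each MRC format; the text and field
# names here are invented for illustration, not drawn from any benchmark.
cloze = {                      # fill in a masked token or entity
    "passage": "The capital of France is Paris.",
    "query": "The capital of France is @placeholder.",
    "answer": "Paris",
}
multiple_choice = {            # pick one of several candidate answers
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "options": ["London", "Paris", "Berlin", "Madrid"],
    "answer_index": 1,
}
span_prediction = {            # point to a contiguous span of the passage
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "answer_span": (25, 30),   # character offsets of "Paris"
}
free_form = {                  # write the answer out as free text
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "answer": "The capital city of France is Paris.",
}
```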

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
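
As a sketch of how such benchmarks are typically accessed, the snippet below loads RACE through the Hugging Face `datasets` library; the dataset ID `race` and the field names match its published schema at the time of writing.

```python
# A sketch of loading the RACE benchmark with the Hugging Face `datasets`
# library; dataset ID and field names match its schema at time of writing.
from datasets import load_dataset

race = load_dataset("race", "all", split="validation")
example = race[0]

print(example["article"][:200])   # the passage
print(example["question"])        # the question (may contain a blank)
print(example["options"])         # four candidate answers
print(example["answer"])          # the correct option label, "A".."D"
```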

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 76–100 of 1760 papers

| Title | Status | Hype |
| --- | --- | --- |
| On the token distance modeling ability of higher RoPE attention dimension | | 0 |
| Increasing the Difficulty of Automatically Generated Questions via Reinforcement Learning with Synthetic Preference | | 0 |
| Fine-Grained Prediction of Reading Comprehension from Eye Movements | Code | 0 |
| Punctuation Prediction for Polish Texts using Transformers | | 0 |
| Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations | | 0 |
| On the Inductive Bias of Stacking Towards Improving Reasoning | | 0 |
| Rehearsing Answers to Probable Questions with Perspective-Taking | | 0 |
| Training Language Models to Win Debates with Self-Play Improves Judge Accuracy | Code | 1 |
| Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI | Code | 0 |
| Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension | Code | 0 |
| Towards Building a Robust Knowledge Intensive Question Answering Model with Large Language Models | | 0 |
| Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0 |
| Evaluating Large Language Models with Tests of Spanish as a Foreign Language: Pass or Fail? | | 0 |
| Seemingly Plausible Distractors in Multi-Hop Reasoning: Are Large Language Models Attentive Readers? | Code | 0 |
| DiVA-DocRE: A Discriminative and Voice-Aware Paradigm for Document-Level Relation Extraction | | 0 |
| Bypassing DARCY Defense: Indistinguishable Universal Adversarial Triggers | | 0 |
| DataSculpt: Crafting Data Landscapes for Long-Context LLMs through Multi-Objective Partitioning | Code | 1 |
| FabricQA-Extractor: A Question Answering System to Extract Information from Documents using Natural Language Questions | | 0 |
| Investigating a Benchmark for Training-set free Evaluation of Linguistic Capabilities in Machine Reading Comprehension | | 0 |
| Enhancing Robustness of Retrieval-Augmented Language Models with In-Context Learning | | 0 |
| AutoFAIR: Automatic Data FAIRification via Machine Reading | | 0 |
| Developing PUGG for Polish: A Modern Approach to KBQA, MRC, and IR Dataset Construction | Code | 0 |
| SNFinLLM: Systematic and Nuanced Financial Domain Adaptation of Chinese Large Language Models | | 0 |
| Recent Advances in Multi-Choice Machine Reading Comprehension: A Survey on Methods and Datasets | | 0 |
| SAT3D: Image-driven Semantic Attribute Transfer in 3D | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Rational Reasoner / IDOL | Test | 80.6 | | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | | Unverified |
| 3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | | Unverified |
| 4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | | Unverified |
| 5 | Knowledge model | Test | 79.2 | | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | | Unverified |
| 9 | WWZ | Test | 69.7 | | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ALBERT (Ensemble) | Accuracy | 91.4 | | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | | Unverified |
| 3 | ALBERTxxlarge+DUMA (ensemble) | Accuracy | 89.8 | | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Golden Transformer | Average F1 | 0.94 | | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT | MSE | 0.05 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | | Unverified |
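
Several of the leaderboards above report span-overlap F1 ("Overall: F1", "Answer F1"). Below is a minimal sketch of the standard token-level F1 used in SQuAD-style evaluation; real evaluation scripts additionally normalize answers (lowercasing, stripping punctuation and articles), which is omitted here for brevity.

```python
# A minimal sketch of token-overlap F1 for span-style MRC evaluation.
# Real scripts add answer normalization, which is omitted here.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Count tokens shared between prediction and reference (with multiplicity).
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a span in the document", "the span in the document"))  # 0.8
```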