
Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.
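
In the span-prediction setting, a system maps a (question, paragraph) pair to the start and end positions of the answer inside the paragraph. The sketch below illustrates this with the Hugging Face transformers question-answering pipeline; the distilbert-base-cased-distilled-squad checkpoint is just one publicly available extractive QA model chosen for illustration, not an endorsement of any particular system.

```python
# Minimal extractive (span-prediction) QA sketch using the Hugging Face
# transformers "question-answering" pipeline. The checkpoint is an example
# of a publicly available extractive QA model.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension systems answer questions about a given "
    "paragraph. In extractive settings the answer is a span of the paragraph itself."
)
result = qa(question="What is the answer in extractive settings?",
            context=context)

# The pipeline returns the predicted answer span, its character offsets in
# the context, and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```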

Specific variants of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
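
The four categories differ mainly in how the question and answer are expressed. Below is a hedged illustration of what one instance of each format might look like as plain Python dicts; the field names and toy examples are assumptions made for this sketch, not a standard schema.

```python
# Hand-written toy instances of the four machine reading comprehension formats.
cloze = {
    "context": "Paris is the capital of France.",
    "query": "____ is the capital of France.",   # answer fills the blank
    "answer": "Paris",
}
multiple_choice = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "options": ["Berlin", "Paris", "Madrid", "Rome"],
    "answer": "B",                               # index of the correct option
}
span_prediction = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answer_span": (0, 5),                       # character offsets of "Paris"
}
free_form = {
    "context": "Paris is the capital of France.",
    "question": "Why is Paris important to France?",
    "answer": "It is the country's capital.",    # generated, not extracted
}
```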

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
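
As an example of working with one of these benchmarks, the sketch below loads RACE with the Hugging Face datasets library. It assumes the dataset is hosted under the "race" identifier with high/middle/all configurations and article/question/options/answer fields; adjust the identifier if the hub listing differs.

```python
# Sketch: load the RACE multiple-choice reading comprehension benchmark.
from datasets import load_dataset

race = load_dataset("race", "high")     # "high", "middle", or "all"
example = race["train"][0]

print(example["article"][:200])         # the passage
print(example["question"])              # the question about the passage
print(example["options"])               # candidate answers
print(example["answer"])                # gold label, e.g. "A"
```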

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 951–1000 of 1760 papers

Thread of Thought Unraveling Chaotic Contexts
Time Matters: Enhancing Pre-trained News Recommendation Models with Robust User Dwell Time Injection
Ti-Reader: An End-to-End Network Model Based on Attention Mechanisms for Tibetan Machine Reading Comprehension
To Answer or Not to Answer? Improving Machine Reading Comprehension Model with Span-based Contrastive Learning
Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension
Topic Knowledge Acquisition and Utilization for Machine Reading Comprehension in Social Media Domain
TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
To Test Machine Comprehension, Start by Defining Comprehension
Towards a more Robust Evaluation for Conversational Question Answering
Towards AMR-BR: A SemBank for Brazilian Portuguese Language
Towards an Automatic Text Comprehension for the Arabic Question-Answering: Semantic and Logical Representation of Texts
Towards a Psychology of Machines: Large Language Models Predict Human Memory
Towards Broad-coverage Meaning Representation: The Case of Comparison Structures
Towards Building a Robust Knowledge Intensive Question Answering Model with Large Language Models
Towards Confident Machine Reading Comprehension
Towards Flow Graph Prediction of Open-Domain Procedural Texts
Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents
Towards Inference-Oriented Reading Comprehension: ParallelQA
Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction
Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine
Towards Machine Reading for Interventions from Humanitarian-Assistance Program Literature
Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text
Towards Multi-Modal Text-Image Retrieval to improve Human Reading
Towards Question Format Independent Numerical Reasoning: A Set of Prerequisite Tasks
Towards Robust Neural Retrieval Models with Synthetic Pre-Training
Towards Solving Multimodal Comprehension
To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning.
Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering
Training a Ranking Function for Open-Domain Question Answering
Transfer Learning Enhanced Single-choice Decision for Multi-choice Question Answering
Transferring Semantic Knowledge Into Language Encoders
TransformLLM: Adapting Large Language Models via LLM-Transformed Reading Comprehension Text
Transition-Based Chinese AMR Parsing
Transliteration Better than Translation? Answering Code-mixed Questions over a Knowledge Base
Trigger-free Event Detection via Derangement Reading Comprehension
TunBERT: Pretrained Contextualized Text Representation for Tunisian Dialect
Two-Turn Debate Doesn't Help Humans Answer Hard Reading Comprehension Questions
U3E: Unsupervised and Erasure-based Evidence Extraction for Machine Reading Comprehension
UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)
Uncertainty-Based Adaptive Learning for Reading Comprehension
Uncertainty Modeling for Machine Comprehension Systems using Efficient Bayesian Neural Networks
Undersensitivity in Neural Reading Comprehension
Understand before Answer: Improve Temporal Reading Comprehension via Precise Question Understanding
Understanding Attention in Machine Reading Comprehension
Understanding Dataset Design Choices for Multi-hop Reasoning
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
Understanding Procedural Text using Interactive Entity Networks
Understanding the Polarity of Events in the Biomedical Literature: Deep Learning vs. Linguistically-informed Methods
Undivided Attention: Are Intermediate Layers Necessary for BERT?
Page 20 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified
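
Several of the leaderboards above report F1 rather than accuracy. For extractive reading comprehension this is usually a token-overlap F1 between the predicted and gold answer strings; the sketch below follows that SQuAD-style computation but simplifies normalization to lowercasing and whitespace tokenization.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string.

    Simplified normalization: lowercase + whitespace tokenization; full
    SQuAD-style scoring also strips punctuation and articles.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the French capital Paris", "Paris"))  # 0.4
```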