SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span of that document.

Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension can be divided into four categories: cloze style, multiple choice, span prediction, and free-form answer. Read more about each category here.
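
As an illustration of the span-prediction category, here is a minimal sketch that runs an extractive QA model over a short passage with the Hugging Face transformers question-answering pipeline. The specific checkpoint is only an example assumption; any reader fine-tuned for span extraction (e.g. on SQuAD) would be used the same way.

```python
# Minimal span-prediction sketch using the Hugging Face `transformers`
# question-answering pipeline. The checkpoint below is an example choice,
# not a recommendation; any extractive QA model would do.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "Machine reading comprehension systems answer questions about a given "
    "passage. In the span-prediction setting, the answer must be a "
    "contiguous span of the passage rather than free-form text."
)
question = "What must the answer be in the span-prediction setting?"

result = qa(question=question, context=context)
# The pipeline returns the predicted span plus its character offsets and a
# confidence score: {'answer': ..., 'start': ..., 'end': ..., 'score': ...}.
print(result["answer"], result["start"], result["end"], result["score"])
```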

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
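
For example, RACE and ReCoRD can be loaded with the Hugging Face datasets library, as in the sketch below; the dataset identifiers and configuration names are assumptions about how these benchmarks are hosted on the Hub and may differ.

```python
# Sketch of loading two reading comprehension benchmarks with `datasets`.
# The dataset IDs and configs ("race"/"all", "super_glue"/"record") are
# assumptions about their Hugging Face Hub names.
from datasets import load_dataset

race = load_dataset("race", "all")             # multiple-choice exam questions
record = load_dataset("super_glue", "record")  # cloze-style ReCoRD (SuperGLUE)

print(race["train"][0]["article"][:200])
print(record["train"][0]["query"])
```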

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1501–1550 of 1760 papers

Title | Status | Hype
Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension | Code | 0
Symmetric Regularization based BERT for Pair-wise Semantic Reasoning | Code | 0
Dataset for the First Evaluation on Chinese Machine Reading Comprehension | Code | 0
Synonym Knowledge Enhanced Reader for Chinese Idiom Reading Comprehension | Code | 0
Rethinking Label Smoothing on Multi-hop Question Answering | Code | 0
Multi-View Graph Representation Learning for Answering Hybrid Numerical Reasoning Question | Code | 0
Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank | Code | 0
Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models | Code | 0
NE-Table: A Neural key-value table for Named Entities | Code | 0
Evidence Sentence Extraction for Machine Reading Comprehension | Code | 0
Named Entity Recognition via Machine Reading Comprehension: A Multi-Task Learning Approach | Code | 0
Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI | Code | 0
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering | Code | 0
Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation | Code | 0
Using LLMs in Generating Design Rationale for Software Architecture Decisions | Code | 0
An Understanding-Oriented Robust Machine Reading Comprehension Model | Code | 0
Natural Response Generation for Chinese Reading Comprehension | Code | 0
Negation in Cognitive Reasoning | Code | 0
Data Augmentation for Biomedical Factoid Question Answering | Code | 0
What Makes Reading Comprehension Questions Easier? | Code | 0
Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension | Code | 0
Using Natural Language Relations between Answer Choices for Machine Comprehension | Code | 0
BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels | Code | 0
Neural Arabic Question Answering | Code | 0
BioRead: A New Dataset for Biomedical Reading Comprehension | Code | 0
Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts | Code | 0
Attention-over-Attention Neural Networks for Reading Comprehension | Code | 0
Instance Regularization for Discriminative Language Model Pre-training | Code | 0
Instructive Dialogue Summarization with Query Aggregations | Code | 0
Review Conversational Reading Comprehension | Code | 0
XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking | Code | 0
BIOMRC: A Dataset for Biomedical Machine Reading Comprehension | Code | 0
RoBIn: A Transformer-Based Model For Risk Of Bias Inference With Machine Reading Comprehension | Code | 0
Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension | Code | 0
Evaluating Large Language Models on Controlled Generation Tasks | Code | 0
Evaluating Commonsense in Pre-trained Language Models | Code | 0
Interactive Machine Comprehension with Information Seeking Agents | Code | 0
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? | Code | 0
Tackling Graphical NLP problems with Graph Recurrent Networks | Code | 0
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition | Code | 0
A Simple and Effective Model for Answering Multi-span Questions | Code | 0
What Makes Reading Comprehension Questions Difficult? | Code | 0
Cross-Lingual Machine Reading Comprehension | Code | 0
Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation | Code | 0
Task Transfer and Domain Adaptation for Zero-Shot Question Answering | Code | 0
Interpreting Themes from Educational Stories | Code | 0
ET5: A Novel End-to-end Framework for Conversational Machine Reading Comprehension | Code | 0
A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | Code | 0
NLPContributions: An Annotation Scheme for Machine Reading of Scholarly Contributions in Natural Language Processing Literature | Code | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
Page 31 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified