SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is asked about a paragraph or document, and the answer is often a span of text within that document.
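
To make span-style answers concrete, here is a minimal sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name is an assumption, and any SQuAD-style extractive QA model would behave the same way.

```python
from transformers import pipeline

# Minimal sketch of span-prediction QA. The checkpoint name is an assumption;
# any extractive (SQuAD-style) QA model works the same way.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension systems answer questions about a given passage. "
    "In span-prediction datasets, the answer is a contiguous span of the passage."
)
question = "What is the answer in span-prediction datasets?"

result = qa(question=question, context=context)

# The pipeline returns the extracted span, its character offsets in the context,
# and a confidence score.
print(result["answer"])                # e.g. "a contiguous span of the passage"
print(result["start"], result["end"])  # character offsets of the span
print(round(result["score"], 3))       # model confidence
```

Under the hood, extractive models of this kind score candidate start and end positions over the passage tokens and return the highest-scoring span.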

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; the survey cited below describes each category in detail.
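
To illustrate the distinction between the four categories, here are hypothetical toy instances in each format; the field names are illustrative and do not follow any particular dataset's schema.

```python
# Hypothetical toy instances of the four common MRC formats.
# Field names are illustrative, not taken from any specific dataset.

cloze = {                            # cloze style: fill in the blank
    "passage": "The capital of France is Paris.",
    "query": "The capital of France is ___.",
    "answer": "Paris",
}

multiple_choice = {                  # multiple choice: pick one option
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "options": ["London", "Paris", "Berlin", "Madrid"],
    "answer_index": 1,
}

span_prediction = {                  # span prediction: answer is a passage span
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "answer_span": (25, 30),         # passage[25:30] == "Paris"
}

free_form = {                        # free-form answer: answer is generated text
    "passage": "The capital of France is Paris.",
    "question": "Why is Paris significant?",
    "answer": "It is the capital of France.",
}
```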

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 701–750 of 1760 papers

Title | Status | Hype
Ensemble Learning-Based Approach for Improving Generalization Capability of Machine Reading Comprehension Systems | — | 0
A Search Engine for Scientific Publications: a Cybersecurity Case Study | — | 0
Machine Reading of Hypotheses for Organizational Research Reviews and Pre-trained Models via R Shiny App for Non-Programmers | — | 0
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information | Code | 1
Zero-Shot Estimation of Base Models' Weights in Ensemble of Machine Reading Comprehension Systems for Robust Generalization | — | 0
Unsupervised Technique To Conversational Machine Reading | — | 0
Analyzing Research Trends in Inorganic Materials Literature Using NLP | Code | 0
Answering Chinese Elementary School Social Study Multiple Choice Questions | — | 0
OKGIT: Open Knowledge Graph Link Prediction with Implicit Types | Code | 0
PALRACE: Reading Comprehension Dataset with Human Data and Labeled Rationales | — | 0
Open Temporal Relation Extraction for Question Answering | — | 0
What is Missing in Existing Multi-hop Datasets? Toward Deeper Multi-hop Reasoning Task | — | 0
Adversarial Training for Machine Reading Comprehension with Virtual Embeddings | — | 0
Cheap and Good? Simple and Effective Data Augmentation for Low Resource Machine Reading | Code | 0
Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer | Code | 0
Why Machine Reading Comprehension Models Learn Shortcuts? | Code | 1
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? | Code | 0
Knowing More About Questions Can Help: Improving Calibration in Question Answering | Code | 1
Towards Multi-Modal Text-Image Retrieval to improve Human Reading | — | 0
Does Structure Matter? Encoding Documents for Machine Reading Comprehension | — | 0
THG: Transformer with Hyperbolic Geometry | — | 0
Looking Beyond Sentence-Level Natural Language Inference for Question Answering and Text Summarization | — | 0
RECONSIDER: Improved Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering | — | 0
A Multilingual Modeling Method for Span-Extraction Reading Comprehension | — | 0
SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning | Code | 1
NEUer at SemEval-2021 Task 4: Complete Summary Representation by Filling Answers into Question for Matching Reading Comprehension | — | 0
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models | — | 0
Fact-driven Logical Reasoning for Machine Reading Comprehension | Code | 1
KLUE: Korean Language Understanding Evaluation | Code | 1
Sentence Extraction-Based Machine Reading Comprehension for Vietnamese | — | 0
Question-Driven Span Labeling Model for Aspect–Opinion Pair Extraction | — | 0
Dependency Parsing as MRC-based Span-Span Prediction | Code | 1
Predicting Text Readability from Scrolling Interactions | Code | 1
REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training | Code | 0
ExpMRC: Explainability Evaluation for Machine Reading Comprehension | Code | 1
Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents | Code | 1
Improving Cross-Lingual Reading Comprehension with Self-Training | — | 0
Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text | Code | 1
NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer | — | 0
VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension | — | 0
ExcavatorCovid: Extracting Events and Relations from Text Corpora for Temporal and Causal Analysis for COVID-19 | — | 0
Conversational Machine Reading Comprehension for Vietnamese Healthcare Texts | Code | 0
Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | — | 0
MRCBert: A Machine Reading Comprehension Approach for Unsupervised Summarization | Code | 0
DADgraph: A Discourse-aware Dialogue Graph Neural Network for Multiparty Dialogue Machine Reading Comprehension | — | 0
GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval | — | 0
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1
BERT-CoQAC: BERT-based Conversational Question Answering in Context | — | 0
Towards Solving Multimodal Comprehension | — | 0
Learning with Instance Bundles for Reading Comprehension | — | 0
Page 15 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | — | Unverified
2 | AMR-LE-Ensemble | Test | 80 | — | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | — | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | — | Unverified
5 | Knowledge model | Test | 79.2 | — | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | — | Unverified
7 | LReasoner ensemble | Test | 76.1 | — | Unverified
8 | ELECTRA and ALBERT | Test | 71 | — | Unverified
9 | WWZ | Test | 69.7 | — | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | — | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | — | Unverified
3 | ALBERTxxlarge+DUMA (ensemble) | Accuracy | 89.8 | — | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | — | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | — | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | — | Unverified
7 | B10-10-10 | Accuracy | 85.7 | — | Unverified
8 | RoBERTa | Accuracy | 83.2 | — | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | — | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | — | Unverified
2 | MT5 Large | Average F1 | 0.84 | — | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | — | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | — | Unverified
5 | Human Benchmark | Average F1 | 0.81 | — | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | — | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | — | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | — | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | — | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | — | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | — | Unverified
3 | BiDAF | Overall: F1 | 28.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | — | Unverified