SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.

Specific variants include textual machine reading comprehension and multi-modal machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; each category is described in the survey cited below.
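
To make the span-prediction setting concrete, here is a minimal sketch using the Hugging Face `transformers` question-answering pipeline. The checkpoint name is an assumption chosen for illustration; any extractive-QA checkpoint works the same way.

```python
# Minimal span-prediction sketch with the Hugging Face `transformers` pipeline.
# The checkpoint below is an assumption; any extractive-QA model would do.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension asks a model to answer a question about "
    "a given passage. In span prediction, the answer is a contiguous span "
    "of the passage itself."
)
question = "What form does the answer take in span prediction?"

result = qa(question=question, context=context)
# The pipeline returns the predicted span plus its character offsets and score.
print(result["answer"], result["start"], result["end"], result["score"])
```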

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
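
As a hedged illustration, one of these benchmarks (RACE) can be loaded through the Hugging Face `datasets` library; the `"all"` config combines the middle- and high-school subsets.

```python
# Sketch: loading the RACE benchmark with Hugging Face `datasets`.
# The "all" config merges the middle- and high-school portions.
from datasets import load_dataset

race = load_dataset("race", "all")  # splits: train / validation / test

# Each RACE example is multiple choice: a passage ("article"), a question,
# four candidate answers ("options"), and the gold answer letter (A-D).
example = race["test"][0]
print(example["article"][:200])
print(example["question"], example["options"], example["answer"])
```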

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 701–725 of 1760 papers

| Title | Status | Hype |
|-------|--------|------|
| OpenQA: Hybrid QA System Relying on Structured Knowledge Base as well as Non-structured Data | — | 0 |
| Multi-Row, Multi-Span Distant Supervision For Table+Text Question | — | 0 |
| Roof-Transformer: Divided and Joined Understanding with Knowledge Enhancement | — | 0 |
| Native Chinese Reader: A Dataset Towards Native-Level Chinese Machine Reading Comprehension | — | 0 |
| A Puzzle-Based Dataset for Natural Language Inference | Code | 0 |
| From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension | — | 0 |
| Zero-Shot Cross-Lingual Machine Reading Comprehension via Inter-sentence Dependency Graph | Code | 0 |
| TunBERT: Pretrained Contextualized Text Representation for Tunisian Dialect | — | 0 |
| Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction | — | 0 |
| Understanding Attention in Machine Reading Comprehension | — | 0 |
| Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask | — | 0 |
| Automatic Mining of Salient Events from Multiple Documents | — | 0 |
| Retrieval-guided Counterfactual Generation for QA | — | 0 |
| ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 0 |
| One General Teacher for Multi-Data Multi-Task: A New Knowledge Distillation Framework for Discourse Relation Analysis | — | 0 |
| Context-Paraphrase Enhanced Commonsense Question Answering | — | 0 |
| Models can use keywords to answer questions that human cannot | — | 0 |
| How Well Do Multi-hop Reading Comprehension Models Understand Date Information? | — | 0 |
| ViQA-COVID: COVID-19 Machine Reading Comprehension Dataset for Vietnamese | — | 0 |
| What Makes Machine Reading Comprehension Questions Difficult? Investigating Variation in Passage Sources and Question Types | — | 0 |
| EveMRC: A Two-stage Evidence Modeling For Multi-choice Machine Reading Comprehension | — | 0 |
| Unsupervised Open-Domain Question Answering with Higher Answerability | — | 0 |
| Calibration of Machine Reading Systems at Scale | — | 0 |
| Slot Filling for Biomedical Information Extraction | — | 0 |
| On the Robustness of Reading Comprehension Models to Entity Renaming | — | 0 |
Page 29 of 71

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Rational Reasoner / IDOL | Test | 80.6 | — | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | — | Unverified |
| 3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | — | Unverified |
| 4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | — | Unverified |
| 5 | Knowledge model | Test | 79.2 | — | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | — | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | — | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | — | Unverified |
| 9 | WWZ | Test | 69.7 | — | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ALBERT (ensemble) | Accuracy | 91.4 | — | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | — | Unverified |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | — | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | — | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | — | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | — | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | — | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | — | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | — | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Golden Transformer | Average F1 | 0.94 | — | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | — | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | — | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | — | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | — | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | — | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | — | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | — | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | — | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | — | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | — | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BERT | MSE | 0.05 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | — | Unverified |
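
Many of the F1 scores in the tables above are SQuAD-style token-overlap F1 between the predicted and gold answer strings. A minimal sketch, assuming whitespace tokenization only (official evaluation scripts additionally strip articles and punctuation):

```python
# SQuAD-style token-overlap F1 between a predicted and a gold answer.
# Simplified: whitespace tokenization and lowercasing only.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the Eiffel Tower", "Eiffel Tower"))  # 0.8
```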