SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span within that document.
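
To make the span-prediction framing concrete, the minimal sketch below runs an off-the-shelf extractive QA model over a short passage using the Hugging Face `transformers` pipeline. The checkpoint name and the example passage are illustrative choices, not part of this page's leaderboards.

```python
# Minimal sketch of span-style reading comprehension with the Hugging Face
# `transformers` question-answering pipeline. The checkpoint below is one
# illustrative choice among many.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "Machine reading comprehension asks a model to answer questions "
    "about a given passage. In span-prediction datasets, the answer "
    "is a contiguous substring of the passage."
)

result = qa(
    question="What is the answer in span-prediction datasets?",
    context=context,
)

# The pipeline returns the extracted span together with its character
# offsets into the context and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```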

Specific variants of the task include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
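
As a toy illustration of the four categories, here is one invented instance of each; the passage, questions, and answers are made up purely to show the shape of the data.

```python
# Hypothetical mini-examples illustrating the four common MRC formats.
# All content here is invented for illustration only.
passage = "Marie Curie won the Nobel Prize in Physics in 1903."

cloze = {
    "question": "Marie Curie won the Nobel Prize in ___ in 1903.",
    "answer": "Physics",  # fill in the blank
}

multiple_choice = {
    "question": "Which prize did Marie Curie win in 1903?",
    "options": ["Nobel Prize in Physics", "Fields Medal", "Turing Award"],
    "answer": 0,  # index of the correct option
}

span_prediction = {
    "question": "When did Marie Curie win the Nobel Prize?",
    "answer_span": (46, 50),  # passage[46:50] == "1903"
}

free_form = {
    "question": "What is Marie Curie known for?",
    "answer": "She was a pioneering physicist and chemist.",  # free text
}
```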

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
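
For local experimentation, benchmarks like these can typically be pulled from the Hugging Face Hub. A minimal sketch, assuming the `race` dataset ID with its `middle` configuration (dataset IDs and field names may vary by mirror):

```python
# Load the RACE benchmark via the `datasets` library; "race" with the
# "middle"/"high"/"all" configs is one commonly used Hub ID.
from datasets import load_dataset

race = load_dataset("race", "middle")  # splits: train / validation / test

example = race["test"][0]
print(example["article"][:200])        # the passage
print(example["question"])             # the question about the passage
print(example["options"])              # four answer candidates
print(example["answer"])               # gold option letter, e.g. "A"
```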

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1076–1100 of 1760 papers

| Title | Status | Hype |
|---|---|---|
| Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation | | 0 |
| Focus Annotation of Task-based Data: A Comparison of Expert and Crowd-Sourced Annotation in a Reading Comprehension Corpus | | 0 |
| ForceReader: a BERT-based Interactive Machine Reading Comprehension Model with Attention Separation | | 0 |
| ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data | | 0 |
| FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection | | 0 |
| FQuAD2.0: French Question Answering and knowing that you know nothing | | 0 |
| FQuAD2.0: French Question Answering and Learning When You Don’t Know | | 0 |
| FQuAD: French Question Answering Dataset | | 0 |
| FriendsQA: Open-Domain Question Answering on TV Show Transcripts | | 0 |
| From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension | | 0 |
| From Light to Rich ERE: Annotation of Entities, Relations, and Events | | 0 |
| Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples | | 0 |
| G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents | | 0 |
| GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions | | 0 |
| Gated Self-Matching Networks for Reading Comprehension and Question Answering | | 0 |
| Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability | | 0 |
| General Embedding vs. Task-Specific Embedding: A Comparative Approach to Enhancing NLP Performance | | 0 |
| Generalizing Question Answering System with Pre-trained Language Model Fine-tuning | | 0 |
| Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution | | 0 |
| Generating Diagnostic Multiple Choice Comprehension Cloze Questions | | 0 |
| Generating Feedback for English Foreign Language Exercises | | 0 |
| Generating Questions and Multiple-Choice Answers using Semantic Analysis of Texts | | 0 |
| Generating Questions for Reading Comprehension using Coherence Relations | | 0 |
| Generating Training Data for Semantic Role Labeling based on Label Transfer from Linked Lexical Resources | | 0 |
| Generative Large Language Models Are All-purpose Text Analytics Engines: Text-to-text Learning Is All Your Need | | 0 |
Page 44 of 71

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Rational Reasoner / IDOL | Test | 80.6 | | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | | Unverified |
| 3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | | Unverified |
| 4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | | Unverified |
| 5 | Knowledge model | Test | 79.2 | | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | | Unverified |
| 9 | WWZ | Test | 69.7 | | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ALBERT (Ensemble) | Accuracy | 91.4 | | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | | Unverified |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Golden Transformer | Average F1 | 0.94 | | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BERT | MSE | 0.05 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | | Unverified |
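
The accuracy entries above are simply the fraction of questions answered correctly, while most of the F1 entries are SQuAD-style token-overlap scores between the predicted and gold answer strings. The sketch below shows the usual token-overlap computation; the official evaluation script for each benchmark may differ in normalization details and multi-reference handling.

```python
# Sketch of SQuAD-style token-overlap F1 for span answers. Official
# benchmark scripts may normalize text or aggregate over multiple gold
# answers differently.
import re
import string
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase, strip punctuation and English articles, split on whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()

def span_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted answer and a gold answer."""
    pred_tokens, gold_tokens = normalize(prediction), normalize(gold)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: precision 1.0, recall 0.6 -> F1 = 0.75
print(span_f1("the 1903 Nobel Prize", "Nobel Prize in Physics, 1903"))
```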