SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is asked about a paragraph or document, and the answer is often a span of text within that document.

Specific variants of the task include textual machine reading comprehension and multi-modal machine reading comprehension, among others. In the literature, machine reading comprehension is typically divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
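
For concreteness, here is a minimal sketch of the span-prediction setting using the Hugging Face transformers question-answering pipeline; the checkpoint name and example text are illustrative choices, not something this page prescribes.

```python
# Span prediction: the model extracts a contiguous span of the passage as the answer.
# Minimal sketch with the Hugging Face QA pipeline; checkpoint and text are
# illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension systems answer questions about a passage. "
    "In span prediction, the answer is a contiguous span of the passage itself."
)
result = qa(question="What is the answer in span prediction?", context=context)

# The pipeline returns the extracted span together with its character offsets
# and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```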

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
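
These benchmarks are available through common dataset hubs. As a hedged sketch, RACE can be loaded with the Hugging Face datasets library; the "race" dataset id, the "all" configuration, and the field names follow the public dataset card and should be treated as assumptions if your mirror differs.

```python
# Loading the RACE benchmark with the Hugging Face datasets library.
# Dataset id, configuration, and field names follow the public dataset card.
from datasets import load_dataset

race = load_dataset("race", "all", split="test")
example = race[0]

# RACE is multiple-choice reading comprehension: each example pairs an article
# with a question, four candidate options, and a gold answer label (A-D).
print(example["article"][:200])
print(example["question"])
print(example["options"], example["answer"])
```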

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1501–1525 of 1760 papers

Title | Status | Hype
Evaluation of Automatically Generated Pronoun Reference Questions | | 0
Simplifying metaphorical language for young readers: A corpus study on news text | | 0
Splitting Complex English Sentences | | 0
Investigating neural architectures for short answer scoring | | 0
Continuous fluency tracking and the challenges of varying text complexity | | 0
Question Generation for Question Answering | | 0
Reasoning with Heterogeneous Knowledge for Commonsense Machine Comprehension | | 0
World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions | | 0
A Question Answering Approach for Emotion Cause Extraction | | 0
Story Comprehension for Predicting What Happens Next | | 0
Accurate Supervised and Semi-Supervised Machine Reading for Long Documents | | 0
Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification | | 0
Dict2vec : Learning Word Embeddings using Lexical Dictionaries | Code | 0
Identifying Where to Focus in Reading Comprehension for Neural Question Generation | | 0
Document-Level Multi-Aspect Sentiment Classification as Machine Comprehension | | 0
Getting the Most out of AMR Parsing | | 0
Learning what to read: Focused machine reading | | 0
R^3: Reinforced Reader-Ranker for Open-Domain Question Answering | Code | 0
A Question Answering Approach to Emotion Cause Extraction | | 0
Know-Center at SemEval-2017 Task 10: Sequence Classification with the CODE Annotator | | 0
MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension | | 0
Question Dependent Recurrent Entity Network for Question Answering | Code | 0
Adversarial Examples for Evaluating Reading Comprehension Systems | Code | 0
Are You Smarter Than a Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension | | 0
Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | | Unverified
2 | AMR-LE-Ensemble | Test | 80 | | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | | Unverified
5 | Knowledge model | Test | 79.2 | | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | | Unverified
7 | LReasoner ensemble | Test | 76.1 | | Unverified
8 | ELECTRA and ALBERT | Test | 71 | | Unverified
9 | WWZ | Test | 69.7 | | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (ensemble) | Accuracy | 91.4 | | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | | Unverified
7 | B10-10-10 | Accuracy | 85.7 | | Unverified
8 | RoBERTa | Accuracy | 83.2 | | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | | Unverified
2 | MT5 Large | Average F1 | 0.84 | | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | | Unverified
5 | Human Benchmark | Average F1 | 0.81 | | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | | Unverified
3 | BiDAF | Overall: F1 | 28.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | | Unverified
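
Several of the leaderboards above report span-overlap F1. As a rough sketch, simplified relative to the official SQuAD evaluation script (which additionally strips punctuation and articles before comparing), token-level F1 between a predicted and a gold answer can be computed as follows:

```python
# Token-overlap F1 for span answers, simplified from the SQuAD-style metric:
# precision and recall are computed over bag-of-token overlap.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a span in the document", "span in the document"))  # ~0.889
```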