SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span of that document.

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension can be divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
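
As a minimal sketch of the span-prediction setting described above, the snippet below uses the Hugging Face transformers question-answering pipeline. The checkpoint and example text are illustrative choices, not models from the leaderboards on this page.

```python
# Span-prediction reading comprehension with an off-the-shelf extractive QA
# pipeline. The checkpoint is an arbitrary SQuAD-finetuned model chosen for
# illustration; any extractive QA checkpoint works the same way.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("Machine reading comprehension asks a model to answer a question "
           "about a given passage, often by extracting a span of the passage.")
question = "How does the model often answer the question?"

result = qa(question=question, context=context)
# The pipeline returns the predicted span, its character offsets in the
# context, and a confidence score.
print(result["answer"])                # predicted answer span
print(result["start"], result["end"])  # character offsets of the span
print(result["score"])                 # model confidence
```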

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
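
To make the benchmark setting concrete, here is a sketch of loading one of these datasets (RACE) with the Hugging Face datasets library; the config, split, and field names follow the dataset's hub card and are assumptions rather than anything specified on this page.

```python
# Load the RACE benchmark (one of the datasets named above) and inspect a
# test example. RACE is a multiple-choice reading comprehension dataset, so
# each example pairs a passage with a question and four answer options.
from datasets import load_dataset

race = load_dataset("race", "all", split="test")
example = race[0]

print(example["article"][:200])  # the passage the question is about
print(example["question"])       # the question stem
print(example["options"])        # four candidate answers
print(example["answer"])         # gold label: "A", "B", "C", or "D"
```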

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 401–425 of 1760 papers

Title | Status | Hype
Chaining Event Spans for Temporal Relation Grounding | Code | 0
Are you tough enough? Framework for Robustness Validation of Machine Comprehension Systems | Code | 0
Treatment effects without multicollinearity? Temporal order and the Gram-Schmidt process in causal inference | Code | 0
Evaluating Commonsense in Pre-trained Language Models | Code | 0
Evaluating Large Language Models on Controlled Generation Tasks | Code | 0
Learning Graph Representation of Agent Diffusers | Code | 0
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering | Code | 0
Evidence Sentence Extraction for Machine Reading Comprehension | Code | 0
FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and Unanswerable Questions for Enhanced Long-Context LLM Extraction | Code | 0
Attention-over-Attention Neural Networks for Reading Comprehension | Code | 0
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader | Code | 0
Attentive Memory Networks: Efficient Machine Reading for Conversational Search | Code | 0
JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions | Code | 0
Lexical Generalization Improves with Larger Models and Longer Training | Code | 0
Question Answering as an Automatic Evaluation Metric for News Article Summarization | Code | 0
Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models | Code | 0
Capturing Greater Context for Question Generation | Code | 0
Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop? | Code | 0
LogiQA 2.0—An Improved Dataset for Logical Reasoning in Natural Language Understanding | Code | 0
A Reading Comprehension Corpus for Machine Translation Evaluation | Code | 0
CoQA: A Conversational Question Answering Challenge | Code | 0
Coreference-aware Double-channel Attention Network for Multi-party Dialogue Reading Comprehension | Code | 0
Coreference Reasoning in Machine Reading Comprehension | Code | 0
Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap | Code | 0
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall F1 | 62.7 | - | Unverified
3 | BiDAF | Overall F1 | 28.5 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified