SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document.

Specific task variants include textual machine reading comprehension and multi-modal machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
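
To make the span-prediction category concrete, here is a minimal sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name is an illustrative choice, not one of the leaderboard models listed below:

```python
# Minimal span-prediction sketch: the model selects a contiguous span of
# the context as the answer. The checkpoint is illustrative only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # assumed SQuAD-style extractive model
)

context = (
    "Machine reading comprehension asks a model to answer a question "
    "about a passage. In span prediction, the answer is a contiguous "
    "substring of the passage itself."
)
result = qa(question="What is the answer in span prediction?", context=context)

# The pipeline returns the predicted span text, its character offsets
# into the context, and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```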

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
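
As a starting point, one of the benchmarks named above (RACE) can be inspected with a few lines of Python. This is a hedged sketch assuming the Hugging Face datasets hub copy, with dataset id "race" and its "middle"/"high"/"all" configurations:

```python
# Hedged sketch of loading the RACE benchmark via the Hugging Face
# datasets hub; the dataset id and field names are assumptions about
# the hub copy of the dataset.
from datasets import load_dataset

race = load_dataset("race", "middle", split="validation")

example = race[0]
print(example["article"][:200])   # the passage
print(example["question"])        # the question (some are cloze-style with a "_" blank)
print(example["options"])         # four candidate answers
print(example["answer"])          # gold label: "A", "B", "C", or "D"
```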

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

[Figure omitted. Source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets]

Papers

Showing 1551–1600 of 1760 papers

Title | Status | Hype
A Span-Extraction Dataset for Chinese Machine Reading Comprehension | Code | 0
Answering Naturally: Factoid to Full length Answer Generation | Code | 0
BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task | Code | 0
Routing Networks and the Challenges of Modular and Compositional Computation | Code | 0
What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering | Code | 0
Cross-functional Analysis of Generalisation in Behavioural Learning | Code | 0
NoticIA: A Clickbait Article Summarization Dataset in Spanish | Code | 0
Medical device surveillance with electronic health records | Code | 0
Is the Understanding of Explicit Discourse Relations Required in Machine Reading Comprehension? | Code | 0
Iterative Alternating Neural Attention for Machine Reading | Code | 0
Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data | Code | 0
IUCM at SemEval-2018 Task 11: Similar-Topic Texts as a Comprehension Knowledge Source | Code | 0
Jack the Reader -- A Machine Reading Framework | Code | 0
Estimating Linguistic Complexity for Science Texts | Code | 0
EQuANt (Enhanced Question Answer Network) | Code | 0
JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions | Code | 0
JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions | Code | 0
Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer | Code | 0
NumNet: Machine Reading Comprehension with Numerical Reasoning | Code | 0
NUT-RC: Noisy User-generated Text-oriented Reading Comprehension | Code | 0
ODSQA: Open-domain Spoken Question Answering Dataset | Code | 0
OKGIT: Open Knowledge Graph Link Prediction with Implicit Types | Code | 0
Annotating picture description task responses for content analysis | Code | 0
Technical Question Answering across Tasks and Domains | Code | 0
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking | Code | 0
Bidirectional Attention for SQL Generation | Code | 0
Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space | Code | 0
Asking Again and Again: Exploring LLM Robustness to Repeated Questions | Code | 0
Templates of generic geographic information for answering where-questions | Code | 0
Joint Learning of Sentence Embeddings for Relevance and Entailment | Code | 0
On the Impact of Speech Recognition Errors in Passage Retrieval for Spoken Question Answering | Code | 0
Temporal Reasoning via Audio Question Answering | Code | 0
An Information-Theoretic Approach to Analyze NLP Classification Tasks | Code | 0
Katecheo: A Portable and Modular System for Multi-Topic Question Answering | Code | 0
KazQAD: Kazakh Open-Domain Question Answering Dataset | Code | 0
Bidirectional Attention Flow for Machine Comprehension | Code | 0
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data | Code | 0
Entity Tracking Improves Cloze-style Reading Comprehension | Code | 0
Coreference Reasoning in Machine Reading Comprehension | Code | 0
Entity-Relation Extraction as Multi-Turn Question Answering | Code | 0
Biased or Flawed? Mitigating Stereotypes in Generative Language Models by Addressing Task-Specific Flaws | Code | 0
Are you tough enough? Framework for Robustness Validation of Machine Comprehension Systems | Code | 0
On the Trade-off between Redundancy and Local Coherence in Summarization | Code | 0
On Understanding the Relation between Expert Annotations of Text Readability and Target Reader Comprehension | Code | 0
Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension | Code | 0
XRJL-HKUST at SemEval-2021 Task 4: WordNet-Enhanced Dual Multi-head Co-Attention for Reading Comprehension of Abstract Meaning | Code | 0
Knowing-how & Knowing-that: A New Task for Machine Comprehension of User Manuals | Code | 0
English Machine Reading Comprehension Datasets: A Survey | Code | 0
SciDQA: A Deep Reading Comprehension Dataset over Scientific Papers | Code | 0
Coreference-aware Double-channel Attention Network for Multi-party Dialogue Reading Comprehension | Code | 0

Page 32 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
4 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified
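
Several of the tables above report F1 over extracted answer text. For reference, here is a minimal, self-contained sketch of the common SQuAD-style token-overlap F1; benchmarks differ in their normalization details, so treat this as the baseline recipe rather than this site's verification code:

```python
# SQuAD-style token-overlap F1 between a predicted answer string and a
# gold reference. Normalization (lowercasing, punctuation and article
# removal) follows the common recipe; individual benchmarks may vary.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def span_f1(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    # Multiset intersection counts each shared token at most min(count) times.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(span_f1("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
```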