SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
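For the span prediction category, the standard setup is to extract the answer as a substring of the passage. The sketch below illustrates this with the Hugging Face transformers question-answering pipeline; the checkpoint name is an assumption, and any SQuAD-style extractive model behaves the same way.

```python
# Minimal sketch of the "span prediction" setting using the transformers
# question-answering pipeline. The checkpoint below is an assumption; any
# extractive (SQuAD-style) QA model can be substituted.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # assumed checkpoint
)

context = (
    "Machine reading comprehension asks a model to answer a question about "
    "a given passage. In span prediction, the answer is a substring of the passage."
)
question = "What is the answer in span prediction?"

result = qa(question=question, context=context)
# The pipeline returns the predicted span text, its character offsets, and a score.
print(result["answer"], result["start"], result["end"], result["score"])
```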

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
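Several of these benchmarks are mirrored on the Hugging Face Hub and can be loaded with the datasets library. The identifiers used below ("race", "super_glue"/"record") are assumptions and should be checked against the Hub before use.

```python
# Sketch: loading two of the benchmarks named above via the datasets library.
# Hub identifiers are assumptions; newer library versions may require
# trust_remote_code=True or different canonical dataset names.
from datasets import load_dataset

race = load_dataset("race", "high")            # RACE: multiple-choice RC over exam passages
record = load_dataset("super_glue", "record")  # ReCoRD: cloze-style RC from SuperGLUE

print(race["train"][0].keys())
print(record["train"][0].keys())
```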

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing papers 501–550 of 1760

Title | Status | Hype
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data | Code | 0
FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension | Code | 0
Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers | Code | 0
Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension | Code | 0
Synonym Knowledge Enhanced Reader for Chinese Idiom Reading Comprehension | Code | 0
FAT ALBERT: Finding Answers in Large Texts using Semantic Similarity Attention Layer based on BERT | Code | 0
Fast Reading Comprehension with ConvNets | Code | 0
D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension | Code | 0
FedQAS: Privacy-aware machine reading comprehension with federated learning | Code | 0
An Understanding-Oriented Robust Machine Reading Comprehension Model | Code | 0
FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and Unanswerable Questions for Enhanced Long-Context LLM Extraction | Code | 0
FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages | Code | 0
A Causal View of Entity Bias in (Large) Language Models | Code | 0
Extract, Integrate, Compete: Towards Verification Style Reading Comprehension | Code | 0
FastFusionNet: New State-of-the-Art for DAWNBench SQuAD | Code | 0
BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels | Code | 0
BioRead: A New Dataset for Biomedical Reading Comprehension | Code | 0
Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension | Code | 0
BIOMRC: A Dataset for Biomedical Machine Reading Comprehension | Code | 0
Exploiting Explicit Paths for Multi-hop Reading Comprehension | Code | 0
Exploiting Word Semantics to Enrich Character Representations of Chinese Pre-trained Models | Code | 0
Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions | Code | 0
EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension | Code | 0
Evidence Sentence Extraction for Machine Reading Comprehension | Code | 0
Explaining Interactions Between Text Spans | Code | 0
DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications | Code | 0
BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task | Code | 0
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications | Code | 0
Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation | Code | 0
Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study | Code | 0
Evaluating Commonsense in Pre-trained Language Models | Code | 0
Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer | Code | 0
ET5: A Novel End-to-end Framework for Conversational Machine Reading Comprehension | Code | 0
Answering Naturally: Factoid to Full length Answer Generation | Code | 0
Dual Ask-Answer Network for Machine Reading Comprehension | Code | 0
DTW at Qur’an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain | Code | 0
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking | Code | 0
Evaluating Large Language Models on Controlled Generation Tasks | Code | 0
DTW at Qur'an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain | Code | 0
Estimating Linguistic Complexity for Science Texts | Code | 0
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? | Code | 0
DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension | Code | 0
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data | Code | 0
EQuANt (Enhanced Question Answer Network) | Code | 0
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs | Code | 0
Dynamic Chunking and Selection for Reading Comprehension of Ultra-Long Context in Large Language Models | Code | 0
DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension | Code | 0
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering | Code | 0
Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language | Code | 0
Bidirectional Attention for SQL Generation | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | | Unverified
2 | AMR-LE-Ensemble | Test | 80 | | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | | Unverified
5 | Knowledge model | Test | 79.2 | | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | | Unverified
7 | LReasoner ensemble | Test | 76.1 | | Unverified
8 | ELECTRA and ALBERT | Test | 71 | | Unverified
9 | WWZ | Test | 69.7 | | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | | Unverified
3 | ALBERTxxlarge+DUMA (ensemble) | Accuracy | 89.8 | | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | | Unverified
7 | B10-10-10 | Accuracy | 85.7 | | Unverified
8 | RoBERTa | Accuracy | 83.2 | | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | | Unverified
2 | MT5 Large | Average F1 | 0.84 | | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | | Unverified
5 | Human Benchmark | Average F1 | 0.81 | | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | | Unverified
3 | BiDAF | Overall: F1 | 28.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | | Unverified
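Several of the tables above report F1-style metrics (Average F1, Overall F1, Answer F1). For extractive reading comprehension, these are typically computed as token-level overlap between the predicted span and the gold answer, in the style of the SQuAD evaluation script. The sketch below assumes that convention; the exact normalization used by each leaderboard may differ.

```python
# Sketch of SQuAD-style token-overlap "Answer F1" between a predicted span and
# a gold answer. Whether each leaderboard above uses exactly this normalization
# is an assumption; consult the official evaluation scripts for the details.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def answer_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer string and a gold answer string."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
```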