SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.

Specific variants of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
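
To make the span-prediction category concrete, here is a minimal sketch using the Hugging Face `transformers` question-answering pipeline. The checkpoint name is illustrative; any extractive QA model fine-tuned on SQuAD-style data behaves the same way.

```python
# Span prediction: the model selects a contiguous span of the context
# as the answer. The checkpoint choice here is illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "Machine reading comprehension asks a model to answer a question "
    "about a given passage, often by selecting a span of the passage."
)
result = qa(
    question="What does the model select from the passage?",
    context=context,
)

# The pipeline returns the answer text plus its character offsets in
# the context and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```

The `start`/`end` offsets returned by the pipeline correspond exactly to the "answer is a span in the document" framing described above.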

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
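
Many of these benchmarks are mirrored on the Hugging Face hub. As a hedged sketch, one RACE example can be inspected as follows; the `"race"`/`"high"` identifiers and field names are assumptions based on the hub's copy of the dataset.

```python
# Inspect one multiple-choice RACE example via the `datasets` library.
# Dataset identifier and field names are assumed from the Hugging Face hub.
from datasets import load_dataset

race = load_dataset("race", "high", split="validation")
example = race[0]

print(example["article"][:200])  # the passage to read
print(example["question"])       # a multiple-choice question
print(example["options"])        # four candidate answers
print(example["answer"])         # gold label: "A", "B", "C", or "D"
```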

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 201–250 of 1760 papers

Title | Status | Hype
Dependency Parsing as MRC-based Span-Span Prediction | Code | 1
Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension | Code | 1
Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning | Code | 1
ReadBench: Measuring the Dense Text Visual Reading Ability of Vision-Language Models | Code | 1
Reading Wikipedia to Answer Open-Domain Questions | Code | 1
Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension | Code | 1
Benchmarking: Past, Present and Future | Code | 1
Relational Surrogate Loss Learning | Code | 1
Retrospective Reader for Machine Reading Comprehension | Code | 1
Revealing the Importance of Semantic Retrieval for Machine Reading at Scale | Code | 1
Benchmarking Robustness of Machine Reading Comprehension Models | Code | 1
A Unified MRC Framework for Named Entity Recognition | Code | 1
BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis | Code | 1
S^3HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering | Code | 1
Adversarial Training for Commonsense Inference | Code | 1
End-to-End Chinese Speaker Identification | Code | 1
Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension | Code | 1
Automated Scoring for Reading Comprehension via In-context BERT Tuning | Code | 1
EMT: Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading | Code | 1
Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models | Code | 1
ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler | Code | 1
Single-dataset Experts for Multi-dataset Question Answering | Code | 1
From LSAT: The Progress and Challenges of Complex Reasoning | Code | 1
Structural Characterization for Dialogue Disentanglement | Code | 1
EntQA: Entity Linking as Question Answering | Code | 1
Teaching Machine Comprehension with Compositional Explanations | Code | 1
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models | Code | 1
ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning | Code | 1
Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation | Code | 1
Introspective Distillation for Robust Question Answering | Code | 1
Evaluating Models' Local Decision Boundaries via Contrast Sets | Code | 1
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals | Code | 1
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks | Code | 1
Tracing Origins: Coreference-aware Machine Reading Comprehension | Code | 1
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension | Code | 1
Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4 | Code | 1
Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills | Code | 1
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 1
Listen, Attend and Spell | Code | 1
Optimizing Deeper Transformers on Small Datasets | Code | 1
What do Models Learn from Question Answering Datasets? | Code | 1
LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction | Code | 1
Words or Characters? Fine-grained Gating for Reading Comprehension | Code | 1
ExpMRC: Explainability Evaluation for Machine Reading Comprehension | Code | 1
FedQAS: Privacy-aware machine reading comprehension with federated learning | Code | 0
FAT ALBERT: Finding Answers in Large Texts using Semantic Similarity Attention Layer based on BERT | Code | 0
Annotating picture description task responses for content analysis | Code | 0
Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking | Code | 0
FastFusionNet: New State-of-the-Art for DAWNBench SQuAD | Code | 0
Fast Reading Comprehension with ConvNets | Code | 0
Page 5 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified
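
The F1 values reported above are typically SQuAD-style token-overlap F1 between a predicted answer and the gold answer. The sketch below shows the standard formulation; it is not the exact evaluation script behind any particular leaderboard, and official scripts usually also strip punctuation and articles before comparing.

```python
# SQuAD-style token-overlap F1 between a predicted and a gold answer.
# Standard formulation only; official scripts additionally normalize
# punctuation and articles before tokenizing.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a span in the document", "span in the document"))  # ~0.889
```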