SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span of that document.

Specific variants of the task include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
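As an illustration of the span-prediction category, the sketch below runs an extractive question-answering model over a short passage. It is a minimal sketch assuming the Hugging Face transformers library and one publicly available SQuAD-finetuned checkpoint; any extractive-QA checkpoint would behave the same way.

```python
# Minimal span-prediction sketch: the model must return a contiguous
# span of the context as its answer. Checkpoint choice is an assumption;
# substitute any extractive-QA model.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Machine reading comprehension tasks ask a model to answer a question "
    "about a given passage. In span prediction, the answer must be a "
    "contiguous span of the passage itself."
)
result = qa(question="What must the answer be in span prediction?", context=context)

# The pipeline returns the predicted span text plus its character
# offsets into the context and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```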

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
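For example, RACE (a multiple-choice benchmark from the list above) can be loaded in a few lines. This is a minimal sketch assuming the dataset is published on the Hugging Face hub under the ID "race" with its usual configs and field names.

```python
# Minimal sketch, assuming the RACE benchmark is available via the
# Hugging Face `datasets` library under the ID "race"
# (configs: "high", "middle", "all"). Each example pairs an article
# with a question, four options, and a letter answer key.
from datasets import load_dataset

race = load_dataset("race", "high")
ex = race["train"][0]
print(ex["article"][:200])                          # the passage
print(ex["question"], ex["options"], ex["answer"])  # answer is "A".."D"
```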

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1551–1600 of 1760 papers

Title | Status | Hype
Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation | — | 0
EveMRC: A Two-stage Evidence Modeling For Multi-choice Machine Reading Comprehension | — | 0
Event Detection via Derangement Reading Comprehension | — | 0
Event Extraction as Machine Reading Comprehension | — | 0
Event Extraction as Multi-turn Question Answering | — | 0
Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text | — | 0
ExcavatorCovid: Extracting Events and Relations from Text Corpora for Temporal and Causal Analysis for COVID-19 | — | 0
LLM-aided explanations of EDA synthesis errors | — | 0
Explanation Generation for a Math Word Problem Solver | — | 0
Explicit Alignment and Many-to-many Entailment Based Reasoning for Conversational Machine Reading | — | 0
Exploiting Multiple Sources for Open-Domain Hypernym Discovery | — | 0
Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension | — | 0
Exploring Autonomous Agents through the Lens of Large Language Models: A Review | — | 0
Exploring Gap Filling as a Cheaper Alternative to Reading Comprehension Questionnaires when Evaluating Machine Translation for Gisting | — | 0
Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks | — | 0
Explicit Utilization of General Knowledge in Machine Reading Comprehension | — | 0
EXPLORING NEURAL ARCHITECTURE SEARCH FOR LANGUAGE TASKS | — | 0
Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context | — | 0
Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering | — | 0
Exploring Semantic Properties of Sentence Embeddings | — | 0
Exploring the BERT Cross-Lingual Transferability: a Case Study in Reading Comprehension | — | 0
Exploring the Intersection of Short Answer Assessment, Authorship Attribution, and Plagiarism Detection | — | 0
Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey | — | 0
Exploring the Potential of Large Language Models for Estimating the Reading Comprehension Question Difficulty | — | 0
Extracting Structured Scholarly Information from the Machine Translation Literature | — | 0
Eye Tracking as a Tool for Machine Translation Error Analysis | — | 0
FabricQA-Extractor: A Question Answering System to Extract Information from Documents using Natural Language Questions | — | 0
Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training | — | 0
FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning | — | 0
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks | — | 0
Feature-Rich Two-Stage Logistic Regression for Monolingual Alignment | — | 0
Feeding What You Need by Understanding What You Learned | — | 0
Fewer Truncations Improve Language Modeling | — | 0
Few-shot Mining of Naturally Occurring Inputs and Outputs | — | 0
Few-shot Policy (de)composition in Conversational Question Answering | — | 0
Filling a Knowledge Graph with a Crowd | — | 0
Focus Annotation in Reading Comprehension Data | — | 0
Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation | — | 0
Focus Annotation of Task-based Data: A Comparison of Expert and Crowd-Sourced Annotation in a Reading Comprehension Corpus | — | 0
ForceReader: a BERT-based Interactive Machine Reading Comprehension Model with Attention Separation | — | 0
ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data | — | 0
FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection | — | 0
FQuAD2.0: French Question Answering and knowing that you know nothing | — | 0
FQuAD2.0: French Question Answering and Learning When You Don’t Know | — | 0
FQuAD: French Question Answering Dataset | — | 0
FriendsQA: Open-Domain Question Answering on TV Show Transcripts | — | 0
From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension | — | 0
From Light to Rich ERE: Annotation of Entities, Relations, and Events | — | 0
Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples | — | 0
G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents | — | 0
Page 32 of 36

Benchmark Results

Each table below is a separate leaderboard. Claimed figures are taken from the papers; the Verified column is empty because no result has yet been verified.

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | — | Unverified
2 | AMR-LE-Ensemble | Test | 80 | — | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | — | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | — | Unverified
5 | Knowledge model | Test | 79.2 | — | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | — | Unverified
7 | LReasoner ensemble | Test | 76.1 | — | Unverified
8 | ELECTRA and ALBERT | Test | 71 | — | Unverified
9 | WWZ | Test | 69.7 | — | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | — | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | — | Unverified
3 | ALBERTxxlarge+DUMA (ensemble) | Accuracy | 89.8 | — | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | — | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | — | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | — | Unverified
7 | B10-10-10 | Accuracy | 85.7 | — | Unverified
8 | RoBERTa | Accuracy | 83.2 | — | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | — | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | — | Unverified
2 | MT5 Large | Average F1 | 0.84 | — | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | — | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | — | Unverified
5 | Human Benchmark | Average F1 | 0.81 | — | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | — | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | — | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | — | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | — | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | — | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | — | Unverified
3 | BiDAF | Overall: F1 | 28.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | — | Unverified
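Several of the leaderboards above report a span-level answer F1. As a reference point, the sketch below computes the common token-overlap F1 used in SQuAD-style evaluation; individual benchmarks normalize text differently (punctuation, articles, casing), so treat this as the shared core rather than any one leaderboard's exact scorer.

```python
# Token-overlap F1 between a predicted answer string and a gold answer
# string: precision and recall over the multiset of shared tokens.
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("the span prediction", "span prediction"))  # 0.8
```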