SOTAVerified

Reading Comprehension

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document.

Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
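
As an illustration of the span-prediction category, here is a minimal sketch of extractive question answering, assuming the Hugging Face transformers library and a public SQuAD-finetuned checkpoint (the model name is an illustrative choice, not one of the systems listed on this page):

```python
# Minimal span-prediction sketch, assuming the Hugging Face `transformers`
# library and a SQuAD-finetuned checkpoint (illustrative, not from this page).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "RACE is a large-scale reading comprehension dataset collected from "
    "English examinations for Chinese middle and high school students."
)
result = qa(question="Who were the RACE examinations designed for?", context=context)

# The model selects a span from the context; the result holds the answer text,
# its character offsets in the context, and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```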

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
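
Several of these benchmarks can also be pulled programmatically. As a sketch, assuming the Hugging Face datasets library, the RACE test split can be loaded as follows (the "race"/"high" dataset id and field names refer to the public Hub copy, not to anything defined on this page):

```python
# Sketch of loading the RACE benchmark, assuming the Hugging Face `datasets`
# library; "race" with the "high" config is the high-school split.
from datasets import load_dataset

race = load_dataset("race", "high", split="test")

example = race[0]
# Each example pairs a passage ("article") with one multiple-choice question.
print(example["article"][:200])
print(example["question"], example["options"], example["answer"])
```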

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1701–1750 of 1760 papers

Title | Status | Hype
Understanding Model Robustness to User-generated Noisy Texts | Code | 0
Long Short-Term Memory-Networks for Machine Reading | Code | 0
QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization | Code | 0
ChID: A Large-scale Chinese IDiom Dataset for Cloze Test | Code | 0
Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension | Code | 0
VlogQA: Task, Dataset, and Baseline Models for Vietnamese Spoken-Based Machine Reading Comprehension | Code | 0
Look, Read and Enrich. Learning from Scientific Figures and their Captions | Code | 0
SG-Net: Syntax-Guided Machine Reading Comprehension | Code | 0
A Question-Focused Multi-Factor Attention Network for Question Answering | Code | 0
Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0
LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting | Code | 0
Adversarial Examples for Evaluating Reading Comprehension Systems | Code | 0
A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering | Code | 0
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning | Code | 0
SimLabel: Consistency-Guided OOD Detection with Pretrained Vision-Language Models | Code | 0
Machine Comprehension by Text-to-Text Neural Question Generation | Code | 0
U-Net: Machine Reading Comprehension with Unanswerable Questions | Code | 0
Machine Comprehension Using Match-LSTM and Answer Pointer | Code | 0
Question Answering by Reasoning Across Documents with Graph Convolutional Networks | Code | 0
Automatic Task Requirements Writing Evaluation via Machine Reading Comprehension | Code | 0
A quantitative study of NLP approaches to question difficulty estimation | Code | 0
UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification | Code | 0
Question Dependent Recurrent Entity Network for Question Answering | Code | 0
Automatic Opinion Question Generation | Code | 0
Question Directed Graph Attention Network for Numerical Reasoning over Text | Code | 0
Single-Sentence Reader: A Novel Approach for Addressing Answer Position Bias | Code | 0
DRCD: a Chinese Machine Reading Comprehension Dataset | Code | 0
Universal Adversarial Triggers for Attacking and Analyzing NLP | Code | 0
Question Generation by Transformers | Code | 0
Situation and Behavior Understanding by Trope Detection on Films | Code | 0
Top K Relevant Passage Retrieval for Biomedical Question Answering | Code | 0
Adaptive loose optimization for robust question answering | Code | 0
Machine Reading of Historical Events | Code | 0
WangchanLion and WangchanX MRC Eval | Code | 0
Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality | Code | 0
Slot Filling for Biomedical Information Extraction | Code | 0
Quinductor: a multilingual data-driven method for generating reading-comprehension questions using Universal Dependencies | Code | 0
Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation | Code | 0
Cheap and Good? Simple and Effective Data Augmentation for Low Resource Machine Reading | Code | 0
Do We Really Need All Those Rich Linguistic Features? A Neural Network-Based Approach to Implicit Sense Labeling | Code | 0
MalAlgoQA: Pedagogical Evaluation of Counterfactual Reasoning in Large Language Models and Implications for AI in Education | Code | 0
A Puzzle-Based Dataset for Natural Language Inference | Code | 0
Question Answering as an Automatic Evaluation Metric for News Article Summarization | Code | 0
R^3: Reinforced Reader-Ranker for Open-Domain Question Answering | Code | 0
Do Text Simplification Systems Preserve Meaning? A Human Evaluation via Reading Comprehension | Code | 0
RACE: Large-scale ReAding Comprehension Dataset From Examinations | Code | 0
Smoothing Dialogue States for Open Conversational Machine Reading | Code | 0
1Cademy @ Causal News Corpus 2022: Enhance Causal Span Detection via Beam-Search-based Position Selector | Code | 0
Ranking Paragraphs for Improving Answer Recall in Open-Domain Question Answering | Code | 0
RankQA: Neural Question Answering with Answer Re-Ranking | Code | 0
Page 35 of 36

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | - | Unverified
2 | AMR-LE-Ensemble | Test | 80 | - | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | - | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | - | Unverified
5 | Knowledge model | Test | 79.2 | - | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | - | Unverified
7 | LReasoner ensemble | Test | 76.1 | - | Unverified
8 | ELECTRA and ALBERT | Test | 71 | - | Unverified
9 | WWZ | Test | 69.7 | - | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | - | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | - | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | - | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | - | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | - | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | - | Unverified
7 | B10-10-10 | Accuracy | 85.7 | - | Unverified
8 | RoBERTa | Accuracy | 83.2 | - | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | - | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | - | Unverified
2 | MT5 Large | Average F1 | 0.84 | - | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | - | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | - | Unverified
5 | Human Benchmark | Average F1 | 0.81 | - | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | - | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | - | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | - | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | - | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | - | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | - | Unverified
3 | BiDAF | Overall: F1 | 28.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | - | Unverified