SOTAVerified

Reading Comprehension

Most current question-answering datasets frame the task as reading comprehension: the question is asked about a paragraph or document, and the answer is often a span of text within that document.
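
As a minimal illustration of this span-extraction framing, the sketch below runs an off-the-shelf extractive QA model through the Hugging Face `transformers` pipeline (assuming that library is installed; the default checkpoint the pipeline downloads is not specified by this page):

```python
# Minimal span-extraction QA sketch using the Hugging Face pipeline API.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

context = (
    "Machine reading comprehension asks a model to answer questions about a "
    "given passage. In span-prediction datasets such as SQuAD, the answer is "
    "a contiguous span of the passage."
)
result = qa(question="What is the answer in span-prediction datasets?",
            context=context)

# The result holds the predicted span text plus its character offsets.
print(result["answer"], result["start"], result["end"], result["score"])
```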

Specific variants of the task include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
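
The four categories differ mainly in how the answer is expressed. A toy sketch of one instance per category, with illustrative field names that do not follow any particular dataset's schema:

```python
# Hypothetical toy examples of the four MRC categories; field names are
# illustrative only, not taken from any specific dataset.
cloze = {
    "passage": "The capital of France is @placeholder.",
    "answer": "Paris",                      # fill in the masked token/entity
}
multiple_choice = {
    "passage": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "options": ["London", "Paris", "Berlin", "Madrid"],
    "label": 1,                             # index of the correct option
}
span_prediction = {
    "passage": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answer_span": (0, 5),                  # character offsets into the passage
}
free_form = {
    "passage": "Paris is the capital of France.",
    "question": "Describe the relationship between Paris and France.",
    "answer": "Paris is France's capital city.",  # generated, not extracted
}
```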

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
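
Many of these benchmarks are mirrored on the Hugging Face hub. A sketch of loading RACE with the `datasets` library; the `race`/`high` config names and field names are assumptions based on the public hub copy, not part of this page:

```python
# Loading the RACE benchmark (multiple-choice reading comprehension).
from datasets import load_dataset

race = load_dataset("race", "high", split="validation")

example = race[0]
print(example["article"][:200])   # the passage to read
print(example["question"])        # a multiple-choice question
print(example["options"])         # four candidate answers
print(example["answer"])          # gold label, "A" through "D"
```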

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1251–1300 of 1760 papers (page 26 of 36)

| Title | Status | Hype |
| --- | --- | --- |
| Medical device surveillance with electronic health records | Code | 0 |
| BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis | Code | 1 |
| Structural Scaffolds for Citation Intent Classification in Scientific Publications | Code | 0 |
| Unsupervised Abbreviation Disambiguation: Contextual disambiguation using word embeddings | – | 0 |
| Making Neural Machine Reading Comprehension Faster | – | 0 |
| Sogou Machine Reading Comprehension Toolkit | Code | 0 |
| Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs | Code | 0 |
| Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data | Code | 0 |
| Option Comparison Network for Multiple-choice Reading Comprehension | – | 0 |
| DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension | – | 0 |
| DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs | Code | 0 |
| FastFusionNet: New State-of-the-Art for DAWNBench SQuAD | Code | 0 |
| Leveraging Knowledge Bases in LSTMs for Improving Machine Reading | – | 0 |
| Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds | – | 0 |
| Evidence Sentence Extraction for Machine Reading Comprehension | Code | 0 |
| Language Models are Unsupervised Multitask Learners | Code | 1 |
| SECTOR: A Neural Model for Coherent Topic Segmentation and Classification | Code | 0 |
| Machine Reading Comprehension for Answer Re-Ranking in Customer Support Chatbots | – | 0 |
| End-to-End Open-Domain Question Answering with BERTserini | Code | 0 |
| Review Conversational Reading Comprehension | Code | 0 |
| DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension | Code | 0 |
| Dual Co-Matching Network for Multi-choice Reading Comprehension | – | 0 |
| HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering | – | 0 |
| Multi-Perspective Fusion Network for Commonsense Reading Comprehension | – | 0 |
| Multi-style Generative Reading Comprehension | – | 0 |
| Delta Embedding Learning | – | 0 |
| SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering | Code | 0 |
| Are you tough enough? Framework for Robustness Validation of Machine Comprehension Systems | Code | 0 |
| Weighted Global Normalization for Multiple Choice Reading Comprehension over Long Documents | – | 0 |
| Towards an Automatic Text Comprehension for the Arabic Question-Answering: Semantic and Logical Representation of Texts | – | 0 |
| An OOV Word Embedding Framework for Chinese Machine Reading Comprehension (未登錄詞之向量表示法模型於中文機器閱讀理解之應用) | – | 0 |
| Visual Question Answering as Reading Comprehension | – | 0 |
| Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering | Code | 0 |
| A Deep Cascade Model for Multi-Document Reading Comprehension | – | 0 |
| Recurrently Controlled Recurrent Networks | Code | 0 |
| Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions | – | 0 |
| Densely Connected Attention Propagation for Reading Comprehension | Code | 1 |
| Implicit Argument Prediction as Reading Comprehension | Code | 0 |
| Effective Subword Segmentation for Text Comprehension | Code | 0 |
| Exploiting Explicit Paths for Multi-hop Reading Comprehension | Code | 0 |
| Textual Entailment based Question Generation | – | 0 |
| Work Smart - Reducing Effort in Short-Answer Grading | – | 0 |
| Normalization in Context: Inter-Annotator Agreement for Meaning-Based Target Hypothesis Annotation | – | 0 |
| Automatic Opinion Question Generation | Code | 0 |
| An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans. | – | 0 |
| UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF) | – | 0 |
| Team SWEEPer: Joint Sentence Extraction and Fact Checking with Pointer Networks | – | 0 |
| Visual Interrogation of Attention-Based Models for Natural Language Inference and Machine Comprehension | – | 0 |
| Learning to Describe Phrases with Local and Global Contexts | Code | 0 |
| Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension | – | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified |
| 2 | AMR-LE-Ensemble | Test | 80 | – | Unverified |
| 3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified |
| 4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified |
| 5 | Knowledge model | Test | 79.2 | – | Unverified |
| 6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified |
| 7 | LReasoner ensemble | Test | 76.1 | – | Unverified |
| 8 | ELECTRA and ALBERT | Test | 71 | – | Unverified |
| 9 | WWZ | Test | 69.7 | – | Unverified |
| 10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified |
| 2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified |
| 4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified |
| 5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified |
| 6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified |
| 7 | B10-10-10 | Accuracy | 85.7 | – | Unverified |
| 8 | RoBERTa | Accuracy | 83.2 | – | Unverified |
| 9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified |
| 10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Golden Transformer | Average F1 | 0.94 | – | Unverified |
| 2 | MT5 Large | Average F1 | 0.84 | – | Unverified |
| 3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified |
| 4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified |
| 5 | Human Benchmark | Average F1 | 0.81 | – | Unverified |
| 6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified |
| 7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified |
| 8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified |
| 9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified |
| 10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified |
| 2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified |
| 3 | BiDAF | Overall: F1 | 28.5 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT | MSE | 0.05 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified |
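
Several of the leaderboards above report span-level F1 scores. The sketch below shows the standard SQuAD-style token-overlap F1; note that each dataset applies its own answer-normalization rules (lowercasing, article and punctuation stripping) before this computation, which this sketch only approximates:

```python
# SQuAD-style token-overlap F1 between a predicted and a gold answer string.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts how many tokens the two answers share.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital Paris", "Paris"))  # 0.5
```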