
Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document.

Specific variants of the task include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is divided into four categories: cloze style, multiple choice, span prediction, and free-form answer; a span-prediction example is sketched below.
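For the span-prediction category, here is a minimal sketch using the Hugging Face `transformers` question-answering pipeline; the checkpoint name is illustrative, and any extractive QA model would work.

```python
from transformers import pipeline

# Build an extractive QA pipeline; the model name is illustrative.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("Machine reading comprehension asks a model to answer "
           "questions about a passage, often by extracting a span "
           "from the passage itself.")

result = qa(question="What does the model extract from the passage?",
            context=context)

# The pipeline returns the answer span plus its character offsets and score.
print(result["answer"], result["start"], result["end"], result["score"])
```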

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
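As a concrete illustration, one of these benchmarks can be loaded with the Hugging Face `datasets` library; this sketch assumes the fields and configs documented on the public `race` dataset card (`middle`, `high`, `all`).

```python
from datasets import load_dataset

# Load the RACE validation split; the "all" config combines the
# middle- and high-school subsets.
race = load_dataset("race", "all", split="validation")

sample = race[0]
print(sample["article"][:200])  # the passage
print(sample["question"])       # the question stem
print(sample["options"])        # the four candidate answers
print(sample["answer"])         # gold label: "A", "B", "C", or "D"
```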

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 951–1000 of 1760 papers

Title | Status | Hype
Set Expansion using Sibling Relations between Semantic Categories | – | 0
SF-DST: Few-Shot Self-Feeding Reading Comprehension Dialogue State Tracking with Auxiliary Task | – | 0
SG-Net: Syntax Guided Transformer for Language Representation | – | 0
Sharing, Teaching and Aligning: Knowledgeable Transfer Learning for Cross-Lingual Machine Reading Comprehension | – | 0
Short Answer Assessment: Establishing Links Between Research Strands | – | 0
Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives | – | 0
Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question Answering | – | 0
Simplifying metaphorical language for young readers: A corpus study on news text | – | 0
Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions | – | 0
Six Good Predictors of Autistic Text Comprehension | – | 0
SKETCH: Structured Knowledge Enhanced Text Comprehension for Holistic Retrieval | – | 0
SkillQG: Learning to Generate Question for Reading Comprehension Assessment | – | 0
Slot Filling for Biomedical Information Extraction | – | 0
Smarnet: Teaching Machines to Read and Comprehend Like Human | – | 0
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension | – | 0
Social Bias in Popular Question-Answering Benchmarks | – | 0
SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks | – | 0
Somm: Into the Model | – | 0
SpatialNLI: A Spatial Domain Natural Language Interface to Databases Using Spatial Comprehension | – | 0
Specifying and Annotating Reduced Argument Span Via QA-SRL | – | 0
sPhinX: Sample Efficient Multilingual Instruction Fine-Tuning Through N-shot Guided Prompting | – | 0
Splitting Complex English Sentences | – | 0
SQuAD2-CR: Semi-supervised Annotation for Cause and Rationales for Unanswerability in SQuAD 2.0 | – | 0
Squibs: What Is a Paraphrase? | – | 0
SRDF: Extracting Lexical Knowledge Graph for Preserving Sentence Meaning | – | 0
Stanford MLab at SemEval-2021 Task 1: Tree-Based Modelling of Lexical Complexity using Word Embeddings | – | 0
Stars at Qur’an QA 2022: Building Automatic Extractive Question Answering Systems for the Holy Qur’an with Transformer Models and Releasing a New Dataset | – | 0
Step out of KG: Knowledge Graph Completion via Knowledgeable Retrieval and Reading Comprehension | – | 0
Story Comprehension for Predicting What Happens Next | – | 0
Structsum Generation for Faster Text Comprehension | – | 0
Structural Characterization for Dialogue Disentanglement | – | 0
Structural Embedding of Syntactic Trees for Machine Comprehension | – | 0
Structured Prediction for Joint Class Cardinality and Entity Property Inference in Model-Complete Text Comprehension | – | 0
Structured Pruning of Recurrent Neural Networks through Neuron Selection | – | 0
Struct-X: Enhancing Large Language Models Reasoning with Structured Data | – | 0
Eye Tracking Based Cognitive Evaluation of Automatic Readability Assessment Measures | – | 0
Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph | – | 0
Syntactic and Lexical Approaches to Reading Comprehension | – | 0
Syntactic Cross and Reading Effort in English to Japanese Translation | – | 0
Synthesize-on-Graph: Knowledgeable Synthetic Data Generation for Continue Pre-training of Large Language Models | – | 0
Systematic Error Analysis of the Stanford Question Answering Dataset | – | 0
Tackling Adversarial Examples in QA via Answer Sentence Selection | – | 0
TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension | – | 0
TangoBERT: Reducing Inference Cost by using Cascaded Architecture | – | 0
Teach model to answer questions after comprehending the document | – | 0
Team Solomon at SemEval-2020 Task 4: Be Reasonable: Exploiting Large-scale Language Models for Commonsense Reasoning | – | 0
TeamUFAL: WSD+EL as Document Retrieval | – | 0
Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension | – | 0
TextCaps: a Dataset for Image Captioning with Reading Comprehension | – | 0
Text Modification for Bulgarian Sign Language Users | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified
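For extractive benchmarks, F1 scores like those above are typically token-level overlap between the predicted and gold answers, in the style of the official SQuAD evaluation script. The sketch below is a reimplementation under that assumption, not the verbatim official code.

```python
import re
from collections import Counter

def normalize(text: str) -> list[str]:
    # Lowercase, drop articles and punctuation, split on whitespace,
    # mirroring SQuAD-style answer normalization.
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    return text.split()

def token_f1(prediction: str, gold: str) -> float:
    # Harmonic mean of token precision and recall over bags of tokens.
    pred, ref = normalize(prediction), normalize(gold)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the Norman conquest", "Norman conquest of England"))  # ≈ 0.667
```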