
Reading Comprehension

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document.

Specific variants of the task include multi-modal and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
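
As a concrete illustration of the span-prediction setting, the sketch below runs an extractive QA model through the Hugging Face transformers question-answering pipeline; the checkpoint name is only an example, and any SQuAD-style extractive model would work the same way.

```python
# Minimal sketch of span-prediction reading comprehension with the
# Hugging Face `transformers` QA pipeline. The model checkpoint is
# illustrative, not prescribed by this page.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "Machine reading comprehension systems answer questions about a given "
    "passage. In the span-prediction setting, the answer is a contiguous "
    "substring of the passage."
)

result = qa(
    question="What is the answer in the span-prediction setting?",
    context=context,
)

# The pipeline returns the predicted span text, its character offsets in
# the context, and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```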

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
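
To make the benchmarks concrete, here is a minimal sketch of loading one of them (RACE, a multiple-choice MRC dataset) with the Hugging Face datasets library; the dataset id, config names, and field names are assumptions based on the public hub copy and may differ across mirrors.

```python
# Hedged sketch: loading the RACE benchmark (multiple-choice reading
# comprehension) via the Hugging Face `datasets` library.
from datasets import load_dataset

# Assumed configs: "high", "middle", or "all" (combined).
race = load_dataset("race", "all")
example = race["train"][0]

# Each RACE example pairs a passage ("article") with a question and four
# answer options; the gold label is a letter A-D.
print(example["article"][:200])
print(example["question"], example["options"], example["answer"])
```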

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing 1–50 of 1760 papers

Title | Status | Hype
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy | Code | 1
Chaining Event Spans for Temporal Relation Grounding | Code | 0
S2ST-Omni: An Efficient and Scalable Multilingual Speech-to-Speech Translation Framework via Seamless Speech-Text Alignment and Streaming Speech Generation | – | 0
CoMuMDR: Code-mixed Multi-modal Multi-domain corpus for Discourse paRsing in conversations | Code | 0
Automatic Generation of Inference Making Questions for Reading Comprehension Assessments | Code | 0
SCOP: Evaluating the Comprehension Process of Large Language Models from a Cognitive View | – | 0
Prosodic Structure Beyond Lexical Content: A Study of Self-Supervised Learning | – | 0
Dynamic Chunking and Selection for Reading Comprehension of Ultra-Long Context in Large Language Models | Code | 0
What Has Been Lost with Synthetic Evaluation? | – | 0
ReadBench: Measuring the Dense Text Visual Reading Ability of Vision-Language Models | Code | 1
Enhancing Text-to-Image Diffusion Transformer via Split-Text Conditioning | – | 0
SELF: Self-Extend the Context Length With Logistic Growth Function | Code | 0
A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach | – | 0
Social Bias in Popular Question-Answering Benchmarks | – | 0
Interpretable Traces, Unexpected Outcomes: Investigating the Disconnect in Trace-Based Knowledge Distillation | – | 0
Learning Graph Representation of Agent Diffusers | Code | 0
Synthesize-on-Graph: Knowledgeable Synthetic Data Generation for Continue Pre-training of Large Language Models | – | 0
Using LLMs in Generating Design Rationale for Software Architecture Decisions | Code | 0
LLM-as-a-Judge: Reassessing the Performance of LLMs in Extractive QA | Code | 0
GOAT-TTS: Expressive and Realistic Speech Generation via A Dual-Branch LLM | – | 0
Understanding LLMs' Cross-Lingual Context Retrieval: How Good It Is And Where It Comes From | Code | 0
Efficient Tuning of Large Language Models for Knowledge-Grounded Dialogue Generation | Code | 0
Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering | Code | 0
FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and Unanswerable Questions for Enhanced Long-Context LLM Extraction | Code | 0
Locations of Characters in Narratives: Andersen and Persuasion Datasets | Code | 0
Comment Staytime Prediction with LLM-enhanced Comment Understanding | Code | 0
Do Chinese models speak Chinese languages? | – | 0
Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users | – | 0
Investigating Recent Large Language Models for Vietnamese Machine Reading Comprehension | – | 0
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0
MRCEval: A Comprehensive, Challenging and Accessible Machine Reading Comprehension Benchmark | Code | 0
Zero-Shot Complex Question-Answering on Long Scientific Documents | Code | 0
HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs | – | 0
Causal Tree Extraction from Medical Case Reports: A Novel Task for Experts-like Text Comprehension | – | 0
Exploring the Potential of Large Language Models for Estimating the Reading Comprehension Question Difficulty | – | 0
Pay Attention to Real World Perturbations! Natural Robustness Evaluation in Machine Reading Comprehension | – | 0
Unveiling Cultural Blind Spots: Analyzing the Limitations of mLLMs in Procedural Text Comprehension | – | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
Eye Tracking Based Cognitive Evaluation of Automatic Readability Assessment Measures | – | 0
Unknown Word Detection for English as a Second Language (ESL) Learners Using Gaze and Pre-trained Language Models | – | 0
Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models | – | 0
The Use of Artificial Intelligence Tools in Assessing Content Validity: A Comparative Study with Human Experts | – | 0
General Embedding vs. Task-Specific Embedding: A Comparative Approach to Enhancing NLP Performance | – | 0
A linguistically-motivated evaluation methodology for unraveling model's abilities in reading comprehension tasks | – | 0
Automatic Feedback Generation for Short Answer Questions using Answer Diagnostic Graphs | – | 0
LongReason: A Synthetic Long-Context Reasoning Benchmark via Context Expansion | – | 0
Few-shot Policy (de)composition in Conversational Question Answering | – | 0
SimLabel: Consistency-Guided OOD Detection with Pretrained Vision-Language Models | Code | 0
A Coordination-based Approach for Focused Learning in Knowledge-Based Systems | – | 0
Unlocking the Potential of Multiple BERT Models for Bangla Question Answering in NCTB Textbooks | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified