SOTAVerified

Reading Comprehension

Most current question-answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document.
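
As a minimal sketch of span-style question answering, the snippet below uses the Hugging Face transformers question-answering pipeline (assuming the transformers library is installed; the default extractive model is downloaded on first use). It is illustrative only, not an endorsement of a particular model.

```python
# Minimal span-extraction QA sketch using the Hugging Face
# `transformers` question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering")  # loads a default extractive QA model

context = (
    "Machine reading comprehension asks a model to answer a question "
    "about a given passage. In span-prediction tasks, the answer is a "
    "contiguous substring of the passage."
)
question = "What is the answer in span-prediction tasks?"

result = qa(question=question, context=context)
# `result` holds the predicted span text plus its character offsets
print(result["answer"], result["start"], result["end"], result["score"])
```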

Specific reading comprehension tasks include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension is commonly divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
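
To make the four categories concrete, here is an illustrative sketch of one example in each format, written as plain Python data. The field names are assumptions for exposition, not the schema of any particular dataset.

```python
# Hypothetical examples of the four MRC formats.
# Field names are illustrative, not a real dataset schema.

cloze = {
    "passage": "The capital of France is Paris.",
    "query": "The capital of France is ___.",
    "answer": "Paris",            # fill in the blank
}

multiple_choice = {
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "options": ["London", "Paris", "Berlin", "Madrid"],
    "answer": 1,                  # index of the correct option
}

span_prediction = {
    "passage": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "answer_span": (25, 30),      # character offsets of "Paris" in the passage
}

free_form = {
    "passage": "The capital of France is Paris.",
    "question": "Which European capital is described?",
    "answer": "Paris, the capital of France.",  # free text, not a span
}
```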

Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
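
As a hedged sketch, two of these benchmarks can typically be loaded through the Hugging Face datasets library. The dataset identifiers below are the ones published on the Hub and may change across library versions.

```python
# Sketch: loading two of the benchmarks mentioned above with the
# Hugging Face `datasets` library (assumes `datasets` is installed;
# identifiers may change between library versions).
from datasets import load_dataset

race = load_dataset("race", "all")             # RACE: multiple-choice MRC
record = load_dataset("super_glue", "record")  # ReCoRD: cloze-style MRC

example = race["train"][0]
print(example["article"][:100], example["question"], example["options"])
```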

The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets

Papers

Showing papers 676–700 of 1,760 (page 28 of 71)

Each paper below has a Hype score of 0 and an empty Status field.

Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training
Ensemble Learning-Based Approach for Improving Generalization Capability of Machine Reading Comprehension Systems
Building A User-Centric and Content-Driven Socialbot
Ensemble approach for natural language question answering problem
Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction
Enhancing Text-to-Image Diffusion Transformer via Split-Text Conditioning
CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension
CLCM - A Linguistic Resource for Effective Simplification of Instructions in the Crisis Management Domain and its Evaluations
FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks
Feature-Rich Two-Stage Logistic Regression for Monolingual Alignment
BUAP: Evaluating Features for Multilingual and Cross-Level Semantic Textual Similarity
Applications of BERT Based Sequence Tagging Models on Chinese Medical Text Attributes Extraction
Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks
Enhancing Robustness of Retrieval-Augmented Language Models with In-Context Learning
Broad Context Language Modeling as Reading Comprehension
Few-shot Mining of Naturally Occurring Inputs and Outputs
Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task
Filling a Knowledge Graph with a Crowd
Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension
Enhancing Multiple-choice Machine Reading Comprehension by Punishing Illogical Interpretations
Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension
Clinical Reading Comprehension with Encoder-Decoder Models Enhanced by Direct Preference Optimization
Clozer: Adaptable Data Augmentation for Cloze-style Reading Comprehension
Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Rational Reasoner / IDOL | Test | 80.6 | – | Unverified
2 | AMR-LE-Ensemble | Test | 80 | – | Unverified
3 | MERIt (MERIt-deberta-v2-xxlarge) | Test | 79.3 | – | Unverified
4 | MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 | Test | 79.3 | – | Unverified
5 | Knowledge model | Test | 79.2 | – | Unverified
6 | DeBERTa-v2-xxlarge-AMR-LE-Contraposition | Test | 77.2 | – | Unverified
7 | LReasoner ensemble | Test | 76.1 | – | Unverified
8 | ELECTRA and ALBERT | Test | 71 | – | Unverified
9 | WWZ | Test | 69.7 | – | Unverified
10 | xlnet-large-uncased [extended data] | Test | 69.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ALBERT (Ensemble) | Accuracy | 91.4 | – | Unverified
2 | Megatron-BERT (ensemble) | Accuracy | 90.9 | – | Unverified
3 | ALBERT-xxlarge + DUMA (ensemble) | Accuracy | 89.8 | – | Unverified
4 | Megatron-BERT | Accuracy | 89.5 | – | Unverified
5 | XLNet | Accuracy (Middle) | 88.6 | – | Unverified
6 | DeBERTa-large | Accuracy | 86.8 | – | Unverified
7 | B10-10-10 | Accuracy | 85.7 | – | Unverified
8 | RoBERTa | Accuracy | 83.2 | – | Unverified
9 | Orca 2-13B | Accuracy | 82.87 | – | Unverified
10 | Orca 2-7B | Accuracy | 80.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Golden Transformer | Average F1 | 0.94 | – | Unverified
2 | MT5 Large | Average F1 | 0.84 | – | Unverified
3 | ruRoberta-large finetune | Average F1 | 0.83 | – | Unverified
4 | ruT5-large-finetune | Average F1 | 0.82 | – | Unverified
5 | Human Benchmark | Average F1 | 0.81 | – | Unverified
6 | ruT5-base-finetune | Average F1 | 0.77 | – | Unverified
7 | ruBert-large finetune | Average F1 | 0.76 | – | Unverified
8 | ruBert-base finetune | Average F1 | 0.74 | – | Unverified
9 | RuGPT3XL few-shot | Average F1 | 0.74 | – | Unverified
10 | RuGPT3Large | Average F1 | 0.73 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-Large | Overall: F1 | 64.4 | – | Unverified
2 | BERT-Large | Overall: F1 | 62.7 | – | Unverified
3 | BiDAF | Overall: F1 | 28.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT | MSE | 0.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT pretrained on MIMIC-III | Answer F1 | 63.55 | – | Unverified
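
Several of the leaderboards above report answer-level F1. As a reference point, below is a simplified sketch of the token-level F1 commonly used for extractive QA evaluation; real evaluation scripts (e.g., the SQuAD script) additionally lowercase the strings and strip punctuation and articles before comparing.

```python
# Simplified SQuAD-style token-level F1 between a predicted and a gold
# answer string (a sketch; production scripts also normalize the text).
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # multiset intersection counts shared tokens, respecting duplicates
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris France", "Paris"))  # 0.666...
```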