SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension is one of the key problems in Natural Language Understanding: a system must read and comprehend a given text passage, and then answer questions based on it.

Source: Making Neural Machine Reading Comprehension Faster
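In its common extractive form, the task is to select an answer span directly from the passage. As a minimal, self-contained illustration of that setup, here is a toy lexical-overlap baseline (a hypothetical sketch for exposition only, not a method from any paper listed below):

```python
def answer(passage: str, question: str, window: int = 5) -> str:
    """Toy extractive reading-comprehension baseline: return the
    fixed-size passage window whose tokens overlap most with the
    question's words (a lexical stand-in for a learned span selector)."""
    tokens = passage.split()
    # Normalize question words for matching (strip punctuation, lowercase).
    q_words = {w.strip(".,?").lower() for w in question.split()}
    best_span, best_score = "", -1
    # Slide a fixed-size window over the passage and score each candidate span.
    for i in range(max(1, len(tokens) - window + 1)):
        span = tokens[i:i + window]
        score = sum(t.strip(".,?").lower() in q_words for t in span)
        if score > best_score:
            best_score, best_span = score, " ".join(span)
    return best_span
```

Real MRC models replace this overlap score with learned start/end span probabilities computed over contextual token representations.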

Papers

Showing 51–100 of 555 papers

Title | Status | Hype
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | Code | 2
Demonstration-based learning for few-shot biomedical named entity recognition under machine reading comprehension | Code | 0
Single-Sentence Reader: A Novel Approach for Addressing Answer Position Bias | Code | 0
Integrating a Heterogeneous Graph with Entity-aware Self-attention using Relative Position Labels for Reading Comprehension Model | - | 0
ZeQR: Zero-shot Query Reformulation for Conversational Search | Code | 0
Teach model to answer questions after comprehending the document | - | 0
IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning | Code | 1
SciMRC: Multi-perspective Scientific Machine Reading Comprehension | - | 0
Sentence-level Event Detection without Triggers via Prompt Learning and Machine Reading Comprehension | Code | 1
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking | Code | 0
Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension | Code | 1
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models | Code | 0
Improving Opinion-based Question Answering Systems Through Label Error Detection and Overwrite | - | 0
Knowing-how & Knowing-that: A New Task for Machine Comprehension of User Manuals | Code | 0
How Many Answers Should I Give? An Empirical Study of Multi-Answer Reading Comprehension | Code | 0
A Causal View of Entity Bias in (Large) Language Models | Code | 0
Machine Reading Comprehension using Case-based Reasoning | - | 0
mPMR: A Multilingual Pre-trained Machine Reader at Scale | Code | 0
EMBRACE: Evaluation and Modifications for Boosting RACE | Code | 0
SkillQG: Learning to Generate Question for Reading Comprehension Assessment | - | 0
NER-to-MRC: Named-Entity Recognition Completely Solving as Machine Reading Comprehension | - | 0
Adaptive loose optimization for robust question answering | Code | 0
Multi-View Graph Representation Learning for Answering Hybrid Numerical Reasoning Question | Code | 0
NorQuAD: Norwegian Question Answering Dataset | Code | 1
Information Extraction from Documents: Question Answering vs Token Classification in real-world setups | - | 0
Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming | - | 0
MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model | Code | 2
A Data-centric Framework for Improving Domain-specific Machine Reading Comprehension Datasets | - | 0
A Multiple Choices Reading Comprehension Corpus for Vietnamese Language Education | Code | 0
Context-faithful Prompting for Large Language Models | Code | 1
Revealing Weaknesses of Vietnamese Language Models Through Unanswerable Questions in Machine Reading Comprehension | - | 0
Clinical Concept and Relation Extraction Using Prompt-based Machine Reading Comprehension | - | 0
LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension | - | 0
Orca: A Few-shot Benchmark for Chinese Conversational Machine Reading Comprehension | Code | 1
Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension | Code | 0
Natural Response Generation for Chinese Reading Comprehension | Code | 0
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models | - | 0
KILDST: Effective Knowledge-Integrated Learning for Dialogue State Tracking using Gazetteer and Speaker Information | - | 0
Integrating Semantic Information into Sketchy Reading Module of Retro-Reader for Vietnamese Machine Reading Comprehension | - | 0
Medical Knowledge Graph QA for Drug-Drug Interaction Prediction based on Multi-hop Machine Reading Comprehension | - | 0
Rethinking Label Smoothing on Multi-hop Question Answering | Code | 0
Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension | - | 0
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader | Code | 0
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches | - | 0
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Datasets and Metrics | - | 0
GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation | Code | 1
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks | - | 0
IDK-MRC: Unanswerable Questions for Indonesian Machine Reading Comprehension | Code | 0
NEREL-BIO: A Dataset of Biomedical Abstracts Annotated with Nested Named Entities | Code | 1
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning | Code | 1
Page 2 of 12

No leaderboard results yet.