SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension (MRC) is a core problem in Natural Language Understanding: given a text passage, a system must read and comprehend it, then answer questions about its content.

Source: Making Neural Machine Reading Comprehension Faster
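To make the task setting concrete, here is a toy sketch of extractive MRC: given a passage and a question, select the span of the passage that answers it. This is only an illustration under simplifying assumptions; real MRC systems (such as the BERT-based models listed below) score candidate spans with neural encoders, whereas this baseline merely ranks passage sentences by lexical overlap with the question.

```python
# Toy illustration of the extractive MRC setting, NOT a real model:
# score each passage sentence by word overlap with the question and
# return the highest-scoring sentence as the "answer span".

def answer(passage: str, question: str) -> str:
    """Return the passage sentence sharing the most words with the question."""
    q_words = set(question.lower().rstrip("?").split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

passage = ("BERT was introduced by Google in 2018. "
           "It is pre-trained on large text corpora. "
           "Fine-tuning BERT achieves strong results on MRC benchmarks.")
print(answer(passage, "Who introduced BERT?"))
# → BERT was introduced by Google in 2018
```

Neural MRC models replace the overlap score with a learned start/end span scorer over contextual token representations, but the input/output contract (passage + question in, answer span out) is the same.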

Papers

Showing 1–50 of 555 papers

Title | Status | Hype
Pre-Training with Whole Word Masking for Chinese BERT | Code | 3
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | Code | 2
MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model | Code | 2
CLUE: A Chinese Language Understanding Evaluation Benchmark | Code | 2
Multi-Grained Query-Guided Set Prediction Network for Grounded Multimodal Named Entity Recognition | Code | 1
ChroniclingAmericaQA: A Large-scale Question Answering Dataset based on Historical American Newspaper Pages | Code | 1
ArabicaQA: A Comprehensive Dataset for Arabic Question Answering | Code | 1
Mirror: A Universal Framework for Various Information Extraction Tasks | Code | 1
MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension | Code | 1
IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning | Code | 1
Sentence-level Event Detection without Triggers via Prompt Learning and Machine Reading Comprehension | Code | 1
Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension | Code | 1
NorQuAD: Norwegian Question Answering Dataset | Code | 1
Context-faithful Prompting for Large Language Models | Code | 1
Orca: A Few-shot Benchmark for Chinese Conversational Machine Reading Comprehension | Code | 1
GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation | Code | 1
NEREL-BIO: A Dataset of Biomedical Abstracts Annotated with Nested Named Entities | Code | 1
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning | Code | 1
A Multi-turn Machine Reading Comprehension Framework with Rethink Mechanism for Emotion-Cause Pair Extraction | Code | 1
End-to-End Chinese Speaker Identification | Code | 1
A Robustly Optimized BMRC for Aspect Sentiment Triplet Extraction | Code | 1
FinBERT-MRC: Financial Named Entity Recognition Using BERT Under the Machine Reading Comprehension Paradigm | Code | 1
Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning | Code | 1
Learning Disentangled Semantic Representations for Zero-Shot Cross-Lingual Transfer in Multilingual Machine Reading Comprehension | Code | 1
AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension | Code | 1
Relational Surrogate Loss Learning | Code | 1
JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension | Code | 1
On the Robustness of Reading Comprehension Models to Entity Renaming | Code | 1
Tracing Origins: Coreference-aware Machine Reading Comprehension | Code | 1
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts | Code | 1
MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents | Code | 1
CodeQA: A Question Answering Dataset for Source Code Comprehension | Code | 1
Context-NER: Contextual Phrase Generation at Scale | Code | 1
An MRC Framework for Semantic Role Labeling | Code | 1
RoR: Read-over-Read for Long Document Machine Reading Comprehension | Code | 1
KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs | Code | 1
Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension | Code | 1
Interactive Machine Comprehension with Dynamic Knowledge Graphs | Code | 1
FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark | Code | 1
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information | Code | 1
Why Machine Reading Comprehension Models Learn Shortcuts? | Code | 1
SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning | Code | 1
Fact-driven Logical Reasoning for Machine Reading Comprehension | Code | 1
KLUE: Korean Language Understanding Evaluation | Code | 1
Dependency Parsing as MRC-based Span-Span Prediction | Code | 1
ExpMRC: Explainability Evaluation for Machine Reading Comprehension | Code | 1
ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning | Code | 1
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals | Code | 1
Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction | Code | 1
Cooperative Self-training of Machine Reading Comprehension | Code | 1
Page 1 of 12

No leaderboard results yet.