SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.

Source: Making Neural Machine Reading Comprehension Faster
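To make the task definition concrete (this is only an illustration, not any listed paper's method), a minimal sketch of a toy extractive baseline: it answers a question by returning the passage sentence with the highest word overlap with the question. All names here are hypothetical.

```python
import re

def answer(passage: str, question: str) -> str:
    """Toy extractive MRC baseline: return the passage sentence
    that shares the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    # Split the passage into sentences on end-of-sentence punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    # Pick the sentence with the largest word overlap with the question.
    return max(
        sentences,
        key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))),
    )

passage = ("Machine Reading Comprehension systems read a text passage. "
           "They then answer questions about that passage.")
print(answer(passage, "What questions do they answer?"))
```

Real MRC models replace the word-overlap heuristic with learned representations (e.g. BERT-based span prediction), but the input/output contract — passage plus question in, answer text out — is the same.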

Papers

Showing 301–325 of 555 papers

Title | Status | Hype
Machine Reading Comprehension with Enhanced Linguistic Verifiers | | 0
ChemistryQA: A Complex Question Answering Dataset from Chemistry | | 0
Coreference Reasoning in Machine Reading Comprehension | Code | 0
ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning | Code | 1
SG-Net: Syntax Guided Transformer for Language Representation | | 0
Adaptive Bi-directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension | | 0
From Bag of Sentences to Document: Distantly Supervised Relation Extraction via Machine Reading Comprehension | Code | 0
Semantics Altering Modifications for Evaluating Comprehension in Machine Reading | Code | 0
KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning | | 0
Reference Knowledgeable Network for Machine Reading Comprehension | Code | 0
End-to-End QA on COVID-19: Domain Adaptation with Synthetic Training | | 0
Seeing the World through Text: Evaluating Image Descriptions for Commonsense Reasoning in Machine Reading Comprehension | | 0
Read and Reason with MuSeRC and RuCoS: Datasets for Machine Reading Comprehension for Russian | | 0
A Multilingual Reading Comprehension System for more than 100 Languages | | 0
Multi-choice Relational Reasoning for Machine Reading Comprehension | | 0
Incorporating Syntax and Frame Semantics in Neural Network for Machine Reading Comprehension | | 0
ForceReader: a BERT-based Interactive Machine Reading Comprehension Model with Attention Separation | | 0
Graph-Based Knowledge Integration for Question Answering over Dialogue | | 0
SQL Generation via Machine Reading Comprehension | Code | 0
Bi-directional Cognitive Thinking Network for Machine Reading Comprehension | | 0
Robust Machine Reading Comprehension by Learning Soft labels | | 0
A Vietnamese Dataset for Evaluating Machine Reading Comprehension | | 0
Learn with Noisy Data via Unsupervised Loss Correction for Weakly Supervised Reading Comprehension | | 0
MRC Examples Answerable by BERT without a Question Are Less Effective in MRC Model Training | | 0
FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection | | 0
Page 13 of 23

No leaderboard results yet.