SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension is one of the key problems in Natural Language Understanding: given a text passage, a system must read and comprehend it, then answer questions about its content.
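To make the task's input/output format concrete, here is a toy extractive baseline (not the method of any paper listed below): given a passage and a question, score each sentence by word overlap with the question and return the best-matching sentence as the "answer". Modern systems, such as the BERT-based readers in this list, instead predict an answer span with a neural model; this sketch only illustrates the task shape.

```python
# Toy extractive "reader" for illustration only: picks the passage
# sentence with the highest word overlap with the question.
# Real MRC systems use neural span prediction, not overlap counting.
import re

def answer(passage: str, question: str) -> str:
    q_tokens = set(re.findall(r"\w+", question.lower()))
    # Split the passage into sentences at terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    # Score each sentence by how many question words it contains.
    return max(sentences,
               key=lambda s: len(q_tokens & set(re.findall(r"\w+", s.lower()))))

passage = ("BERT was introduced by Google in 2018. "
           "It is pre-trained on large text corpora. "
           "Fine-tuning BERT achieves strong results on reading comprehension benchmarks.")
print(answer(passage, "Who introduced BERT?"))
# → BERT was introduced by Google in 2018.
```

Even this trivial baseline makes clear why the benchmarks below exist: word overlap fails as soon as the answer sentence paraphrases the question, which is exactly what robustness- and reasoning-focused datasets test.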

Source: Making Neural Machine Reading Comprehension Faster

Papers

Showing 401–450 of 555 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Sentence Cloze Dataset for Chinese Machine Reading Comprehension | Code | 1 |
| Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation | Code | 0 |
| Benchmarking Machine Reading Comprehension: A Psychological Perspective | | 0 |
| Graph Sequential Network for Reasoning over Sequences | | 0 |
| TREC CAsT 2019: The Conversational Assistance Track Overview | Code | 1 |
| A Framework for Evaluation of Machine Reading Comprehension Gold Standards | Code | 0 |
| GenNet : Reading Comprehension with Multiple Choice Questions using Generation and Selection model | | 0 |
| Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension | | 0 |
| FQuAD: French Question Answering Dataset | | 0 |
| ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning | Code | 1 |
| Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus | Code | 1 |
| Retrospective Reader for Machine Reading Comprehension | Code | 1 |
| DUMA: Reading Comprehension with Transposition Thinking | Code | 1 |
| A Study of the Tasks and Models in Machine Reading Comprehension | | 0 |
| Enhancing lexical-based approach with external knowledge for Vietnamese multiple-choice machine reading comprehension | | 0 |
| A BERT based Sentiment Analysis and Key Entity Detection Approach for Online Financial Texts | | 0 |
| A Survey on Machine Reading Comprehension Systems | | 0 |
| Dual Multi-head Co-attention for Multi-choice Reading Comprehension | | 0 |
| ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension | | 0 |
| CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension | | 0 |
| An End-to-End Dialogue State Tracking System with Machine Reading Comprehension and Wide & Deep Classification | | 0 |
| Label Dependent Deep Variational Paraphrase Generation | | 0 |
| Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets | | 0 |
| Robust Reading Comprehension with Linguistic Constraints via Posterior Regularization | | 0 |
| Improving Machine Reading Comprehension via Adversarial Training | | 0 |
| An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension | | 0 |
| Ask to Learn: A Study on Curiosity-driven Question Generation | | 0 |
| Dice Loss for Data-imbalanced NLP Tasks | Code | 0 |
| Coreference Resolution as Query-based Span Prediction | Code | 1 |
| Machine Reading Comprehension Using Structural Knowledge Graph-aware Network | | 0 |
| Relation Module for Non-Answerable Predictions on Reading Comprehension | | 0 |
| Improving Pre-Trained Multilingual Model with Vocabulary Expansion | | 0 |
| Pingan Smart Health and SJTU at COIN - Shared Task: utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks | | 0 |
| BLCU-NLP at COIN-Shared Task1: Stagewise Fine-tuning BERT for Commonsense Inference in Everyday Narrations | | 0 |
| CALOR-QUEST : generating a training corpus for Machine Reading Comprehension models from shallow semantic annotations | | 0 |
| On Making Reading Comprehension More Comprehensive | | 0 |
| Cross-Task Knowledge Transfer for Query-Based Text Summarization | | 0 |
| D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension | Code | 0 |
| Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension | | 0 |
| Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior | | 0 |
| A Unified MRC Framework for Named Entity Recognition | Code | 1 |
| Relation Module for Non-answerable Prediction on Question Answering | | 0 |
| Why can't memory networks read effectively? | | 0 |
| NumNet: Machine Reading Comprehension with Numerical Reasoning | Code | 0 |
| BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels | Code | 0 |
| AntMan: Sparse Low-Rank Compression to Accelerate RNN inference | | 0 |
| Multilingual Machine Reading Comprehension based on BERT Model (original title in Chinese: 基於BERT模型之多國語言機器閱讀理解研究) | | 0 |
| MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension | Code | 0 |
| Integrated Triaging for Fast Reading Comprehension | | 0 |
| Improving Pre-Trained Multilingual Models with Vocabulary Expansion | | 0 |
Page 9 of 12

No leaderboard results yet.