SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension (MRC) is a key problem in Natural Language Understanding: given a text passage, a system must read and comprehend it, then answer questions about it.

Source: Making Neural Machine Reading Comprehension Faster

Papers

Showing 301–350 of 555 papers

Title | Status | Hype
To Test Machine Comprehension, Start by Defining Comprehension |  | 0
Towards AMR-BR: A SemBank for Brazilian Portuguese Language |  | 0
Towards Building a Robust Knowledge Intensive Question Answering Model with Large Language Models |  | 0
Towards Confident Machine Reading Comprehension |  | 0
Towards Inference-Oriented Reading Comprehension: ParallelQA |  | 0
Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text |  | 0
Towards Robust Neural Retrieval Models with Synthetic Pre-Training |  | 0
To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning. |  | 0
Transfer Learning Enhanced Single-choice Decision for Multi-choice Question Answering |  | 0
Trigger-free Event Detection via Derangement Reading Comprehension |  | 0
U3E: Unsupervised and Erasure-based Evidence Extraction for Machine Reading Comprehension |  | 0
Uncertainty-Based Adaptive Learning for Reading Comprehension |  | 0
Understanding Attention in Machine Reading Comprehension |  | 0
Unsupervised Domain Adaptation on Question-Answering System with Conversation Data |  | 0
Unsupervised Explanation Generation for Machine Reading Comprehension |  | 0
Unsupervised Open-Domain Question Answering |  | 0
Unsupervised Open-Domain Question Answering with Higher Answerability |  | 0
UQuAD1.0: Development of an Urdu Question Answering Training Data for Machine Reading Comprehension |  | 0
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models |  | 0
Using calibrator to improve robustness in Machine Reading Comprehension |  | 0
VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension |  | 0
View Dialogue in 2D: A Two-stream Model in Time-speaker Perspective for Dialogue Summarization and beyond |  | 0
ViQA-COVID: COVID-19 Machine Reading Comprehension Dataset for Vietnamese |  | 0
Visualizing attention zones in machine reading comprehension models |  | 0
Visual Question Answering as Reading Comprehension |  | 0
VLSP 2021 - ViMRC Challenge: Vietnamese Machine Reading Comprehension |  | 0
Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning |  | 0
未登錄詞之向量表示法模型於中文機器閱讀理解之應用 (An OOV Word Embedding Framework for Chinese Machine Reading Comprehension) |  | 0
What does BERT Learn from Arabic Machine Reading Comprehension Datasets? |  | 0
What If Sentence-hood is Hard to Define: A Case Study in Chinese Reading Comprehension |  | 0
What is Missing in Existing Multi-hop Datasets? Toward Deeper Multi-hop Reasoning Task |  | 0
What Makes Machine Reading Comprehension Questions Difficult? Investigating Variation in Passage Sources and Question Types |  | 0
Why can't memory networks read effectively? |  | 0
WikiPossessions: Possession Timeline Generation as an Evaluation Benchmark for Machine Reading Comprehension of Long Texts |  | 0
XCMRC: Evaluating Cross-lingual Machine Reading Comprehension |  | 0
XLMRQA: Open-Domain Question Answering on Vietnamese Wikipedia-based Textual Knowledge Source |  | 0
Yimmon at SemEval-2019 Task 9: Suggestion Mining with Hybrid Augmented Approaches |  | 0
YNU_AI1799 at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge of Different model ensemble |  | 0
Zero-Shot Estimation of Base Models' Weights in Ensemble of Machine Reading Comprehension Systems for Robust Generalization |  | 0
Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming |  | 0
Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability |  | 0
Evaluation of Dataset Selection for Pre-Training and Fine-Tuning Transformer Language Models for Clinical Question Answering |  | 0
Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation |  | 0
EveMRC: A Two-stage Evidence Modeling For Multi-choice Machine Reading Comprehension |  | 0
Event Detection via Derangement Reading Comprehension |  | 0
Event Extraction as Machine Reading Comprehension |  | 0
Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension |  | 0
Explicit Utilization of General Knowledge in Machine Reading Comprehension |  | 0
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks |  | 0
Feeding What You Need by Understanding What You Learned |  | 0
Page 7 of 12

No leaderboard results yet.