SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.

Source: Making Neural Machine Reading Comprehension Faster
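To make the task framing concrete, here is a toy extractive baseline (an illustrative sketch only, not the method of any paper listed below): it scores each sentence of the passage by word overlap with the question and returns the best-matching sentence as the answer. Neural MRC models instead predict answer-span start/end token positions over the passage, but the input/output contract is the same.

```python
# Toy extractive reading-comprehension baseline (illustrative sketch only):
# score each sentence of the passage by its word overlap with the question
# and return the best-matching sentence as the "answer".
import re

STOPWORDS = {"who", "what", "when", "where", "is", "the", "a", "an", "of"}

def tokens(text: str) -> set:
    """Lowercased word tokens of a text."""
    return set(re.findall(r"\w+", text.lower()))

def answer(passage: str, question: str) -> str:
    """Return the passage sentence sharing the most content words with the question."""
    q_words = tokens(question) - STOPWORDS
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    return max(sentences, key=lambda s: len(q_words & tokens(s)))

passage = ("SQuAD is a reading comprehension dataset. "
           "SQuAD was created by researchers at Stanford University.")
print(answer(passage, "Who created SQuAD?"))
# prints: SQuAD was created by researchers at Stanford University.
```

A lexical-overlap heuristic like this is easily fooled by distractor sentences, which is exactly the robustness gap (adversarial data, unanswerable questions, reasoning shortcuts) that many of the papers below address.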

Papers

Showing 151–200 of 555 papers

Title | Status | Hype
Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension | Code | 0
Natural Response Generation for Chinese Reading Comprehension | Code | 0
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models | - | 0
KILDST: Effective Knowledge-Integrated Learning for Dialogue State Tracking using Gazetteer and Speaker Information | - | 0
Integrating Semantic Information into Sketchy Reading Module of Retro-Reader for Vietnamese Machine Reading Comprehension | - | 0
Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension | - | 0
Rethinking Label Smoothing on Multi-hop Question Answering | Code | 0
Medical Knowledge Graph QA for Drug-Drug Interaction Prediction based on Multi-hop Machine Reading Comprehension | - | 0
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader | Code | 0
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches | - | 0
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Datasets and Metrics | - | 0
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks | - | 0
IDK-MRC: Unanswerable Questions for Indonesian Machine Reading Comprehension | Code | 0
Rethinking Annotation: Can Language Learners Contribute? | - | 0
CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking | - | 0
U3E: Unsupervised and Erasure-based Evidence Extraction for Machine Reading Comprehension | - | 0
Modular Approach to Machine Reading Comprehension: Mixture of Task-Aware Experts | - | 0
To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning | - | 0
Research on Machine Reading Comprehension Based on Shared Structure Information between Naming and Telling | - | 0
View Dialogue in 2D: A Two-stream Model in Time-speaker Perspective for Dialogue Summarization and beyond | - | 0
DoSEA: A Domain-specific Entity-aware Framework for Cross-Domain Named Entity Recognition | Code | 0
Document-level Event Factuality Identification via Machine Reading Comprehension Frameworks with Transfer Learning | - | 0
DIFM: An Effective Deep Interaction and Fusion Model for Sentence Matching | - | 0
Aspect-based Sentiment Analysis as Machine Reading Comprehension | - | 0
Machine Reading Comprehension Data Augmentation for Sentence Selection Based on Similarity | - | 0
Robust Domain Adaptation for Machine Reading Comprehension | - | 0
ET5: A Novel End-to-end Framework for Conversational Machine Reading Comprehension | Code | 0
A Survey on Measuring and Mitigating Reasoning Shortcuts in Machine Reading Comprehension | - | 0
Unsupervised Domain Adaptation on Question-Answering System with Conversation Data | - | 0
Large-scale Multi-granular Concept Extraction Based on Machine Reading Comprehension | Code | 0
Trigger-free Event Detection via Derangement Reading Comprehension | - | 0
Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension | - | 0
Continual Machine Reading Comprehension via Uncertainty-aware Fixed Memory and Adversarial Domain Adaptation | - | 0
Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking | Code | 0
To Answer or Not to Answer? Improving Machine Reading Comprehension Model with Span-based Contrastive Learning | - | 0
MRCLens: an MRC Dataset Bias Detection Toolkit | - | 0
Exploiting Word Semantics to Enrich Character Representations of Chinese Pre-trained Models | Code | 0
An Understanding-Oriented Robust Machine Reading Comprehension Model | Code | 0
JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions | Code | 0
OPERA: Operation-Pivoted Discrete Reasoning over Text | Code | 0
Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop | - | 0
Contextual embedding and model weighting by fusing domain knowledge on Biomedical Question Answering | Code | 0
Adversarial Self-Attention for Language Understanding | Code | 0
GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions | - | 0
DTW at Qur’an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain | Code | 0
HRCA+: Advanced Multiple-choice Machine Reading Comprehension Method | - | 0
Automatic Word Segmentation and Part-of-Speech Tagging of Ancient Chinese Based on BERT Model | - | 0
Detecting Causes of Stock Price Rise and Decline by Machine Reading Comprehension with BERT | - | 0
Qur’an QA 2022: Overview of The First Shared Task on Question Answering over the Holy Qur’an | - | 0
NER-MQMRC: Formulating Named Entity Recognition as Multi Question Machine Reading Comprehension | - | 0
Page 4 of 12

No leaderboard results yet.