SOTAVerified

Machine Reading Comprehension

Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.

Source: Making Neural Machine Reading Comprehension Faster
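To make the task definition concrete, here is a minimal sketch of an extractive reader, a toy baseline and not the method of any paper listed below: it scores each passage sentence by bag-of-words overlap with the question and returns the best match. The function name and example text are illustrative assumptions.

```python
import re

def toy_extractive_reader(passage: str, question: str) -> str:
    """Return the passage sentence with the highest word overlap with the question.

    A deliberately simple baseline: real MRC models (e.g. BERT-style span
    extractors) predict answer spans rather than whole sentences.
    """
    # Split the passage into sentences on end-of-sentence punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    # Bag of lowercased question words.
    question_words = set(re.findall(r"\w+", question.lower()))

    def overlap(sentence: str) -> int:
        # Count how many question words also appear in this sentence.
        return len(question_words & set(re.findall(r"\w+", sentence.lower())))

    return max(sentences, key=overlap)

if __name__ == "__main__":
    passage = ("Paris is the capital of France. "
               "Berlin is the capital of Germany.")
    print(toy_extractive_reader(passage, "What is the capital of France?"))
    # → Paris is the capital of France.
```

Lexical overlap is the weakest useful signal for this task; the benchmarks and models indexed below exist precisely because robust comprehension requires going beyond it.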

Papers

Showing 501–550 of 555 papers (page 11 of 12)

Each paper below is listed with status "Code" and a hype score of 0.

Review Conversational Reading Comprehension
Effective Subword Segmentation for Text Comprehension
Interactive Machine Comprehension with Information Seeking Agents
DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications
A Causal View of Entity Bias in (Large) Language Models
RoBIn: A Transformer-Based Model For Risk Of Bias Inference With Machine Reading Comprehension
Neural Arabic Question Answering
Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space
Interpreting Themes from Educational Stories
Text Understanding with the Attention Sum Reader Network
A Framework for Evaluation of Machine Reading Comprehension Gold Standards
Is the Understanding of Explicit Discourse Relations Required in Machine Reading Comprehension?
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications
JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions
Adversarial Self-Attention for Language Understanding
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking
CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension
The Impact of Cross-Lingual Adjustment of Contextual Word Representations on Zero-Shot Transfer
NumNet: Machine Reading Comprehension with Numerical Reasoning
Dual Ask-Answer Network for Machine Reading Comprehension
Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification
SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering
Adaptive loose optimization for robust question answering
DTW at Qur’an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain
Knowing-how & Knowing-that: A New Task for Machine Comprehension of User Manuals
Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs
Cheap and Good? Simple and Effective Data Augmentation for Low Resource Machine Reading
Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources
Automatic Task Requirements Writing Evaluation via Machine Reading Comprehension
Self Question-answering: Aspect-based Sentiment Analysis by Role Flipped Machine Reading Comprehension
VlogQA: Task, Dataset, and Baseline Models for Vietnamese Spoken-Based Machine Reading Comprehension
Large-scale Multi-granular Concept Extraction Based on Machine Reading Comprehension
Learning Semantic Sentence Embeddings using Sequential Pair-wise Discriminator
OPERA: Operation-Pivoted Discrete Reasoning over Text
Semantics Altering Modifications for Evaluating Comprehension in Machine Reading
From Multiple-Choice to Extractive QA: A Case Study for English and Arabic
Semantics-aware BERT for Language Understanding
Towards Efficient Methods in Medical Question Answering using Knowledge Graph Embeddings
Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking
ZeQR: Zero-shot Query Reformulation for Conversational Search
Lite Unified Modeling for Discriminative Reading Comprehension
DRCD: a Chinese Machine Reading Comprehension Dataset
A Span-Extraction Dataset for Chinese Machine Reading Comprehension
DoSEA: A Domain-specific Entity-aware Framework for Cross-Domain Named Entity Recognition
Document Modeling with External Attention for Sentence Extraction
Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation
An Understanding-Oriented Robust Machine Reading Comprehension Model
Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors

No leaderboard results yet.