
CLMSM: A Multi-Task Learning Framework for Pre-training on Procedural Text

2023-10-22

Abhilash Nandy, Manav Nitin Kapadnis, Pawan Goyal, Niloy Ganguly

Code Available


Abstract

In this paper, we propose CLMSM, a domain-specific, continual pre-training framework that learns from a large set of procedural recipes. CLMSM uses a multi-task learning framework to optimize two objectives: (a) contrastive learning using hard triplets to learn fine-grained differences across entities in the procedures, and (b) a novel Mask-Step Modelling objective to learn the step-wise context of a procedure. We test the performance of CLMSM on the downstream tasks of tracking entities and aligning actions between two procedures on three datasets, one of which is an open-domain dataset not conforming to the pre-training data. We show that CLMSM not only outperforms baselines on recipes (in-domain) but also generalizes to open-domain procedural NLP tasks.
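The multi-task objective described in the abstract can be sketched as a weighted sum of a triplet contrastive loss and a masked-step cross-entropy loss. The sketch below is illustrative only: the embeddings, margin, loss weights, and function names are hypothetical stand-ins, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Contrastive objective with a hard triplet: pull the anchor embedding
    # toward the positive procedure and push it away from the hard negative.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def mask_step_loss(step_logits, target_ids):
    # Mask-Step-Modelling-style objective, rendered here as token-level
    # cross-entropy over the tokens of a masked step (illustrative stand-in).
    shifted = step_logits - step_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(target_ids)), target_ids]))

def clmsm_loss(anchor, pos, neg, step_logits, targets, w_cl=0.5, w_msm=0.5):
    # Multi-task objective: weighted sum of the two losses
    # (weights w_cl / w_msm are assumed, not taken from the paper).
    return (w_cl * triplet_loss(anchor, pos, neg)
            + w_msm * mask_step_loss(step_logits, targets))

rng = np.random.default_rng(0)
anchor, pos, neg = rng.normal(size=(3, 8))   # toy 8-dim embeddings
logits = rng.normal(size=(4, 10))            # 4 masked tokens, vocab of 10
targets = np.array([1, 3, 5, 7])
loss = clmsm_loss(anchor, pos, neg, logits, targets)
```

In practice both objectives would share one encoder and be optimized jointly over batches of procedures; the sketch only shows how the two loss terms combine.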
