
Dropping Networks for Transfer Learning

2018-04-23

James O'Neill, Danushka Bollegala


Abstract

Many tasks in natural language understanding require learning relationships between two sequences, such as natural language inference, paraphrasing and entailment. These tasks are similar in nature, yet they are often modeled individually. Knowledge transfer can be effective for closely related tasks; however, transferring all knowledge, some of which is irrelevant to a target task, can lead to sub-optimal results due to negative transfer. Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method. The primary contribution of this paper is the combination of Dropout and Bagging for improved transferability in neural networks, referred to herein as Dropping. We present a straightforward yet novel approach for incorporating source Dropping Networks into a target task for few-shot learning that mitigates negative transfer. This is achieved with a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. We compare the proposed approach against hard and soft parameter sharing transfer methods in the few-shot setting, as well as against models fully trained on the target task in the standard supervised learning setup. The proposed adjustment improves transfer learning performance and yields results comparable to the current state of the art while using only a fraction of the data from the target task.
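
The abstract names two mechanisms: "Dropping" (an ensemble of dropout networks, each trained on a bootstrap sample, i.e. Dropout plus Bagging) and a source weight that decays according to the slope of a spline-smoothed error curve. Below is a minimal sketch of one plausible reading of both, assuming PyTorch and SciPy; the class and function names (`DroppingEnsemble`, `bootstrap_sample`, `source_decay`) and the exact decay rule are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of "Dropping" (Dropout + Bagging) and a spline-slope decay.
# Not the paper's implementation; details are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import UnivariateSpline


class DroppingEnsemble(nn.Module):
    """Bagged ensemble of dropout MLPs whose predictions are averaged."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_members=5, p_drop=0.5):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Dropout(p_drop),  # the Dropout half of "Dropping"
                nn.Linear(hidden_dim, out_dim),
            )
            for _ in range(n_members)
        )

    def forward(self, x):
        # The Bagging half: average the members' outputs.
        return torch.stack([m(x) for m in self.members], dim=0).mean(dim=0)


def bootstrap_sample(X, y):
    """Sample with replacement so each member trains on its own bootstrap set."""
    idx = torch.randint(0, X.size(0), (X.size(0),))
    return X[idx], y[idx]


def source_decay(errors, n_intervals=4):
    """One plausible reading of the spline rule: fit a smoothing spline to the
    error curve (needs at least 4 points for a cubic spline), then compare the
    average slope in the latest sub-interval to the first. As the curve
    flattens, the weight on the source ensemble decays toward zero."""
    steps = np.arange(len(errors), dtype=float)
    spline = UnivariateSpline(steps, np.asarray(errors, dtype=float))
    slopes = np.array_split(spline.derivative()(steps), n_intervals)
    first, last = np.mean(slopes[0]), np.mean(slopes[-1])
    return float(np.clip(last / (first - 1e-8), 0.0, 1.0))


# Illustrative blending on the target task:
#   w = source_decay(val_errors)
#   logits = w * source_ensemble(x) + (1 - w) * target_model(x)
```

The design intent under these assumptions: early in training the target error curve is steep, so the ratio of recent to initial slope is near one and the source ensemble dominates; as the target model converges the curve flattens, the ratio shrinks, and the source contribution decays, which is one way to limit negative transfer.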
