
Navigating the Trade-Off between Learning Efficacy and Processing Efficiency in Deep Neural Networks

2021-01-01

Anonymous


Abstract

A number of training protocols in machine learning seek to enhance learning efficacy by training a single agent on multiple tasks in sequence. Sequential acquisition exploits the discovery of common structure between tasks, in the form of shared representations, to improve learning speed and generalization. The learning of shared representations, however, is known to impair the execution of multiple tasks in parallel. Parallel execution of tasks yields higher processing efficiency and is promoted by separating representations between tasks to avoid processing interference. Here, we build on previous work involving shallow networks and simple task settings suggesting that there is a trade-off between learning efficacy and processing efficiency, mediated by the use of shared versus separated representations. We show that the same tension arises in deep networks and discuss a meta-learning algorithm that allows an agent to manage this trade-off in an unfamiliar environment. We demonstrate through a series of experiments that the agent successfully optimizes its training strategy as a function of the environment.
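The core tension described above can be illustrated with a toy linear network (this is an illustrative sketch of the shared-vs-separated distinction, not the paper's actual model): when two tasks project onto the same hidden units, executing them in parallel lets one task's input leak into the other's output, whereas separated (block-diagonal) representations keep the two pathways independent. All weight values and dimensions below are arbitrary choices for the example.

```python
import numpy as np

# Two tasks, each with a 2-d input slice of a 4-d input vector.
# A task's output is a linear readout of the hidden layer.

def output_A(W_in, R_A, x):
    """Task A's output given input->hidden weights and A's readout."""
    return R_A @ (W_in @ x)

x_A_only = np.array([1.0, 2.0, 0.0, 0.0])  # task A input, task B silent
x_both = np.array([1.0, 2.0, 3.0, 4.0])    # both tasks active in parallel

# Separated representations: block-diagonal weights, each task
# owns its own pair of hidden units.
W_sep = np.eye(4)
R_A_sep = np.array([[1.0, 1.0, 0.0, 0.0]])  # A reads only its own units

# Shared representations: both tasks project onto the SAME two
# hidden units (this is what lets them reuse common structure).
W_shared = np.array([[1.0, 0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0, 1.0]])
R_A_shared = np.array([[1.0, 1.0]])

# Interference = change in task A's output when task B runs in parallel.
interf_sep = abs(output_A(W_sep, R_A_sep, x_both)
                 - output_A(W_sep, R_A_sep, x_A_only))[0]
interf_shared = abs(output_A(W_shared, R_A_shared, x_both)
                    - output_A(W_shared, R_A_shared, x_A_only))[0]

print(interf_sep)     # 0.0 -> separated units: no crosstalk
print(interf_shared)  # 7.0 -> task B's input leaks into task A's output
```

Shared weights let the tasks exploit common structure during sequential training, but the nonzero crosstalk term is exactly the processing interference that penalizes parallel execution; separated weights invert that trade.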
