
ModulOM: Disseminating Deep Learning Research with Modular Output Mathematics

2021-03-18 · ICLR 2021 Workshop: Rethinking ML Papers · Code Available

Maxime Istasse, Kim Mens, Christophe De Vleeschouwer


Abstract

Solving a task with a deep neural network requires an appropriate formulation of the underlying inference problem. A formulation defines the type of variables output by the network, but also the set of variables and functions, denoted output mathematics, needed to turn those outputs into task-relevant predictions. Although task performance may depend heavily on the formulation, most deep learning experiment repositories do not offer a convenient way to explore formulation variants in a flexible and incremental manner. Software components for neural network creation, parameter optimization, or data augmentation, in contrast, offer a degree of modularity that has proved to facilitate the transfer of know-how associated with model development. This is not the case for output mathematics. Our paper proposes to address this limitation by embedding the output mathematics in a modular component as well, building on multiple inheritance principles in object-oriented programming. The flexibility offered by the proposed component and its added value in terms of knowledge dissemination are demonstrated in the context of the Panoptic-Deeplab method, a representative computer vision use case.
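The idea of packaging output mathematics as a modular, inheritance-based component can be sketched as follows. This is an illustrative toy, not the authors' ModulOM API: the class names, the dictionary of raw outputs, and the semantic/instance split (loosely inspired by Panoptic-Deeplab's semantic and center heads) are all assumptions made for the example.

```python
# Illustrative sketch (NOT the authors' ModulOM implementation):
# output mathematics expressed as mixin classes combined through
# multiple inheritance, so formulation variants can be composed
# and overridden incrementally.

class OutputMathematics:
    """Base component: holds raw network outputs."""
    def __init__(self, raw_outputs):
        self.raw = raw_outputs

class SemanticMixin(OutputMathematics):
    """Math turning per-pixel class scores into semantic labels."""
    def semantic_labels(self):
        # argmax over the class scores of each pixel
        return [max(range(len(scores)), key=scores.__getitem__)
                for scores in self.raw["class_scores"]]

class CenterMixin(OutputMathematics):
    """Math turning a center heatmap into instance center indices."""
    def instance_centers(self, threshold=0.5):
        # keep positions whose heatmap value exceeds the threshold
        return [i for i, v in enumerate(self.raw["center_heatmap"])
                if v > threshold]

class PanopticFormulation(SemanticMixin, CenterMixin):
    """One formulation variant: composes semantic and instance math."""
    def panoptic(self):
        return {"labels": self.semantic_labels(),
                "centers": self.instance_centers()}

class StrictPanopticFormulation(PanopticFormulation):
    """A variant explored incrementally: only the threshold changes."""
    def instance_centers(self, threshold=0.8):
        return super().instance_centers(threshold)
```

A new formulation is obtained by subclassing and overriding only the piece of math that changes (here, the center threshold), while the rest of the pipeline is inherited unchanged; this is the kind of incremental exploration the modular component is meant to support.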
