SOTAVerified

Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks

2021-01-07 · Code Available

Sebastian Palacio, Philipp Engler, Jörn Hees, Andreas Dengel

Abstract

Classification problems solved with deep neural networks (DNNs) typically rely on a closed-world paradigm and optimize over a single objective (e.g., minimization of the cross-entropy loss). This setup dismisses all kinds of supporting signals that could be used to reinforce the existence or absence of a particular pattern. The increasing need for models that are interpretable by design makes the inclusion of such contextual signals a crucial necessity. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL) models. An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors on the optimization process (e.g., grouping). The ability to use SSAL branches during inference allows models to converge faster, focusing on a richer set of class-relevant features. We show that SSAL models consistently outperform the state of the art while also providing structured predictions that are more interpretable.
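The auxiliary-target idea from the abstract can be sketched in a few lines: coarser "group" labels are derived from the original fine-grained labels, and the model is trained jointly on both objectives in multi-task fashion. The grouping rule used here (five consecutive fine classes per group), the loss weight `alpha`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def derive_group_targets(fine_labels, group_size=5):
    """Derive a self-supervised auxiliary target by grouping fine classes.

    Hypothetical grouping rule: every `group_size` consecutive fine labels
    form one coarse group (e.g., 100 CIFAR-100 classes -> 20 groups).
    """
    return fine_labels // group_size

def cross_entropy(logits, targets):
    """Mean cross-entropy over a batch of raw (unnormalized) logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def ssal_loss(main_logits, aux_logits, fine_labels, alpha=0.5):
    """Joint objective: main classification loss plus a weighted
    loss on the SSAL grouping branch (`alpha` is an assumed weight)."""
    group_labels = derive_group_targets(fine_labels)
    return (cross_entropy(main_logits, fine_labels)
            + alpha * cross_entropy(aux_logits, group_labels))

# Toy batch: 4 samples, 100 fine classes, 20 derived groups.
rng = np.random.default_rng(0)
fine = np.array([3, 17, 42, 99])
main_logits = rng.normal(size=(4, 100))   # main classification head
aux_logits = rng.normal(size=(4, 20))     # SSAL grouping branch
print(derive_group_targets(fine))
print(ssal_loss(main_logits, aux_logits, fine))
```

Because the auxiliary branch has its own head and its own (derived) labels, it can also be queried at inference time, which is what allows the structured, group-then-class predictions the abstract refers to.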

Benchmark Results

Dataset   | Model                | Metric             | Claimed | Verified | Status
CIFAR-100 | SSAL-DenseNet 190-40 | Percentage correct | 83.2    | —        | Unverified
ImageNet  | SSAL-ResNet50        | Top 1 Accuracy     | 77      | —        | Unverified
