Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks
Sebastian Palacio, Philipp Engler, Jörn Hees, Andreas Dengel
Code
- github.com/Engler93/Self-Supervised-Autogenous-Learning (Official, PyTorch)
Abstract
Classification problems solved with deep neural networks (DNNs) typically rely on a closed-world paradigm and optimize over a single objective (e.g., minimization of the cross-entropy loss). This setup dismisses all kinds of supporting signals that can be used to reinforce the existence or absence of a particular pattern. The increasing need for models that are interpretable by design makes the inclusion of such contextual signals a necessity. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL) models. An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors into the optimization process (e.g., grouping). The ability to use SSAL branches during inference allows models to converge faster while focusing on a richer set of class-relevant features. We show that SSAL models consistently outperform the state of the art while also providing structured predictions that are more interpretable.
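To make the SSAL idea from the abstract concrete, below is a minimal PyTorch sketch (the official repository is PyTorch) of a backbone with one auxiliary branch that predicts a coarse class grouping derived from the fine labels. The branch placement, the grouping rule, and names such as `group_map` and `ssal_weight` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal SSAL-style sketch: one auxiliary head trained on a coarse grouping
# that is derived from the fine labels (no extra annotation required).
# Assumptions: branch placement, grouping rule, and loss weight are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class SSALResNet(nn.Module):
    """ResNet-50 backbone with a fine-grained head (original task) and an
    auxiliary SSAL head predicting a coarse grouping of the classes."""
    def __init__(self, num_classes=100, num_groups=20):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features
        self.fine_head = nn.Linear(feat_dim, num_classes)   # original objective
        self.group_head = nn.Linear(feat_dim, num_groups)   # SSAL branch

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.fine_head(f), self.group_head(f)

def ssal_loss(fine_logits, group_logits, fine_labels, group_map, ssal_weight=0.5):
    """Joint loss: standard cross-entropy plus the auxiliary grouping objective.
    `group_map[c]` maps fine class c to its coarse group (derived, not annotated)."""
    group_labels = group_map[fine_labels]
    return (F.cross_entropy(fine_logits, fine_labels)
            + ssal_weight * F.cross_entropy(group_logits, group_labels))

# Usage with a toy grouping of 5 fine classes per coarse group.
model = SSALResNet(num_classes=100, num_groups=20)
group_map = torch.arange(100) // 5
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 100, (4,))
fine_logits, group_logits = model(x)
loss = ssal_loss(fine_logits, group_logits, y, group_map)
loss.backward()
```

Because the auxiliary targets are computed directly from the existing fine labels, the branch is self-supervised in the paper's "autogenous" sense; the group logits also remain available at inference time, where they can contextualize and structure the fine-grained prediction.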
Tasks
Image Classification
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-100 | SSAL-DenseNet 190-40 | Accuracy (%) | 83.2 | — | Unverified |
| ImageNet | SSAL-ResNet50 | Top-1 Accuracy (%) | 77 | — | Unverified |