
Negotiated Representations to Prevent Forgetting in Machine Learning Applications

2023-11-30 · Code Available

Nuri Korhan, Ceren Öner


Abstract

Catastrophic forgetting is a significant challenge in machine learning, particularly in neural networks. When a neural network learns to perform well on a new task, it often forgets previously acquired knowledge. This happens because the network adjusts its weights and connections to minimize the loss on the new task, which can inadvertently overwrite or disrupt the representations that were crucial for earlier tasks. As a result, the network's performance on earlier tasks deteriorates, limiting its ability to learn from a sequence of tasks. In this paper, we propose a novel method for preventing catastrophic forgetting in machine learning applications, specifically in neural networks. Our approach aims to preserve the network's knowledge across multiple tasks while still allowing it to learn new information effectively. We demonstrate the effectiveness of our method through experiments on several benchmark datasets: Split MNIST, Split CIFAR10, Split Fashion MNIST, and Split CIFAR100. These datasets are created by dividing the original datasets into separate, non-overlapping tasks, simulating a continual learning scenario in which the model must learn multiple tasks sequentially without forgetting the previous ones. Our proposed method tackles catastrophic forgetting by incorporating negotiated representations into the learning process, allowing the model to balance retaining past experiences against adapting to new tasks. By evaluating our method on these challenging datasets, we aim to showcase its potential for addressing catastrophic forgetting and improving the performance of neural networks in continual learning settings.
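The split benchmarks the abstract describes are built by partitioning a dataset's class labels into non-overlapping groups, one group per task. A minimal sketch of that construction is below; it uses synthetic labels in place of real MNIST data, and the helper name `make_split_tasks` is illustrative, not from the paper.

```python
# Sketch: building "Split MNIST"-style tasks by partitioning class labels
# into non-overlapping groups (5 tasks x 2 classes for a 10-class dataset).
# Labels here are synthetic stand-ins; in practice they would come from MNIST.

def make_split_tasks(labels, classes_per_task=2):
    """Group example indices into sequential tasks by class label."""
    classes = sorted(set(labels))
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = set(classes[start:start + classes_per_task])
        indices = [i for i, y in enumerate(labels) if y in task_classes]
        tasks.append({"classes": sorted(task_classes), "indices": indices})
    return tasks

labels = [i % 10 for i in range(1000)]  # stand-in for MNIST labels (10 classes)
tasks = make_split_tasks(labels)
print(len(tasks))            # 5 tasks
print(tasks[0]["classes"])   # [0, 1]
```

In a continual-learning run, the model would then be trained on `tasks[0]` through `tasks[4]` in order, with no access to earlier tasks' data, which is what makes forgetting measurable.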

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Split CIFAR-10 | Model with negotiation paradigm | Average accuracy (%), 5 tasks | 46.5 | | Unverified |
| Split CIFAR-100 | Model with negotiation paradigm | Average accuracy (%), 5 tasks | 34.9 | | Unverified |
| Split Fashion MNIST | Model with negotiation paradigm | Average accuracy (%), 5 tasks | 54.8 | | Unverified |
| Split MNIST | Model with negotiation paradigm | Average accuracy (%), 5 tasks | 82.3 | | Unverified |

Reproductions