
Deep Transfer Learning: Model Framework and Error Analysis

2024-10-12

Yuling Jiao, Huazhen Lin, Yuchen Luo, Jerry Zhijian Yang


Abstract

This paper presents a framework for deep transfer learning that aims to leverage information from multi-domain upstream data with a large number of samples n for a single-domain downstream task with a considerably smaller number of samples m, where m ≪ n, in order to enhance performance on the downstream task. Our framework offers several intriguing features. First, it allows the existence of both shared and domain-specific features across the multi-domain data and provides a mechanism for their automatic identification, achieving precise transfer and utilization of information. Second, the framework explicitly identifies the upstream features that contribute to downstream tasks, establishing clear relationships between upstream domains and downstream tasks and thereby enhancing interpretability. Our error analysis shows that the framework can significantly improve the convergence rate for learning Lipschitz functions in downstream supervised tasks, reducing it from O(m^{-1/(2(d+2))} + n^{-1/(2(d+2))}) ("no transfer") to O(m^{-1/(2(d*+3))} + n^{-1/(2(d+2))}) ("partial transfer"), and even to O(m^{-1/2} + n^{-1/(2(d+2))}) ("complete transfer"), where d* ≪ d and d is the dimension of the observed data. Our theoretical findings are supported by empirical experiments on image classification and regression datasets.
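To make the rate comparison concrete, the following sketch plugs assumed sample sizes and dimensions (m, n, d, d* are illustrative values, not from the paper's experiments) into the three bounds stated above and prints them side by side:

```python
# Illustrative comparison of the abstract's three convergence rates.
# All numeric values below are assumptions chosen for illustration.
m, n = 1_000, 1_000_000   # downstream / upstream sample sizes, m << n
d, d_star = 32, 4         # observed dimension d and transferred dimension d* << d

no_transfer = m ** (-1 / (2 * (d + 2))) + n ** (-1 / (2 * (d + 2)))
partial     = m ** (-1 / (2 * (d_star + 3))) + n ** (-1 / (2 * (d + 2)))
complete    = m ** (-1 / 2) + n ** (-1 / (2 * (d + 2)))

print(f"no transfer:       {no_transfer:.3f}")
print(f"partial transfer:  {partial:.3f}")
print(f"complete transfer: {complete:.3f}")
```

For these values the bounds shrink monotonically from no transfer to partial to complete transfer, reflecting how replacing the exponent 1/(2(d+2)) on the m-term with 1/(2(d*+3)), and ultimately 1/2, tames the curse of dimensionality on the small downstream sample.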
