
Sharper Risk Bound for Multi-Task Learning with Multi-Graph Dependent Data

2025-02-25

Xiao Shao, Guoqiang Wu


Abstract

In multi-task learning (MTL) where each task involves graph-dependent data, existing generalization analyses yield a sub-optimal risk bound of O(1/√n), where n is the number of training samples per task. Improving this risk bound is technically challenging, owing to the lack of a foundational sharper concentration inequality for multi-graph dependent random variables. To fill this gap, this paper proposes a new Bennett-type inequality, enabling the derivation of a sharper risk bound of O(log n/n). Technically, building on the proposed Bennett-type inequality, we establish a new Talagrand-type inequality for the empirical process, and further develop a new analytical framework of local fractional Rademacher complexity to enhance generalization analyses in MTL with multi-graph dependent data. Finally, we apply these theoretical advancements to applications such as Macro-AUC optimization, illustrating the superiority of our theoretical results over prior work, which is also verified by experimental results.
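The abstract names Macro-AUC optimization as the application of the new bounds. As background, here is a minimal sketch of how Macro-AUC is computed for multi-label data: the AUC of each label is the fraction of correctly ranked positive–negative pairs (the Mann–Whitney U statistic), averaged over labels. The function names and the NumPy implementation are illustrative, not taken from the paper.

```python
import numpy as np

def auc_binary(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive-negative pairs ranked correctly (ties count half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan  # AUC undefined without both classes
    diff = pos[:, None] - neg[None, :]          # all pairwise score gaps
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))

def macro_auc(Y, S):
    """Macro-AUC for multi-label data.

    Y: (n, L) binary label matrix; S: (n, L) score matrix.
    Averages the per-label AUC over the L labels, skipping
    labels where AUC is undefined.
    """
    aucs = [auc_binary(Y[:, j], S[:, j]) for j in range(Y.shape[1])]
    return float(np.nanmean(aucs))
```

For example, with a perfectly ranked two-label toy dataset, `macro_auc` returns 1.0; reversing the scores would drive it toward 0.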
