Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning

2024-10-30

Yiqin Lv, Qi Wang, Dong Liang, Zheng Xie

Abstract

Meta learning is a promising paradigm in the era of large models, and task distributional robustness has become an indispensable consideration in real-world scenarios. Recent advances have examined the effectiveness of tail task risk minimization in improving fast adaptation robustness (wang2023simple). This work contributes further theoretical investigations and practical enhancements in the field. Specifically, we reduce the distributionally robust strategy to a max-min optimization problem, adopt the Stackelberg equilibrium as the solution concept, and estimate the convergence rate. In the presence of tail risk, we further derive the generalization bound, establish connections with estimated quantiles, and practically improve the studied strategy. Accordingly, extensive evaluations demonstrate the significance of our proposal and its scalability to multimodal large models in boosting robustness.
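To make the tail-risk idea concrete, here is a minimal sketch of a conditional value-at-risk (CVaR) style objective over a batch of per-task losses: the empirical quantile estimates the value-at-risk, and only tasks whose losses fall in the tail contribute to the objective. This is an illustrative reconstruction, not the paper's implementation; the function name `cvar_task_loss` and the hyperparameter `alpha` are assumptions for the example.

```python
import numpy as np

def cvar_task_loss(task_losses, alpha=0.7):
    """CVaR-style tail objective over a batch of per-task losses.

    Estimates the empirical alpha-quantile of the loss distribution
    (a value-at-risk estimate), then averages only the tail losses at
    or above it, so optimization focuses on the worst-performing tasks.
    Names and the default alpha are illustrative assumptions.
    """
    losses = np.asarray(task_losses, dtype=float)
    # Empirical alpha-quantile of the per-task losses (VaR estimate).
    var = np.quantile(losses, alpha)
    # Tail tasks: those whose loss reaches or exceeds the quantile.
    tail = losses[losses >= var]
    return tail.mean()

# A batch of ten hypothetical per-task losses; the three largest
# (1.5, 1.8, 2.0) dominate the resulting objective.
batch = [0.2, 0.3, 0.25, 1.5, 2.0, 0.4, 0.35, 1.8, 0.3, 0.28]
print(round(cvar_task_loss(batch, alpha=0.7), 3))  # → 1.767
```

Minimizing this objective instead of the mean loss is what pushes the learned initialization to remain robust on tail tasks rather than only on average-case tasks.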
