SOTAVerified

Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context

2023-12-11

Xiang Cheng, Yuxin Chen, Suvrit Sra


Abstract

Many neural network architectures are known to be Turing complete and can thus, in principle, implement arbitrary algorithms. However, Transformers are unique in that they can implement gradient-based learning algorithms under simple parameter configurations. This paper provides theoretical and empirical evidence that (non-linear) Transformers naturally learn to implement gradient descent in function space, which in turn enables them to learn non-linear functions in context. Our results apply to a broad class of combinations of non-linear architectures and non-linear in-context learning tasks. Additionally, we show that the optimal choice of non-linear activation depends in a natural way on the class of functions to be learned.
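To make "gradient descent in function space" concrete, here is a minimal, hypothetical sketch (not taken from the paper) of kernel functional gradient descent on the squared loss for an in-context regression task: the predictor is updated as f ← f + η Σᵢ (yᵢ − f(xᵢ)) K(xᵢ, ·) over the context examples, then evaluated at a query point. The RBF kernel, step size, and toy target are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of A and rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def functional_gd_predict(X_ctx, y_ctx, X_query, steps=500, eta=0.01):
    # Kernel (functional) gradient descent on the context set.
    # In dual coordinates f(.) = sum_i alpha_i K(x_i, .), so the
    # functional update f += eta * sum_i residual_i K(x_i, .) becomes
    # a simple update of the coefficients alpha.
    K_cc = rbf_kernel(X_ctx, X_ctx)      # context-context Gram matrix
    K_qc = rbf_kernel(X_query, X_ctx)    # query-context kernel values
    alpha = np.zeros(len(X_ctx))
    for _ in range(steps):
        residual = y_ctx - K_cc @ alpha  # y_i - f_t(x_i)
        alpha += eta * residual          # gradient step in function space
    return K_qc @ alpha

# Toy non-linear in-context task (illustrative): y = sin(3x).
rng = np.random.default_rng(0)
X_ctx = rng.uniform(-1, 1, size=(32, 1))
y_ctx = np.sin(3 * X_ctx[:, 0])
X_query = np.array([[0.25]])
pred = functional_gd_predict(X_ctx, y_ctx, X_query)
# pred should approach sin(3 * 0.25) as the number of steps grows.
```

The point of the sketch is that a fixed, simple iterative rule applied to the context examples suffices to fit a non-linear function; the paper's claim is that trained Transformers implement updates of this functional-gradient form inside their forward pass.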
