SOTAVerified

Improving Transformers using Faithful Positional Encoding

2024-05-15

Tsuyoshi Idé, Jokin Labaien, Pin-Yu Chen


Abstract

We propose a new positional encoding method for the Transformer neural network architecture. Unlike the standard sinusoidal positional encoding, our approach rests on solid mathematical grounds and guarantees that no information about the positional order of the input sequence is lost. We show that the new encoding systematically improves prediction performance on time-series classification tasks.
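For context, the standard sinusoidal positional encoding that the paper compares against can be sketched as follows. This illustrates only the baseline from "Attention Is All You Need"; the paper's proposed faithful encoding is not reproduced here, and the function name and parameters below are illustrative choices.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding (the baseline, not the paper's method).

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, np.newaxis]                    # (seq_len, 1)
    div_terms = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions * div_terms)   # even dimensions: sine
    pe[:, 1::2] = np.cos(positions * div_terms)   # odd dimensions: cosine
    return pe

pe = sinusoidal_positional_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16)
```

Each position gets a unique pattern of sinusoids at geometrically spaced frequencies; the paper argues this scheme can lose positional-order information, which its proposed encoding avoids by construction.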
