GLinear

GLinear is a simple model: it contains no complex components, functions, or blocks (such as self-attention schemes or positional encoding). It integrates two components: (1) a non-linear GeLU-based transformation layer that captures intricate patterns, and (2) Reversible Instance Normalization (RevIN).

  1. Due to its simple architecture, the model trains much faster than transformer-based predictors.
  2. It delivers performance comparable to other state-of-the-art predictors.
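The two components above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the single linear projection, the layer sizes, and the class name `GLinearSketch` are assumptions chosen for brevity; the essential flow is RevIN normalization, a GeLU-activated linear map, then RevIN de-normalization.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GeLU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class GLinearSketch:
    """Sketch: RevIN wrapped around one GeLU-activated linear projection."""

    def __init__(self, seq_len, pred_len, seed=0):
        rng = np.random.default_rng(seed)
        # single linear projection from the input window to the forecast horizon
        self.W = rng.standard_normal((seq_len, pred_len)) * 0.01
        self.b = np.zeros(pred_len)

    def forward(self, x):
        # x: (batch, seq_len) univariate input windows
        mean = x.mean(axis=1, keepdims=True)
        std = x.std(axis=1, keepdims=True) + 1e-5
        xn = (x - mean) / std             # RevIN: normalize each instance
        h = gelu(xn @ self.W + self.b)    # non-linear GeLU-based transformation
        return h * std + mean             # RevIN: reverse (de-normalize) the output

x = np.arange(12, dtype=float).reshape(1, 12)
y = GLinearSketch(seq_len=12, pred_len=4).forward(x)
print(y.shape)  # (1, 4)
```

Because RevIN stores each window's mean and standard deviation and re-applies them on the way out, the forecast is returned on the original scale of the series, which is what makes the normalization "reversible".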
