GLinear
GLinear is a deliberately simple model: it contains no complex components such as self-attention mechanisms or positional encoding blocks. It integrates two components: (1) a non-linear GeLU-based transformation layer to capture intricate patterns, and (2) Reversible Instance Normalization (RevIN).
- Due to its simple architecture, the model trains much faster than Transformer-based predictors.
- Despite its simplicity, it delivers performance comparable to state-of-the-art predictors.
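The two components above can be sketched as a single forward pass: RevIN-style per-instance normalization, a GeLU-based non-linear transformation, a linear projection to the forecast horizon, and denormalization. This is a minimal NumPy sketch under assumed shapes and layer sizes (the function and variable names are illustrative, not from the original work):

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GeLU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def glinear_forward(x, w1, b1, w2, b2, eps=1e-5):
    """Hypothetical GLinear-style forward pass for one channel.

    x: (batch, lookback) array of input series.
    1. RevIN-style normalization with per-instance mean/std.
    2. GeLU-based non-linear transformation layer.
    3. Linear projection to the forecast horizon.
    4. RevIN denormalization (restore instance statistics).
    """
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True) + eps
    xn = (x - mu) / sigma            # normalize each instance
    h = gelu(xn @ w1 + b1)           # non-linear GeLU layer
    yn = h @ w2 + b2                 # project to the horizon
    return yn * sigma + mu           # denormalize the forecast

rng = np.random.default_rng(0)
lookback, horizon, hidden = 8, 4, 16
x = rng.normal(size=(2, lookback))
w1 = rng.normal(scale=0.1, size=(lookback, hidden)); b1 = np.zeros(hidden)
w2 = rng.normal(scale=0.1, size=(hidden, horizon)); b2 = np.zeros(horizon)
y = glinear_forward(x, w1, b1, w2, b2)
print(y.shape)  # (2, 4): one horizon-length forecast per batch instance
```

Because the only trainable parts are two small weight matrices, the parameter count (and hence training time) stays far below that of a Transformer-based predictor with the same lookback window.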