Structure-Preserving Transformers for Sequences of SPD Matrices
2023-09-14
Mathieu Seraphim, Alexis Lechervy, Florian Yger, Luc Brun, Olivier Etard
Code
- github.com/mathieuseraphim/spdtransnet (official, in paper; PyTorch; ★ 36)
- github.com/MathieuSeraphim/SPDTransNet_plus (PyTorch; ★ 7)
Abstract
In recent years, Transformer-based self-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types, from texts to images and beyond, including data from non-Euclidean geometries. In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite (SPD) matrices while preserving their Riemannian geometry throughout the analysis. We apply our method to automatic sleep staging on time series of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance.
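The SPDTransNet pipeline itself is not reproduced here; as a rough sketch of the kind of input the abstract describes, the snippet below builds per-epoch EEG covariance matrices and maps them to tangent-space vectors via the matrix logarithm (a log-Euclidean mapping), one common structure-respecting way to tokenize SPD data for a Transformer. The function names, regularization constant, and toy data are hypothetical and are not taken from the authors' code.

```python
# Hedged sketch (not the authors' implementation): SPD covariance tokens from EEG epochs.
import numpy as np
from scipy.linalg import logm

def epoch_covariances(epochs, eps=1e-6):
    """epochs: (n_epochs, n_channels, n_samples) EEG array -> (n_epochs, C, C) SPD matrices."""
    covs = []
    for x in epochs:
        c = np.cov(x)                     # channel-by-channel covariance of one epoch
        c += eps * np.eye(c.shape[0])     # small ridge to guarantee positive definiteness
        covs.append(c)
    return np.stack(covs)

def log_euclidean_vectors(covs):
    """Map each SPD matrix through the matrix logarithm and vectorize its upper triangle."""
    n = covs.shape[-1]
    iu = np.triu_indices(n)
    return np.stack([logm(c).real[iu] for c in covs])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((30, 8, 256))  # toy data: 30 epochs, 8 channels, 256 samples
    tokens = log_euclidean_vectors(epoch_covariances(eeg))
    print(tokens.shape)                      # (30, 36): a sequence of SPD-derived tokens
```

The log-Euclidean map is only one possible choice; the paper's structure-preserving attention operates on the SPD matrices themselves rather than on flattened tangent vectors.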
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MASS SS3 | SPDTransNet | Macro-F1 | 0.81 | — | Unverified |