
[Re] Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention

2020-12-06

Maja Schneider, Marco Körner

Abstract

Scope of Reproducibility

The evaluated paper presents a method to classify crop types from multispectral satellite image time series using a newly developed pixel-set encoder and an adaptation of the Transformer [2] called the temporal attention encoder.

Methodology

To assess both the architecture and the performance of the approach, we first attempted to implement the method from scratch, then studied the authorsʼ openly provided code. Additionally, we compiled an alternative dataset similar to the one presented in the paper and evaluated the method on it.

Results

During the study, we were initially unable to reproduce the method because we misinterpreted the authorsʼ adaptation of the Transformer [2]. The publicly available implementation, however, answered our questions and proved its validity in our experiments on different datasets. We also compared the paperʼs temporal attention encoder to our own adaptation of it, which we arrived at while trying to reimplement and understand the authorsʼ ideas.

What was easy

Running the provided code and obtaining the presented dataset was straightforward. Even adapting the method to our own ideas caused no issues, thanks to a well-documented and clear implementation.

What was difficult

Reimplementing the approach from scratch was harder than expected, mainly because we had a particular type of architecture in mind that did not match the layer dimensions stated in the paper. Knowing exactly how the dataset was assembled would also have been beneficial, as we tried to retrace these steps, and would have made the results on our dataset easier to compare with those from the paper.

Communication with original authors

While working on the challenge, we were in e-mail contact with the first and second author, held two online meetings, and received feedback on our implementation on GitHub. Additionally, one of the authors of the Transformer paper [2] answered further questions about their modelʼs architecture.
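Since the abstract only names the two building blocks, a minimal NumPy sketch of the underlying ideas may help: a pixel-set encoder samples a random subset of a parcelʼs pixels, embeds each with a shared map, and pools with order-invariant statistics, while the temporal attention encoder applies self-attention across acquisition dates. All function names, dimensions, and weights below are illustrative assumptions, not the authorsʼ implementation (which uses learned parameters and a multi-head design).

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_set_encode(parcel, n_sample=16):
    """Sketch of a pixel-set encoder: sample a random pixel subset,
    embed each pixel with a shared map, pool permutation-invariantly."""
    # parcel: (n_pixels, n_channels) multispectral reflectances
    n_pixels, n_channels = parcel.shape
    idx = rng.choice(n_pixels, size=n_sample, replace=n_pixels < n_sample)
    sampled = parcel[idx]                       # (n_sample, n_channels)
    W = rng.standard_normal((n_channels, 8))    # illustrative, not learned
    h = np.tanh(sampled @ W)                    # shared per-pixel embedding
    # order-invariant pooling: concatenate mean and std over the set
    return np.concatenate([h.mean(axis=0), h.std(axis=0)])  # (16,)

def temporal_self_attention(X):
    """Sketch of single-head scaled dot-product self-attention
    over the time axis of a sequence of parcel embeddings."""
    # X: (T, d) embeddings for T acquisition dates
    T, d = X.shape
    Wq = rng.standard_normal((d, d))            # illustrative projections
    Wk = rng.standard_normal((d, d))
    Wv = rng.standard_normal((d, d))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)           # row-wise softmax
    return A @ V                                # (T, d)

# usage: 5 acquisition dates of a parcel with 120 pixels, 10 bands
parcel = rng.random((120, 10))
seq = np.stack([pixel_set_encode(parcel) for _ in range(5)])  # (5, 16)
out = temporal_self_attention(seq)                            # (5, 16)
```

In the actual architecture the per-pixel map is an MLP with learned weights, the attention uses positional encodings of the acquisition dates, and the attended sequence is pooled into a single parcel descriptor; this sketch only shows the data flow.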
