Subject-Aware Contrastive Learning for Biosignals

2020-06-30 · Code Available

Joseph Y. Cheng, Hanlin Goh, Kaan Dogrusoz, Oncel Tuzel, Erdrin Azemi


Abstract

Datasets for biosignals, such as electroencephalogram (EEG) and electrocardiogram (ECG), often have noisy labels and a limited number of subjects (<100). To handle these challenges, we propose a self-supervised approach based on contrastive learning to model biosignals with a reduced reliance on labeled data and with fewer subjects. In this regime of limited labels and subjects, inter-subject variability negatively impacts model performance. Thus, we introduce subject-aware learning through (1) a subject-specific contrastive loss, and (2) adversarial training to promote subject-invariance during the self-supervised learning. We also develop a number of time-series data augmentation techniques to be used with the contrastive loss for biosignals. Our method is evaluated on publicly available datasets of two different biosignals with different tasks: EEG decoding and ECG anomaly detection. The embeddings learned using self-supervision yield competitive classification results compared to entirely supervised methods. We show that subject-invariance improves representation quality for these tasks, and observe that the subject-specific loss increases performance when fine-tuning with supervised labels.
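The abstract's core ingredients, time-series augmentations and a contrastive loss whose negatives can be restricted by subject, can be sketched in a few lines. The sketch below is illustrative only: the function names, augmentation parameters, and NumPy implementation are assumptions, not the paper's code. The loss is a standard NT-Xent over augmented pairs; passing a `subjects` array restricts negatives to windows from the same subject, mimicking the subject-specific variant.

```python
import numpy as np

def jitter(x, sigma=0.05, rng=None):
    """Add Gaussian noise to a (channels, time) signal (assumed augmentation)."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Randomly scale each channel's amplitude."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma, size=(x.shape[0], 1))

def time_shift(x, max_shift=20, rng=None):
    """Circularly shift the signal along the time axis."""
    rng = rng or np.random.default_rng()
    s = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(x, s, axis=-1)

def nt_xent(z1, z2, subjects=None, temperature=0.5):
    """NT-Xent contrastive loss over a batch of embedding pairs.

    z1[i] and z2[i] are embeddings of two augmented views of window i.
    If `subjects` (one id per window) is given, negatives are drawn only
    from windows of the same subject -- a sketch of subject-specific SSL.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    if subjects is not None:
        subj = np.concatenate([subjects, subjects])
        sim[subj[:, None] != subj[None, :]] = -np.inf  # drop cross-subject pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # matching view
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

The subject-invariant variant would instead add an adversarial subject classifier on the embeddings, which this sketch omits.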

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| EEG Motor Movement/Imagery Dataset | SSL | Accuracy | 0.89 | | Unverified |
| EEG Motor Movement/Imagery Dataset | Subject-invariant SSL Embedding & Linear Classifier | Accuracy | 0.73 | | Unverified |
| EEG Motor Movement/Imagery Dataset | Subject-specific SSL | Accuracy | 0.68 | | Unverified |

Reproductions