Domain-aware Self-supervised Pre-training for Weakly-supervised Meme Analysis
Anonymous
Abstract
Existing self-supervised learning strategies are constrained to a limited set of simple, generic downstream tasks that predominantly target uni-modal applications. This has stalled progress on important multi-modal applications that are diverse in complexity and domain-affinity, such as meme analysis. Here, we introduce two self-supervised pre-training strategies, Ext-PIE-Net and MM-SimCLR, that (i) employ multi-modal hate-speech data during pre-training, and (ii) extend existing self-supervised learning approaches by incorporating multiple specialized pretext tasks, effectively catering to the complex multi-modal representation learning required for meme analysis. We experiment with different self-supervision strategies, including variants that could help learn rich cross-modal representations, and evaluate them with standard linear probing on the Hateful Memes task. In a weakly-supervised setting, the proposed solutions compete strongly with the fully-supervised baseline, while distinctly outperforming it on all three tasks of the Memotion challenge with performance gains of 0.18%, 23.64%, and 0.93%, respectively. We further demonstrate the generalizability of the proposed solutions by reporting competitive performance on the HarMeme task. Finally, we empirically establish the efficient convergence of the proposed solutions during fine-tuning in a weakly-supervised setup, and argue that the complexity of the self-supervision strategy and that of the downstream task at hand are correlated. Our efforts highlight the need for better self-supervision strategies involving specialized pretext tasks for efficient fine-tuning and generalizable performance.
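
For context, MM-SimCLR-style pre-training typically rests on a contrastive objective over paired image and text views of the same meme. The snippet below is a minimal, hypothetical PyTorch sketch of such a symmetric NT-Xent-style loss, not the paper's exact formulation; the function name, embedding dimensions, and temperature value are assumptions for illustration.

    # Illustrative sketch (not the authors' exact MM-SimCLR objective):
    # a symmetric NT-Xent-style contrastive loss that pulls matched
    # image/text embeddings of the same meme together and pushes apart
    # mismatched pairs within a batch.
    import torch
    import torch.nn.functional as F

    def multimodal_nt_xent(image_emb: torch.Tensor,
                           text_emb: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
        """image_emb, text_emb: (batch, dim) projections from the two encoders."""
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        # (batch, batch) pairwise similarities; the diagonal holds positive pairs.
        logits = image_emb @ text_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric cross-entropy over image-to-text and text-to-image directions.
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return (loss_i2t + loss_t2i) / 2

    # Example usage with random tensors standing in for encoder projections.
    img = torch.randn(8, 256)   # hypothetical image-encoder outputs
    txt = torch.randn(8, 256)   # hypothetical text-encoder outputs
    print(multimodal_nt_xent(img, txt).item())

A linear probe, as used for the Hateful Memes evaluation, would then freeze the pre-trained encoders and train only a single linear classifier on top of their (concatenated or fused) representations.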