Provably Secure Generative Linguistic Steganography

2021-06-03 · Findings of ACL 2021 · Code Available

Siyu Zhang, Zhongliang Yang, Jinshuai Yang, Yongfeng Huang


Abstract

Generative linguistic steganography mainly utilizes language models and applies steganographic sampling (stegosampling) to generate high-security steganographic text (stegotext). However, previous methods generally lead to statistical differences between the conditional probability distributions of stegotext and natural text, which introduces security risks. In this paper, to further ensure security, we present ADG, a novel provably secure generative linguistic steganographic method that recursively embeds secret information by Adaptive Dynamic Grouping of tokens according to the probabilities given by an off-the-shelf language model. We not only prove the security of ADG mathematically, but also conduct extensive experiments on three public corpora to further verify its imperceptibility. The experimental results show that the proposed method generates stegotext with nearly perfect security.
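To make the idea of embedding bits via probability grouping concrete, here is a minimal illustrative sketch of one stegosampling step. It is an assumption-laden approximation, not the paper's actual ADG algorithm: the function name `adg_step`, the entropy-based choice of the number of groups, and the greedy equal-mass grouping heuristic are all inventions for illustration.

```python
import math
import random

def adg_step(probs, bit_stream, rng=random):
    """One grouping-based stegosampling step (illustrative sketch only).

    probs: dict mapping token -> probability from a language model.
    bit_stream: iterator over secret bits ('0' / '1').
    Returns (chosen_token, number_of_bits_embedded).
    """
    # Choose 2^k groups, with k bounded by the distribution's entropy
    # (a hypothetical heuristic, not the paper's rule), so each group
    # can hold roughly equal probability mass.
    entropy = -sum(p * math.log2(p) for p in probs.values() if p > 0)
    k = max(int(entropy), 1)
    if 2 ** k > len(probs):
        k = max(int(math.log2(len(probs))), 1)
    n_groups = 2 ** k

    # Greedy dynamic grouping: assign tokens (largest probability first)
    # to the currently lightest group, approximating equal mass per group.
    groups = [[] for _ in range(n_groups)]
    mass = [0.0] * n_groups
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        i = mass.index(min(mass))
        groups[i].append(tok)
        mass[i] += p

    # Read k secret bits to select a group; the receiver, running the
    # same language model and grouping, recovers the bits from the token.
    bits = ''.join(next(bit_stream) for _ in range(k))
    group = groups[int(bits, 2)]

    # Sample within the chosen group proportionally to the original
    # probabilities, so the output stays close to the model distribution.
    total = sum(probs[t] for t in group)
    r = rng.random() * total
    for t in group:
        r -= probs[t]
        if r <= 0:
            return t, k
    return group[-1], k
```

For example, with `probs = {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}` the entropy is about 1.85 bits, so one bit is embedded per step: the tokens split into two near-equal-mass groups, and the secret bit selects which group the next token is sampled from.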
