Lemna: Explaining deep learning based security applications

Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS 2018). Published 2018-01-15. Code available.

Wenbo Guo, Dongliang Mu, Jun Xu, Purui Su, Gang Wang, Xinyu Xing


Abstract

While deep learning has shown great potential in various domains, the lack of transparency has limited its application in security or safety-critical areas. Existing research has attempted to develop explanation techniques to provide interpretable explanations for each classification decision. Unfortunately, current methods are optimized for non-security tasks (e.g., image analysis). Their key assumptions are often violated in security applications, leading to poor explanation fidelity. In this paper, we propose LEMNA, a high-fidelity explanation method dedicated to security applications. Given an input data sample, LEMNA generates a small set of interpretable features to explain how the input sample is classified. The core idea is to approximate a local area of the complex deep learning decision boundary using a simple interpretable model. The local interpretable model is specially designed to (1) handle feature dependency to better work with security applications (e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier, and a function start detector for binary reverse-engineering). Extensive evaluations show that LEMNA's explanation has a much higher fidelity level compared to existing methods. In addition, we demonstrate practical use cases of LEMNA to help machine learning developers validate model behavior, troubleshoot classification errors, and automatically patch the errors of the target models.
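The core idea described above (approximating a local region of a black-box decision boundary with a simple interpretable model) can be sketched as follows. This is a minimal illustration, not the paper's actual method: LEMNA fits a fused-lasso-regularized mixture regression to handle feature dependency and nonlinear local boundaries, whereas this sketch uses ordinary least squares, and the `black_box` classifier is a made-up stand-in for a deep model.

```python
import numpy as np

# Hypothetical black-box classifier standing in for a deep model:
# a logistic function over a fixed weight vector (assumption for illustration).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(X @ np.array([2.0, -1.0, 0.5]))))

def explain_locally(x, predict_fn, n_samples=2000, sigma=0.3, seed=0):
    """Approximate predict_fn around x with a linear surrogate model.

    Shared core idea with local-approximation explainers: perturb the
    input, query the black box, and fit a simple model whose weights
    rank per-feature importance near x.
    """
    rng = np.random.default_rng(seed)
    # Draw perturbed samples in a small neighborhood of x.
    X = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = predict_fn(X)                            # black-box outputs
    A = np.hstack([X, np.ones((n_samples, 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # per-feature weights = local importance scores

x = np.array([0.5, 0.5, 0.5])
weights = explain_locally(x, black_box)
ranking = np.argsort(-np.abs(weights))  # most influential features first
```

The surrogate's weights recover the local gradient direction of the black box, so `ranking` orders features by local influence; LEMNA's mixture-regression surrogate plays the same role while additionally capturing dependent features and locally nonlinear boundaries.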
