
TriGuard: Testing Model Safety with Attribution Entropy, Verification, and Drift

2025-06-17

Dipesh Tharu Mahato, Rohan Poudel, Pramod Dhungana


Abstract

Deep neural networks often achieve high accuracy, but ensuring their reliability under adversarial and distributional shifts remains a pressing challenge. We propose TriGuard, a unified safety evaluation framework that combines (1) formal robustness verification, (2) attribution entropy to quantify saliency concentration, and (3) a novel Attribution Drift Score measuring explanation stability. TriGuard reveals critical mismatches between model accuracy and interpretability: verified models can still exhibit unstable reasoning, and attribution-based signals provide complementary safety insights beyond adversarial accuracy. Extensive experiments across three datasets and five architectures show how TriGuard uncovers subtle fragilities in neural reasoning. We further demonstrate that entropy-regularized training reduces explanation drift without sacrificing performance. TriGuard advances the frontier in robust, interpretable model evaluation.
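The two attribution-based signals in the abstract can be illustrated with a short sketch. The formulations below are illustrative assumptions, not the paper's exact definitions: attribution entropy is taken as the Shannon entropy of a normalized saliency map (low entropy means concentrated attributions), and drift is taken as one minus the cosine similarity between attributions on clean and perturbed inputs. The function names and the choice of cosine distance are hypothetical.

```python
import numpy as np

def attribution_entropy(saliency: np.ndarray) -> float:
    """Shannon entropy of a saliency map normalized to a distribution.

    Low entropy -> attribution mass concentrated on few features;
    high entropy -> diffuse attributions. (Illustrative formulation;
    the paper's exact definition may differ.)
    """
    p = np.abs(saliency).ravel()
    p = p / p.sum()
    p = p[p > 0]  # drop zero-mass entries to avoid log(0)
    return float(-(p * np.log(p)).sum())

def attribution_drift(saliency_clean: np.ndarray,
                      saliency_perturbed: np.ndarray) -> float:
    """Drift between attributions for a clean vs. perturbed input.

    Measured here as 1 - cosine similarity, so 0 means identical
    explanations and values near 1 mean unstable reasoning.
    (Hypothetical metric chosen for illustration.)
    """
    a = saliency_clean.ravel()
    b = saliency_perturbed.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(1.0 - cos)

# A map with all mass on one pixel has lower entropy than a uniform map.
concentrated = np.zeros((8, 8)); concentrated[0, 0] = 1.0
uniform = np.ones((8, 8))
print(attribution_entropy(concentrated))  # 0.0
print(attribution_entropy(concentrated) < attribution_entropy(uniform))  # True
```

In this reading, a verified-robust model could still score high on drift: its predictions survive perturbation while its saliency maps do not, which is the accuracy/interpretability mismatch the abstract highlights.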
