Sensitivity of LLMs' Explanations to the Training Randomness: Context, Class & Task Dependencies
2026-03-09
Romain Loncour, Jérémie Bogaert, François-Xavier Standaert
Abstract
Transformer models are now a cornerstone of natural language processing, yet explaining their decisions remains a challenge. It was recently shown that the same model trained on the same data but with different randomness (e.g., a different random seed) can produce very different explanations. In this paper, we investigate how the (syntactic) context, the classes to be learned, and the task influence the explanations' sensitivity to this randomness. We show that all three have a statistically significant impact: smallest for the (syntactic) context, medium for the classes, and largest for the tasks.
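The abstract does not specify how explanation disagreement is measured. As one illustration of the kind of measurement involved, the sketch below compares per-token attribution vectors produced by two models trained with different seeds, using Spearman rank correlation as a hypothetical similarity score. The attribution values here are placeholders standing in for outputs of an explanation method; the metric and method used by the authors may differ.

```python
# Hypothetical sketch: quantify how much explanations disagree across
# training seeds. The attribution vectors below are placeholders for
# per-token importance scores (e.g., from a saliency method) computed
# by two models trained on the same data with different random seeds.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_tokens = 12

# Placeholder attributions for the same input sentence from two seeds.
attr_seed_a = rng.normal(size=n_tokens)
attr_seed_b = 0.5 * attr_seed_a + 0.5 * rng.normal(size=n_tokens)

# Rank correlation: 1.0 means both seeds rank tokens identically;
# values near 0 mean the explanations are essentially unrelated.
rho, _ = spearmanr(attr_seed_a, attr_seed_b)
print(f"Spearman correlation between seeds: {rho:.3f}")
```

Repeating such a comparison over many inputs grouped by syntactic context, class, or task would then let one test whether average agreement differs significantly across those groupings, which is the shape of the analysis the abstract describes.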