SaFeR-ToolKit: Structured Reasoning via Virtual Tool Calling for Multimodal Safety
Zixuan Xu, Tiancheng He, Huahui Yi, Kun Wang, Xi Chen, Gongli Xi, Qiankun Li, Kang Li, Yang Liu, Zhigang Zeng
Abstract
Vision-language models remain susceptible to multimodal jailbreaks and over-refusal because safety hinges on both visual evidence and user intent, while many alignment pipelines supervise only the final response. To address this, we present SaFeR-ToolKit, which formalizes safety decision-making as a checkable protocol. Concretely, a planner specifies a persona, a Perception → Reasoning → Decision tool set, and a constrained transition graph, while a responder outputs a typed key-value tool trace before the final answer. To make the protocol reliably followed in practice, we train a single policy with a three-stage curriculum (SFT → DPO → GRPO), where GRPO directly supervises tool usage beyond answer-level feedback. Our contributions are two-fold: I. Dataset. The first tool-based safety reasoning dataset, comprising 31,654 examples (SFT 6k, DPO 18.6k, GRPO 6k) plus a 1k held-out evaluation set. II. Experiments. On Qwen2.5-VL, SaFeR-ToolKit significantly improves Safety/Helpfulness/Reasoning Rigor on 3B (29.39/45.04/4.98 → 84.40/71.13/78.87) and 7B (53.21/52.92/19.26 → 86.34/80.79/85.34), while preserving general capabilities (3B: 58.67 → 59.21; 7B: 66.39 → 66.81). Code is available at https://github.com/Duebassx/SaFeR_ToolKit.
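The checkable protocol described above — a constrained transition graph over Perception/Reasoning/Decision tools, with a typed key-value trace emitted before the final answer — can be sketched minimally as follows. The tool names, trace fields, and transition edges here are illustrative assumptions, not the paper's actual specification:

```python
# Hypothetical sketch of a checkable tool-trace protocol in the spirit of
# SaFeR-ToolKit. The graph, tool names, and typed fields are assumptions.

# Constrained transition graph: which tool may follow the current state.
ALLOWED_TRANSITIONS = {
    "START": {"perception"},
    "perception": {"reasoning"},
    "reasoning": {"reasoning", "decision"},  # reasoning may chain on itself
    "decision": {"END"},
}

def validate_trace(trace):
    """Check a typed key-value tool trace against the transition graph.

    Each step must be a dict with a 'tool' key naming the tool and an
    'output' key holding a string (the 'typed' constraint in this sketch).
    """
    state = "START"
    for step in trace:
        tool = step.get("tool")
        if tool not in ALLOWED_TRANSITIONS.get(state, set()):
            return False  # illegal transition
        if not isinstance(step.get("output"), str):
            return False  # untyped / malformed step
        state = tool
    # Trace is complete only if the final state may transition to END.
    return "END" in ALLOWED_TRANSITIONS.get(state, set())

trace = [
    {"tool": "perception", "output": "image shows a chemistry lab"},
    {"tool": "reasoning", "output": "request seeks benign educational info"},
    {"tool": "decision", "output": "answer helpfully"},
]
```

A validator like this makes the safety decision auditable: a response whose trace violates the graph (e.g. jumping straight to a decision without perception) can be rejected before the answer is returned, which is the sort of checkability the protocol is designed for.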