SOTAVerified

Task Arithmetic

A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition, and the behavior of the resulting model is steered accordingly.
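The arithmetic described above can be sketched in a few lines. This is a minimal illustration with toy weight dictionaries standing in for real model checkpoints; the function names (`task_vector`, `apply_task_vectors`) and the scaling parameter are illustrative, not part of any particular library.

```python
import numpy as np

def task_vector(pretrained, finetuned):
    """Task vector: fine-tuned weights minus pre-trained weights, per parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, scale=1.0):
    """Move the pre-trained weights along the summed task vectors, scaled by `scale`."""
    merged = {k: v.copy() for k, v in pretrained.items()}
    for tv in vectors:
        for k, delta in tv.items():
            merged[k] += scale * delta
    return merged

# Toy two-parameter "models" (illustrative only).
pre  = {"w": np.array([1.0, 2.0])}
ft_a = {"w": np.array([1.5, 2.5])}  # fine-tuned on task A
ft_b = {"w": np.array([0.5, 2.0])}  # fine-tuned on task B

tv_a = task_vector(pre, ft_a)  # {"w": [ 0.5, 0.5]}
tv_b = task_vector(pre, ft_b)  # {"w": [-0.5, 0.0]}

# Addition: merge both tasks into one multi-task model.
multi = apply_task_vectors(pre, [tv_a, tv_b])          # w -> [1.0, 2.5]

# Negation: move away from task A (e.g. to unlearn it).
forget = apply_task_vectors(pre, [tv_a], scale=-1.0)   # w -> [0.5, 1.5]
```

In practice a scaling coefficient (often tuned on a validation set) is applied when adding task vectors back to the pre-trained weights, which is what `scale` stands in for here.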

Papers

Showing 1–25 of 61 papers

| Title | Status | Hype |
| --- | --- | --- |
| Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2 |
| Task Singular Vectors: Reducing Task Interference in Model Merging | Code | 2 |
| Localizing Task Information for Improved Model Merging and Compression | Code | 2 |
| Editing Models with Task Arithmetic | Code | 2 |
| Fine-Tuning Attention Modules Only: Enhancing Weight Disentanglement in Task Arithmetic | Code | 1 |
| Model Merging by Uncertainty-Based Gradient Matching | Code | 1 |
| Merging Multi-Task Models via Weight-Ensembling Mixture of Experts | Code | 1 |
| Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | Code | 1 |
| An Empirical Study of Multimodal Model Merging | Code | 1 |
| Knowledge Composition using Task Vectors with Learned Anisotropic Scaling | Code | 1 |
| Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic | Code | 1 |
| Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging | Code | 1 |
| Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion | Code | 1 |
| NegMerge: Consensual Weight Negation for Strong Machine Unlearning | Code | 1 |
| AdaMerging: Adaptive Model Merging for Multi-Task Learning | Code | 1 |
| Parameter Efficient Multi-task Model Fusion with Partial Linearization | Code | 1 |
| DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic | | 0 |
| Bias Vector: Mitigating Biases in Language Models with Task Arithmetic Approach | | 0 |
| Disentangling Task Interference within Neurons: Model Merging in Alignment with Neuronal Mechanisms | | 0 |
| CultureMERT: Continual Pre-Training for Cross-Cultural Music Representation Learning | | 0 |
| Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics | | 0 |
| HPE-CogVLM: Advancing Vision Language Models with a Head Pose Grounding Task | | 0 |
| BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | | 0 |
| FedRPCA: Enhancing Federated LoRA Aggregation Using Robust PCA | | 0 |
| MCU: Improving Machine Unlearning through Mode Connectivity | | 0 |

No leaderboard results yet.