SOTAVerified

Task Arithmetic

A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition, and the behavior of the resulting model is steered accordingly.
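The arithmetic described above can be sketched in a few lines. This is a minimal illustration with toy NumPy weights, not any paper's implementation; in practice the same element-wise operations are applied across a model's full set of parameter tensors, and the scaling coefficient (here `lam`) is tuned on held-out data.

```python
import numpy as np

# Toy stand-ins for model weights; a real model would have many named tensors.
pretrained = {"w": np.array([1.0, 2.0, 3.0])}
finetuned = {"w": np.array([1.5, 1.0, 3.5])}

# Task vector: fine-tuned weights minus pre-trained weights.
task_vector = {k: finetuned[k] - pretrained[k] for k in pretrained}

# Negation: subtracting the task vector steers the model away from the task
# (e.g., to forget a behavior).
negated = {k: pretrained[k] - task_vector[k] for k in pretrained}

# Addition: task vectors from different tasks can be summed and scaled,
# producing a single multi-task model. `other_vector` is a hypothetical
# second task's vector.
other_vector = {"w": np.array([0.2, -0.1, 0.0])}
lam = 0.5  # scaling coefficient, typically chosen on validation data
merged = {
    k: pretrained[k] + lam * (task_vector[k] + other_vector[k])
    for k in pretrained
}
```

Negation and addition are the two basic operations; more elaborate merging schemes in the papers below (sparsification, localization, adaptive weighting) build on this same subtract-then-recombine pattern.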

Papers

Showing 1–25 of 61 papers

| Title | Status | Hype |
| --- | --- | --- |
| Localizing Task Information for Improved Model Merging and Compression | Code | 2 |
| Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2 |
| Editing Models with Task Arithmetic | Code | 2 |
| Task Singular Vectors: Reducing Task Interference in Model Merging | Code | 2 |
| An Empirical Study of Multimodal Model Merging | Code | 1 |
| Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | Code | 1 |
| Knowledge Composition using Task Vectors with Learned Anisotropic Scaling | Code | 1 |
| Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion | Code | 1 |
| Fine-Tuning Attention Modules Only: Enhancing Weight Disentanglement in Task Arithmetic | Code | 1 |
| Model Merging by Uncertainty-Based Gradient Matching | Code | 1 |
| Merging Multi-Task Models via Weight-Ensembling Mixture of Experts | Code | 1 |
| NegMerge: Consensual Weight Negation for Strong Machine Unlearning | Code | 1 |
| Parameter Efficient Multi-task Model Fusion with Partial Linearization | Code | 1 |
| AdaMerging: Adaptive Model Merging for Multi-Task Learning | Code | 1 |
| Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic | Code | 1 |
| Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging | Code | 1 |
| Investigating Task Arithmetic for Zero-Shot Information Retrieval | Code | 0 |
| No Train but Gain: Language Arithmetic for Training-Free Language Adapter Enhancement | Code | 0 |
| Cross-Model Transfer of Task Vectors via Few-Shot Orthogonal Alignment | Code | 0 |
| CALM: Consensus-Aware Localized Merging for Multi-Task Learning | Code | 0 |
| Efficient Model Editing with Task Vector Bases: A Theoretical Framework and Scalable Approach | Code | 0 |
| Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs | Code | 0 |
| Efficient Model Editing with Task-Localized Sparse Fine-tuning | Code | 0 |
| Multi-Task Model Merging via Adaptive Weight Disentanglement | Code | 0 |
| Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | Code | 0 |

No leaderboard results yet.