
Task Arithmetic

A task vector specifies a direction in the weight space of a pre-trained model such that movement in that direction improves performance on a task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined through arithmetic operations such as negation and addition, steering the behavior of the resulting model accordingly.
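The arithmetic described above can be sketched in a few lines. The snippet below is a minimal illustration on toy NumPy weights standing in for real checkpoints; the function names (`task_vector`, `apply_vector`) and the arrays are hypothetical, not from any released implementation.

```python
import numpy as np

def task_vector(pretrained, finetuned):
    """Task vector: fine-tuned weights minus pre-trained weights, per parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_vector(pretrained, vector, scale=1.0):
    """Move the pre-trained model along a (scaled) task vector."""
    return {k: pretrained[k] + scale * vector[k] for k in pretrained}

# Toy weights standing in for real model checkpoints.
pre  = {"w": np.array([1.0, 2.0])}
ft_a = {"w": np.array([1.5, 2.5])}  # fine-tuned on task A
ft_b = {"w": np.array([0.5, 2.0])}  # fine-tuned on task B

tau_a = task_vector(pre, ft_a)
tau_b = task_vector(pre, ft_b)

# Addition: merge both tasks into one multi-task model.
multi = apply_vector(pre, {k: tau_a[k] + tau_b[k] for k in tau_a})

# Negation: move away from task A to degrade (unlearn) it.
forget = apply_vector(pre, tau_a, scale=-1.0)
```

In practice a scaling coefficient on each task vector (the `scale` argument) is tuned on held-out data, since unscaled addition of several vectors can over- or under-shoot the useful region of weight space.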

Papers

Showing 1–25 of 61 papers

Title | Status | Hype
Transferring Visual Explainability of Self-Explaining Models through Task Arithmetic | | 0
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic | | 0
CultureMERT: Continual Pre-Training for Cross-Cultural Music Representation Learning | | 0
Subspace-Boosted Model Merging | | 0
CALM: Consensus-Aware Localized Merging for Multi-Task Learning | Code | 0
FedRPCA: Enhancing Federated LoRA Aggregation Using Robust PCA | | 0
On Fairness of Task Arithmetic: The Role of Task Vectors | | 0
Scalable Strategies for Continual Learning with Replay | | 0
Cross-Model Transfer of Task Vectors via Few-Shot Orthogonal Alignment | Code | 0
MCU: Improving Machine Unlearning through Mode Connectivity | | 0
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging | | 0
Investigating Task Arithmetic for Zero-Shot Information Retrieval | Code | 0
Single-Input Multi-Output Model Merging: Leveraging Foundation Models for Dense Multi-Task Learning | | 0
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers | | 0
Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs | Code | 0
Efficient Model Editing with Task-Localized Sparse Fine-tuning | Code | 0
OpenThaiGPT 1.6 and R1: Thai-Centric Open Source and Reasoning Large Language Models | | 0
Disentangling Task Interference within Neurons: Model Merging in Alignment with Neuronal Mechanisms | | 0
Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge | | 0
Neural Networks Remember More: The Power of Parameter Isolation and Combination | | 0
Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing | | 0
Efficient Model Editing with Task Vector Bases: A Theoretical Framework and Scalable Approach | Code | 0
Task Arithmetic in Trust Region: A Training-Free Model Merging Approach to Navigate Knowledge Conflicts | | 0
Soup to go: mitigating forgetting during continual learning with model averaging | | 0
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | | 0
