SOTAVerified

Multi-Task Learning

Multi-task learning aims to learn multiple different tasks simultaneously while maximizing performance on one or all of them.

(Image credit: Cross-stitch Networks for Multi-task Learning)
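A minimal, illustrative sketch of the hard-parameter-sharing setup most multi-task papers build on: a shared trunk feeds one head per task, and the tasks are trained jointly on the unweighted sum of their losses, so gradients from every task flow into the shared parameters. All data, shapes, and hyperparameters below are synthetic, not taken from any of the listed papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two regression tasks that share the same inputs
# and the same underlying latent structure.
X = rng.normal(size=(256, 8))
H = X @ rng.normal(size=(8, 4))     # shared latent structure
y1 = H @ rng.normal(size=(4,))      # task 1 targets
y2 = H @ rng.normal(size=(4,))      # task 2 targets

# Parameters: shared trunk W_shared, plus one linear head per task.
W_shared = rng.normal(size=(8, 4)) * 0.1
w_head1 = rng.normal(size=(4,)) * 0.1
w_head2 = rng.normal(size=(4,)) * 0.1

lr = 0.05
for step in range(1000):
    Z = X @ W_shared                          # shared representation
    e1 = Z @ w_head1 - y1                     # task 1 residual
    e2 = Z @ w_head2 - y2                     # task 2 residual
    # Joint objective: sum of per-task MSE losses. Both tasks'
    # gradients accumulate into the shared trunk.
    g_head1 = Z.T @ e1 / len(X)
    g_head2 = Z.T @ e2 / len(X)
    g_shared = X.T @ (np.outer(e1, w_head1) + np.outer(e2, w_head2)) / len(X)
    w_head1 -= lr * g_head1
    w_head2 -= lr * g_head2
    W_shared -= lr * g_shared

mse1 = float(np.mean((X @ W_shared @ w_head1 - y1) ** 2))
mse2 = float(np.mean((X @ W_shared @ w_head2 - y2) ** 2))
print(f"task 1 MSE: {mse1:.4f}, task 2 MSE: {mse2:.4f}")
```

Much of the literature listed below (PCGrad, CAGrad, Nash-MTL, and similar) is about replacing the naive gradient sum in this sketch with a conflict-aware combination of the per-task gradients on the shared trunk.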

Papers

Showing 26–50 of 3687 papers

Title | Status | Hype
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3
SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning | Code | 2
Fast and Accurate Blind Flexible Docking | Code | 2
Joint Perception and Prediction for Autonomous Driving: A Survey | Code | 2
Diffusion-based Visual Anagram as Multi-task Learning | Code | 2
GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding | Code | 2
radarODE-MTL: A Multi-Task Learning Framework with Eccentric Gradient Alignment for Robust Radar-Based ECG Reconstruction | Code | 2
Tissue Concepts: supervised foundation models in computational pathology | Code | 2
LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch | Code | 2
NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals | Code | 2
Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks | Code | 2
RouteFinder: Towards Foundation Models for Vehicle Routing Problems | Code | 2
Generalization-Enhanced Code Vulnerability Detection via Multi-Task Instruction Fine-Tuning | Code | 2
Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention | Code | 2
Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras | Code | 2
OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search | Code | 2
EGTR: Extracting Graph from Transformer for Scene Graph Generation | Code | 2
MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2
Volumetric Environment Representation for Vision-Language Navigation | Code | 2
Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts | Code | 2
One Train for Two Tasks: An Encrypted Traffic Classification Framework Using Supervised Contrastive Learning | Code | 2
Delving into Multi-modal Multi-task Foundation Models for Road Scene Understanding: From Learning Paradigm Perspectives | Code | 2
LAA-Net: Localized Artifact Attention Network for Quality-Agnostic and Generalizable Deepfake Detection | Code | 2
LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin | Code | 2
Page 2 of 148

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PCGrad | ∆m% | 125.7 | - | Unverified
2 | CAGrad | ∆m% | 112.8 | - | Unverified
3 | IMTL-G | ∆m% | 77.2 | - | Unverified
4 | Nash-MTL | ∆m% | 62 | - | Unverified
5 | BayesAgg-MTL | ∆m% | 53.7 | - | Unverified
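The ∆m% metric in the table above is conventionally defined as the average per-task relative performance drop of a multi-task model against single-task baselines, with the sign flipped for metrics where lower is better, so that a positive ∆m% means the multi-task model is worse on average. A minimal sketch of that convention (the example scores are hypothetical, not drawn from the table):

```python
import numpy as np

def delta_m_percent(mtl_scores, stl_scores, lower_is_better):
    """Average relative performance drop of a multi-task model vs.
    per-task single-task baselines, in percent.

    mtl_scores, stl_scores: per-task metric values, same task order.
    lower_is_better: per-task flags (True for e.g. error metrics).
    Positive result = multi-task model is worse on average.
    """
    mtl = np.asarray(mtl_scores, dtype=float)
    stl = np.asarray(stl_scores, dtype=float)
    sign = np.where(np.asarray(lower_is_better), 1.0, -1.0)
    return float(np.mean(sign * (mtl - stl) / stl) * 100.0)

# Hypothetical two-task example: an accuracy (higher is better)
# and an error rate (lower is better).
print(delta_m_percent([0.70, 0.22], [0.75, 0.20], [False, True]))  # ≈ 8.33
```

Here the multi-task model loses 6.67% relative accuracy on task 1 and gains 10% relative error on task 2, averaging to ∆m% ≈ 8.33.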
# | Model | Metric | Claimed | Verified | Status
1 | SwinMTL | mIoU | 76.41 | - | Unverified
2 | Nash-MTL | mIoU | 75.41 | - | Unverified
3 | MultiObjectiveOptimization | mIoU | 66.63 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SwinMTL | Mean IoU | 58.14 | - | Unverified
2 | Nash-MTL | Mean IoU | 40.13 | - | Unverified
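The mIoU / Mean IoU values in the two tables above are typically computed per class from a confusion matrix and then averaged over the classes present. A minimal sketch of that standard computation (names and shapes are illustrative):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, from a confusion
    matrix. pred/target: integer class maps of equal shape."""
    pred = np.asarray(pred).ravel()
    target = np.asarray(target).ravel()
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (target, pred), 1)        # rows: target, cols: pred
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    ious = inter / np.maximum(union, 1)
    present = union > 0                        # skip classes absent from both
    return float(ious[present].mean()) if present.any() else 0.0

# Toy 4-pixel example with 3 classes: class 0 is perfect,
# classes 1 and 2 each have IoU 0.5, so mIoU = 2/3.
print(mean_iou([0, 1, 1, 2], [0, 1, 2, 2], 3))
```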
# | Model | Metric | Claimed | Verified | Status
1 | Gumbel-Matrix Routing | Average Accuracy | 93.52 | - | Unverified
2 | Mixture-of-Experts | Average Accuracy | 92.19 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MGDA-UB | Error | 8.25 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BayesAgg-MTL | delta_m | -2.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LETR | FH | 83.3 | - | Unverified