
Multi-Task Learning

Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
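A common realization of this idea is hard parameter sharing: one shared trunk feeds several task-specific heads. The following is a minimal NumPy sketch under that assumption; all names, task labels, and dimensions are illustrative, not taken from any particular paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 8 input features, a 4-unit shared trunk,
# and two hypothetical task heads ("segmentation", "depth").
W_shared = rng.normal(size=(8, 4))
W_task = {
    "segmentation": rng.normal(size=(4, 3)),  # 3-class task head
    "depth": rng.normal(size=(4, 1)),         # scalar regression head
}

def forward(x):
    # Hard parameter sharing: every task head reuses the same
    # shared representation h.
    h = np.tanh(x @ W_shared)
    return {task: h @ W for task, W in W_task.items()}

x = rng.normal(size=(2, 8))  # batch of 2 examples
outputs = forward(x)
print({task: out.shape for task, out in outputs.items()})
# → {'segmentation': (2, 3), 'depth': (2, 1)}
```

In training, the per-task losses computed from these heads are combined (e.g., summed or weighted) into a single objective, so gradients from all tasks update the shared trunk.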

(Image credit: Cross-stitch Networks for Multi-task Learning)

Papers

Showing 526–550 of 3687 papers

| Title | Status | Hype |
| --- | --- | --- |
| MTDT: A Multi-Task Deep Learning Digital Twin | — | 0 |
| MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts | Code | 3 |
| Multi-task Learning-based Joint CSI Prediction and Predictive Transmitter Selection for Security | — | 0 |
| Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras | Code | 2 |
| Open-Set Video-based Facial Expression Recognition with Human Expression-sensitive Prompting | — | 0 |
| OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search | Code | 2 |
| Mixed Supervised Graph Contrastive Learning for Recommendation | — | 0 |
| EEGEncoder: Advancing BCI with Transformer-Based Motor Imagery Classification | — | 0 |
| Narrative Action Evaluation with Prompt-Guided Multimodal Interaction | Code | 1 |
| MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | Code | 3 |
| Machine Learning-Assisted Thermoelectric Cooling for On-Demand Multi-Hotspot Thermal Management | — | 0 |
| Text-dependent Speaker Verification (TdSV) Challenge 2024: Challenge Evaluation Plan | — | 0 |
| An Offline Reinforcement Learning Algorithm Customized for Multi-Task Fusion in Large-Scale Recommender Systems | — | 0 |
| A Point-Based Approach to Efficient LiDAR Multi-Task Perception | — | 0 |
| TIMIT Speaker Profiling: A Comparison of Multi-task Learning and Single-task Learning Approaches | — | 0 |
| Physical Formula Enhanced Multi-task Learning for Pharmacokinetics Prediction | — | 0 |
| MOWA: Multiple-in-One Image Warping Model | — | 0 |
| Integrating Knowledge Bases to Improve Coreference and Bridging Resolution for the Chemical Domain | — | 0 |
| Negation Triplet Extraction with Syntactic Dependency and Semantic Consistency | Code | 0 |
| DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness | — | 0 |
| Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0 |
| MING-MOE: Enhancing Medical Multi-Task Learning in Large Language Models with Sparse Mixture of Low-Rank Adapter Experts | Code | 5 |
| Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality | — | 0 |
| FedAuxHMTL: Federated Auxiliary Hard-Parameter Sharing Multi-Task Learning for Network Edge Traffic Classification | — | 0 |
| Structure-aware Fine-tuning for Code Pre-trained Models | — | 0 |
Page 22 of 148

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PCGrad | Δm% | 125.7 | — | Unverified |
| 2 | CAGrad | Δm% | 112.8 | — | Unverified |
| 3 | IMTL-G | Δm% | 77.2 | — | Unverified |
| 4 | Nash-MTL | Δm% | 62 | — | Unverified |
| 5 | BayesAgg-MTL | Δm% | 53.7 | — | Unverified |
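The Δm% metric in the table above is commonly defined (e.g., in the Nash-MTL line of work) as the average per-task relative performance drop of a multi-task model against single-task baselines, with the sign flipped for metrics where lower is better; lower Δm% is better. A minimal sketch, assuming metrics are passed as dictionaries keyed by task name:

```python
def delta_m(mtl_metrics, stl_metrics, higher_is_better):
    """Average relative performance drop (in %) of an MTL model
    versus per-task single-task baselines; lower is better."""
    total = 0.0
    for task, baseline in stl_metrics.items():
        # Flip the sign so that worse-than-baseline always adds
        # a positive contribution, regardless of metric direction.
        sign = 1.0 if higher_is_better[task] else -1.0
        total += -sign * (mtl_metrics[task] - baseline) / baseline
    return 100.0 * total / len(stl_metrics)

# Hypothetical example: one accuracy task where the MTL model
# scores 0.90 against a single-task baseline of 0.95.
print(delta_m({"acc": 0.90}, {"acc": 0.95}, {"acc": True}))
```

Here the MTL model underperforms the baseline, so Δm% comes out positive (about 5.26%).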
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SwinMTL | mIoU | 76.41 | — | Unverified |
| 2 | Nash-MTL | mIoU | 75.41 | — | Unverified |
| 3 | MultiObjectiveOptimization | mIoU | 66.63 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SwinMTL | Mean IoU | 58.14 | — | Unverified |
| 2 | Nash-MTL | Mean IoU | 40.13 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Gumbel-Matrix Routing | Average Accuracy | 93.52 | — | Unverified |
| 2 | Mixture-of-Experts | Average Accuracy | 92.19 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MGDA-UB | Error | 8.25 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BayesAgg-MTL | delta_m | -2.23 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LETR | FH | 83.3 | — | Unverified |