SOTAVerified

Meta-Learning

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve their own learning process using experience gathered across many tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
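To make the "learning to learn" idea concrete, below is a minimal sketch of the gradient-based meta-learning scheme popularized by the cited MAML paper. It uses the simpler first-order variant (FOMAML) on a toy 1-D linear-regression task family; all function names, hyperparameters, and the task distribution are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Hedged FOMAML sketch on a toy task family: y = slope * x, slope varies per task.
# Model: y_hat = w*x + b, params = [w, b]. Everything here is illustrative.

rng = np.random.default_rng(0)

def mse(params, x, y):
    return np.mean((params[0] * x + params[1] - y) ** 2)

def mse_grad(params, x, y):
    """Analytic gradient of the mean-squared error w.r.t. [w, b]."""
    err = params[0] * x + params[1] - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

def sample_task():
    """One task = support and query sets from y = slope * x, random slope."""
    slope = rng.uniform(0.5, 2.0)
    xs, xq = rng.uniform(-1, 1, 10), rng.uniform(-1, 1, 10)
    return xs, slope * xs, xq, slope * xq

def fomaml_step(params, tasks, alpha=0.1, beta=0.01):
    """One meta-update: adapt on support data (inner loop), then move the
    initialization along the query-set gradient at the adapted params
    (first-order approximation of the MAML outer gradient)."""
    meta_grad = np.zeros_like(params)
    for xs, ys, xq, yq in tasks:
        adapted = params - alpha * mse_grad(params, xs, ys)  # inner loop
        meta_grad += mse_grad(adapted, xq, yq)               # outer gradient
    return params - beta * meta_grad / len(tasks)

# Meta-train an initialization that adapts quickly to any task in the family.
params = np.array([0.0, 0.0])
for _ in range(500):
    params = fomaml_step(params, [sample_task() for _ in range(8)])
```

The meta-learned initialization ends up near the "center" of the task family, so a single inner gradient step on a new task's support set already yields a low query loss, which is the behavior the benchmark metrics below (meta-train / meta-test success rates) quantify on much harder task distributions.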

Papers

Showing 251–275 of 3569 papers

| Title | Status | Hype |
|---|---|---|
| Learn From Model Beyond Fine-Tuning: A Survey | Code | 1 |
| Control-oriented meta-learning | Code | 1 |
| An Analysis of the Adaptation Speed of Causal Models | Code | 1 |
| Learning Compositional Rules via Neural Program Synthesis | Code | 1 |
| Contrastive Meta-Learning for Partially Observable Few-Shot Learning | Code | 1 |
| Contrastive Meta Learning with Behavior Multiplicity for Recommendation | Code | 1 |
| Copolymer Informatics with Multi-Task Deep Neural Networks | Code | 1 |
| Context-Aware Meta-Learning | Code | 1 |
| Consolidated learning -- a domain-specific model-free optimization strategy with examples for XGBoost and MIMIC-IV | Code | 1 |
| Continued Pretraining for Better Zero- and Few-Shot Promptability | Code | 1 |
| Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion | Code | 1 |
| Chameleon: A Data-Efficient Generalist for Dense Visual Prediction in the Wild | Code | 1 |
| Consistency-guided Meta-Learning for Bootstrapping Semi-Supervised Medical Image Segmentation | Code | 1 |
| Continuous Optical Zooming: A Benchmark for Arbitrary-Scale Image Super-Resolution in Real World | Code | 1 |
| Covariate Distribution Aware Meta-learning | Code | 1 |
| BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning | Code | 1 |
| BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach | Code | 1 |
| Boosting Few-Shot Classification with View-Learnable Contrastive Learning | Code | 1 |
| BlackGoose Rimer: Harnessing RWKV-7 as a Simple yet Superior Replacement for Transformers in Large-Scale Time Series Modeling | Code | 1 |
| Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning | Code | 1 |
| Can Learned Optimization Make Reinforcement Learning Less Difficult? | Code | 1 |
| CD-FSOD: A Benchmark for Cross-domain Few-shot Object Detection | Code | 1 |
| Blind Super-Resolution via Meta-learning and Markov Chain Monte Carlo Simulation | Code | 1 |
| Concept Learners for Few-Shot Learning | Code | 1 |
| Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation | Code | 1 |
Page 11 of 143

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MZ+Recon | Meta-train success rate | 97.8 | | Unverified |
| 2 | MZ | Meta-train success rate | 97.6 | | Unverified |
| 3 | MAML | Meta-test success rate | 36 | | Unverified |
| 4 | RL^2 | Meta-test success rate | 10 | | Unverified |
| 5 | DnC | Meta-test success rate | 5.4 | | Unverified |
| 6 | PEARL | Meta-test success rate | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SoftModule | Average Success Rate | 60 | | Unverified |
| 2 | Multi-task multi-head SAC | Average Success Rate | 35.85 | | Unverified |
| 3 | DisCor | Average Success Rate | 26 | | Unverified |
| 4 | NDP | Average Success Rate | 11 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MZ+Recon | Meta-test success rate (zero-shot) | 18.5 | | Unverified |
| 2 | MZ | Meta-test success rate (zero-shot) | 17.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Metadrop | % Test Accuracy | 95.75 | | Unverified |