SOTAVerified

Meta-Learning

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve their own learning process with experience across tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
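The credited paper (MAML) is the canonical example of optimization-based meta-learning: an inner loop adapts parameters to each task, and an outer loop updates the shared initialisation. Below is a minimal first-order sketch of that idea on toy 1-D linear regression tasks; the model, task distribution, and hyperparameters are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# First-order MAML-style sketch on toy tasks y = a * x, where each task
# has its own slope a. The meta-learned quantity is a single scalar
# weight used as the initialisation for per-task adaptation.
# (Illustrative assumptions: task distribution, learning rates, and the
# linear model are all hypothetical choices for this sketch.)

rng = np.random.default_rng(0)

def make_task():
    """Sample a task: 10 points from y = a * x with a random slope a."""
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def loss_and_grad(w, x, y):
    """MSE loss and its gradient for the linear model y_hat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2.0 * err * x)

w_meta = 0.0                 # meta-initialisation
inner_lr, outer_lr = 0.1, 0.01

for step in range(500):
    x, y = make_task()
    # Inner loop: one gradient step adapts w_meta to the sampled task.
    _, g = loss_and_grad(w_meta, x, y)
    w_task = w_meta - inner_lr * g
    # Outer loop (first-order approximation): update the initialisation
    # with the post-adaptation gradient, ignoring second-order terms.
    _, g_post = loss_and_grad(w_task, x, y)
    w_meta -= outer_lr * g_post

print(round(w_meta, 3))      # drifts toward the mean task slope
```

After training, `w_meta` sits near the centre of the task distribution, so a single inner-loop step adapts it quickly to any sampled slope; that fast-adaptation property is what the outer loop optimises for.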

Papers

Showing 76–100 of 3,569 papers

Title | Status | Hype
Can Learned Optimization Make Reinforcement Learning Less Difficult? | Code | 1
Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation | Code | 1
CD-FSOD: A Benchmark for Cross-domain Few-shot Object Detection | Code | 1
BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach | Code | 1
AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents | Code | 1
BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning | Code | 1
Chameleon: A Data-Efficient Generalist for Dense Visual Prediction in the Wild | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Beyond the Prototype: Divide-and-conquer Proxies for Few-shot Segmentation | Code | 1
Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach | Code | 1
Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems | Code | 1
A Closer Look at Few-Shot Video Classification: A New Baseline and Benchmark | Code | 1
A Channel Coding Benchmark for Meta-Learning | Code | 1
A Meta-Learning Approach for Training Explainable Graph Neural Networks | Code | 1
A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings | Code | 1
Boosting Few-Shot Classification with View-Learnable Contrastive Learning | Code | 1
Adaptive FSS: A Novel Few-Shot Segmentation Framework via Prototype Enhancement | Code | 1
CAMeL: Cross-modality Adaptive Meta-Learning for Text-based Person Retrieval | Code | 1
Camera Distortion-aware 3D Human Pose Estimation in Video with Optimization-based Meta-Learning | Code | 1
Bayesian Model-Agnostic Meta-Learning | Code | 1
Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion | Code | 1
An Analysis of the Adaptation Speed of Causal Models | Code | 1
Amortized Probabilistic Conditioning for Optimization, Simulation and Inference | Code | 1
Continued Pretraining for Better Zero- and Few-Shot Promptability | Code | 1
BlackGoose Rimer: Harnessing RWKV-7 as a Simple yet Superior Replacement for Transformers in Large-Scale Time Series Modeling | Code | 1
Page 4 of 143

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-train success rate | 97.8 | — | Unverified
2 | MZ | Meta-train success rate | 97.6 | — | Unverified
3 | MAML | Meta-test success rate | 36 | — | Unverified
4 | RL^2 | Meta-test success rate | 10 | — | Unverified
5 | DnC | Meta-test success rate | 5.4 | — | Unverified
6 | PEARL | Meta-test success rate | 0 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SoftModule | Average Success Rate | 60 | — | Unverified
2 | Multi-task multi-head SAC | Average Success Rate | 35.85 | — | Unverified
3 | DisCor | Average Success Rate | 26 | — | Unverified
4 | NDP | Average Success Rate | 11 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-test success rate (zero-shot) | 18.5 | — | Unverified
2 | MZ | Meta-test success rate (zero-shot) | 17.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Metadrop | % Test Accuracy | 95.75 | — | Unverified