SOTAVerified

Meta-Learning

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve their own learning process across tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
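The image credit above refers to MAML. As a rough illustration of the idea, here is a minimal first-order MAML sketch on linear-regression tasks; the task setup, function names, and learning rates are all illustrative assumptions, not taken from any listed paper's code:

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def mse(w, X, y):
    # Mean squared error of the linear model on (X, y).
    return float(np.mean((X @ w - y) ** 2))

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.1):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is a tuple (X_support, y_support, X_query, y_query).
    The inner loop adapts w on the support set; the outer update
    uses the query-set gradient at the adapted parameters
    (first-order approximation: second derivatives are ignored).
    """
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_adapted = w - inner_lr * loss_grad(w, Xs, ys)  # inner adaptation
        meta_grad += loss_grad(w_adapted, Xq, yq)        # outer gradient
    return w - outer_lr * meta_grad / len(tasks)
```

Repeatedly calling `maml_step` on batches of tasks drives the initialization toward parameters from which one inner gradient step performs well on each task's query set.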

Papers

Showing 1201–1225 of 3569 papers

Title | Status | Hype
Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably? | Code | 0
Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition | Code | 0
It HAS to be Subjective: Human Annotator Simulation via Zero-shot Density Estimation | Code | 0
Inverse Learning with Extremely Sparse Feedback for Recommendation | Code | 0
Interpretable Meta-Measure for Model Performance | Code | 0
An Investigation of the Bias-Variance Tradeoff in Meta-Gradients | Code | 0
Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0
Learning vs Retrieval: The Role of In-Context Examples in Regression with LLMs | Code | 0
Investigating Large Language Models for Complex Word Identification in Multilingual and Multidomain Setups | Code | 0
Incremental Few-Shot Learning with Attention Attractor Networks | Code | 0
Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition | Code | 0
CMVAE: Causal Meta VAE for Unsupervised Meta-Learning | Code | 0
Environmental Sensor Placement with Convolutional Gaussian Neural Processes | Code | 0
Few-shot Generation of Personalized Neural Surrogates for Cardiac Simulation via Bayesian Meta-Learning | Code | 0
Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning | Code | 0
CMML: Contextual Modulation Meta Learning for Cold-Start Recommendation | Code | 0
Contextual Gradient Scaling for Few-Shot Learning | Code | 0
Adversarial Meta-Learning of Gamma-Minimax Estimators That Leverage Prior Knowledge | Code | 0
INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order Gradient Computations in Implicit Neural Representation Processing | Code | 0
Improving Meta-Learning Generalization with Activation-Based Early-Stopping | Code | 0
Contextualizing Meta-Learning via Learning to Decompose | Code | 0
Lifelong Domain Word Embedding via Meta-Learning | Code | 0
Improving Meta-Continual Learning Representations with Representation Replay | Code | 0
In-Context Learning for MIMO Equalization Using Transformer-Based Sequence Models | Code | 0
Improving Memory Efficiency for Training KANs via Meta Learning | Code | 0
Page 49 of 143

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-train success rate | 97.8 | — | Unverified
2 | MZ | Meta-train success rate | 97.6 | — | Unverified
3 | MAML | Meta-test success rate | 36 | — | Unverified
4 | RL^2 | Meta-test success rate | 10 | — | Unverified
5 | DnC | Meta-test success rate | 5.4 | — | Unverified
6 | PEARL | Meta-test success rate | 0 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SoftModule | Average Success Rate | 60 | — | Unverified
2 | Multi-task multi-head SAC | Average Success Rate | 35.85 | — | Unverified
3 | DisCor | Average Success Rate | 26 | — | Unverified
4 | NDP | Average Success Rate | 11 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-test success rate (zero-shot) | 18.5 | — | Unverified
2 | MZ | Meta-test success rate (zero-shot) | 17.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Metadrop | % Test Accuracy | 95.75 | — | Unverified