SOTAVerified

Meta-Learning

Meta-learning is a methodology concerned with "learning to learn": training machine learning algorithms so that they can adapt quickly to new tasks from limited data.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
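The image credit above refers to MAML, whose core idea is to meta-learn an initialization from which a single gradient step adapts well to any task in a distribution. Below is a minimal first-order sketch of that idea on toy linear-regression tasks (y = a·x with a task-specific slope a); the task setup, function names, and hyperparameters are illustrative, not from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(n_tasks=8, n_points=10):
    """Sample toy regression tasks y = a*x, each with its own slope a."""
    tasks = []
    for _ in range(n_tasks):
        a = rng.uniform(0.5, 2.0)
        x = rng.uniform(-1.0, 1.0, n_points)
        tasks.append((x, a * x))
    return tasks

def loss_grad(w, x, y):
    """MSE loss 0.5*mean((w*x - y)^2) and its gradient w.r.t. scalar w."""
    err = w * x - y
    return 0.5 * np.mean(err ** 2), np.mean(err * x)

w0 = 0.0                 # meta-initialization: the quantity being meta-learned
alpha, beta = 0.4, 0.1   # inner (per-task) and outer (meta) learning rates

for _ in range(200):
    meta_grad = 0.0
    for x, y in task_batch():
        _, g = loss_grad(w0, x, y)
        w_adapted = w0 - alpha * g           # inner loop: one adaptation step
        # First-order MAML: outer gradient is taken at the adapted parameters,
        # ignoring second-order terms through the inner update.
        _, g_outer = loss_grad(w_adapted, x, y)
        meta_grad += g_outer
    w0 -= beta * meta_grad / 8               # outer loop: update the init
```

After meta-training, `w0` sits near the centre of the task distribution's slopes, so one inner gradient step from it already fits a new task closely; that post-adaptation performance is what the "meta-test success rate" metrics in the benchmark tables below measure in the RL setting.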

Papers

Showing 1926–1950 of 3569 papers

| Title | Status | Hype |
|-------|--------|------|
| Meta-Learning Framework for End-to-End Imposter Identification in Unseen Speaker Recognition | — | 0 |
| Specialization in Hierarchical Learning Systems | — | 0 |
| SPeCiaL: Self-Supervised Pretraining for Continual Learning | — | 0 |
| SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization | — | 0 |
| Speech-driven Facial Animation using Cascaded GANs for Learning of Motion and Texture | — | 0 |
| Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning | — | 0 |
| Spoofing-Aware Speaker Verification Robust Against Domain and Channel Mismatches | — | 0 |
| SPOT: Sequential Predictive Modeling of Clinical Trial Outcome with Meta-Learning | — | 0 |
| SQ-Swin: a Pretrained Siamese Quadratic Swin Transformer for Lettuce Browning Prediction | — | 0 |
| Squeezing Lemons with Hammers: An Evaluation of AutoML and Tabular Deep Learning for Data-Scarce Classification Applications | — | 0 |
| Squeezing nnU-Nets with Knowledge Distillation for On-Board Cloud Detection | — | 0 |
| SSMT: Few-Shot Traffic Forecasting with Single Source Meta-Transfer | — | 0 |
| ST^2: Small-data Text Style Transfer via Multi-task Meta-Learning | — | 0 |
| Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms | — | 0 |
| Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking | — | 0 |
| Stacked Generalization for Medical Concept Extraction from Clinical Notes | — | 0 |
| Stacked unsupervised learning with a network architecture found by supervised meta-learning | — | 0 |
| Stackelberg Meta-Learning Based Control for Guided Cooperative LQG Systems | — | 0 |
| Stacking and stability | — | 0 |
| Stacking for Probabilistic Short-term Load Forecasting | — | 0 |
| Stacking hybrid GARCH models for forecasting Bitcoin volatility | — | 0 |
| STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction | — | 0 |
| ST-MAML: A Stochastic-Task based Method for Task-Heterogeneous Meta-Learning | — | 0 |
| Stochastic Compositional Minimax Optimization with Provable Convergence Guarantees | — | 0 |
| Decentralized Multi-Level Compositional Optimization Algorithms with Level-Independent Convergence Rate | — | 0 |
Page 78 of 143

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MZ+Recon | Meta-train success rate | 97.8 | — | Unverified |
| 2 | MZ | Meta-train success rate | 97.6 | — | Unverified |
| 3 | MAML | Meta-test success rate | 36 | — | Unverified |
| 4 | RL^2 | Meta-test success rate | 10 | — | Unverified |
| 5 | DnC | Meta-test success rate | 5.4 | — | Unverified |
| 6 | PEARL | Meta-test success rate | 0 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SoftModule | Average Success Rate | 60 | — | Unverified |
| 2 | Multi-task multi-head SAC | Average Success Rate | 35.85 | — | Unverified |
| 3 | DisCor | Average Success Rate | 26 | — | Unverified |
| 4 | NDP | Average Success Rate | 11 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MZ+Recon | Meta-test success rate (zero-shot) | 18.5 | — | Unverified |
| 2 | MZ | Meta-test success rate (zero-shot) | 17.7 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Metadrop | % Test Accuracy | 95.75 | — | Unverified |