SOTAVerified

Meta-Learning

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve their own learning process across tasks, so they can adapt quickly to new tasks from limited experience.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
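The credited paper (MAML) is a canonical instance of this "learning to learn" idea: an outer loop learns an initialisation from which an inner gradient step adapts well to each new task. A minimal first-order sketch on a toy 1-D regression family — the task distribution, model, and learning rates here are illustrative assumptions, not taken from any listed paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Toy task family (assumed for illustration): y = a * x,
    # with a per-task slope a drawn uniformly.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss_grad(w, x, y):
    # Mean-squared error of the scalar model y_hat = w * x,
    # together with its gradient w.r.t. w.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2.0 * err * x)

w = 0.0                      # meta-parameter: the initialisation being learned
inner_lr, outer_lr = 0.1, 0.01

for step in range(500):
    x, y = sample_task()
    # Inner loop: one gradient step adapts w to the sampled task.
    _, g = loss_grad(w, x, y)
    w_adapted = w - inner_lr * g
    # Outer loop (first-order MAML): move the initialisation using the
    # gradient evaluated at the post-adaptation parameters.
    _, g_adapted = loss_grad(w_adapted, x, y)
    w -= outer_lr * g_adapted
```

Full MAML differentiates through the inner update (a second-order term); the first-order variant above drops that term, which is a common, cheaper approximation.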

Papers

Showing 1301-1350 of 3,569 papers

Title | Status | Hype
LabelCraft: Empowering Short Video Recommendations with Automated Label Crafting | Code | 0
Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation | Code | 0
Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection | Code | 0
A new benchmark for group distribution shifts in hand grasp regression for object manipulation. Can meta-learning raise the bar? | Code | 0
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression | Code | 0
Chameleon: Learning Model Initializations Across Tasks With Different Schemas | Code | 0
Joint Optimization of Class-Specific Training- and Test-Time Data Augmentation in Segmentation | Code | 0
Knowledge-enhanced Relation Graph and Task Sampling for Few-shot Molecular Property Prediction | Code | 0
Cross-Domain Few-Shot Graph Classification | Code | 0
Latent Representation Learning of Multi-scale Thermophysics: Application to Dynamics in Shocked Porous Energetic Material | Code | 0
It HAS to be Subjective: Human Annotator Simulation via Zero-shot Density Estimation | Code | 0
Inverse Learning with Extremely Sparse Feedback for Recommendation | Code | 0
Investigating Large Language Models for Complex Word Identification in Multilingual and Multidomain Setups | Code | 0
Interpretable Meta-Measure for Model Performance | Code | 0
Few-Shot Learning with Localization in Realistic Settings | Code | 0
Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0
Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably? | Code | 0
Hacking Task Confounder in Meta-Learning | Code | 0
Joint inference and input optimization in equilibrium networks | Code | 0
Latent Task-Specific Graph Network Simulators | Code | 0
Capability-Aware Shared Hypernetworks for Flexible Heterogeneous Multi-Robot Coordination | Code | 0
Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition | Code | 0
Centroids Matching: an efficient Continual Learning approach operating in the embedding space | Code | 0
Meta-Learning with Context-Agnostic Initialisations | Code | 0
Incremental Few-Shot Learning with Attention Attractor Networks | Code | 0
Are Few-Shot Learning Benchmarks too Simple? Solving them without Task Supervision at Test-Time | Code | 0
A Neural-Symbolic Architecture for Inverse Graphics Improved by Lifelong Meta-Learning | Code | 0
Fast Unsupervised Deep Outlier Model Selection with Hypernetworks | Code | 0
PACIA: Parameter-Efficient Adapter for Few-Shot Molecular Property Prediction | Code | 0
Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery | Code | 0
In-Context Learning for MIMO Equalization Using Transformer-Based Sequence Models | Code | 0
CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning | Code | 0
Hierarchically Structured Meta-learning | Code | 0
Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment | Code | 0
In-Context Learning through the Bayesian Prism | Code | 0
Meta-Learning with Variational Bayes | Code | 0
High-order structure preserving graph neural network for few-shot learning | Code | 0
Meta-Learning with Versatile Loss Geometries for Fast Adaptation Using Mirror Descent | Code | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Associative Alignment for Few-shot Image Classification | Code | 0
MetaLR: Meta-tuning of Learning Rates for Transfer Learning in Medical Imaging | Code | 0
Improving Memory Efficiency for Training KANs via Meta Learning | Code | 0
Improving Federated Learning Personalization via Model Agnostic Meta Learning | Code | 0
When Low Resource NLP Meets Unsupervised Language Model: Meta-pretraining Then Meta-learning for Few-shot Text Classification | Code | 0
Improving Meta-Continual Learning Representations with Representation Replay | Code | 0
Improving Both Domain Robustness and Domain Adaptability in Machine Translation | Code | 0
Fast Meta-Learning for Adaptive Hierarchical Classifier Design | Code | 0
Improve Meta-learning for Few-Shot Text Classification with All You Can Acquire from the Tasks | Code | 0
Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function | Code | 0
Improving Meta-Learning Generalization with Activation-Based Early-Stopping | Code | 0
Page 27 of 72

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-train success rate | 97.8 | - | Unverified
2 | MZ | Meta-train success rate | 97.6 | - | Unverified
3 | MAML | Meta-test success rate | 36 | - | Unverified
4 | RL^2 | Meta-test success rate | 10 | - | Unverified
5 | DnC | Meta-test success rate | 5.4 | - | Unverified
6 | PEARL | Meta-test success rate | 0 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SoftModule | Average Success Rate | 60 | - | Unverified
2 | Multi-task multi-head SAC | Average Success Rate | 35.85 | - | Unverified
3 | DisCor | Average Success Rate | 26 | - | Unverified
4 | NDP | Average Success Rate | 11 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MZ+Recon | Meta-test success rate (zero-shot) | 18.5 | - | Unverified
2 | MZ | Meta-test success rate (zero-shot) | 17.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Metadrop | % Test Accuracy | 95.75 | - | Unverified