
Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
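
As a concrete illustration, here is a minimal fine-tuning sketch in PyTorch, assuming a torchvision ResNet-18 pre-trained on ImageNet as the source model and a hypothetical 10-class target task; the dummy batch and hyperparameters are placeholders, not any specific method from the papers listed below.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet (the "source" task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new ("target") task,
# assumed here to have 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch.
inputs = torch.randn(8, 3, 224, 224)   # stand-in for target-task images
labels = torch.randint(0, 10, (8,))    # stand-in for target-task labels
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()

Freezing the backbone and training only the new head is the cheapest form of transfer; when more target data is available, the usual next step is to unfreeze some or all layers and continue training with a small learning rate.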

Papers

Showing 9926–9950 of 10307 papers

Title | Status | Hype
ODM3D: Alleviating Foreground Sparsity for Semi-Supervised Monocular 3D Object Detection | Code | 0
OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics | Code | 0
On Architectures for Including Visual Information in Neural Language Models for Image Description | Code | 0
On Causal and Anticausal Learning | Code | 0
On Characterizing the Evolution of Embedding Space of Neural Networks using Algebraic Topology | Code | 0
On Constrained Spectral Clustering and Its Applications | Code | 0
One Deep Music Representation to Rule Them All?: A comparative analysis of different representation learning strategies | Code | 0
One for Dozens: Adaptive REcommendation for All Domains with Counterfactual Augmentation | Code | 0
One Self-Configurable Model to Solve Many Abstract Visual Reasoning Problems | Code | 0
One-Shot Segmentation of Novel White Matter Tracts via Extensive Data Augmentation | Code | 0
One-shot Transfer Learning for Population Mapping | Code | 0
On Generalizing Detection Models for Unconstrained Environments | Code | 0
On Inductive Biases for Machine Learning in Data Constrained Settings | Code | 0
Online Knowledge Distillation with Diverse Peers | Code | 0
Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition | Code | 0
On statistic alignment for domain adaptation in structural health monitoring | Code | 0
On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task | Code | 0
On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules | Code | 0
On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers | Code | 0
On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning | Code | 0
On the Effectiveness of Vision Transformers for Zero-shot Face Anti-Spoofing | Code | 0
On the Generalizability of Foundation Models for Crop Type Mapping | Code | 0
On the Generalization vs Fidelity Paradox in Knowledge Distillation | Code | 0
On the Generation of Medical Dialogues for COVID-19 | Code | 0
On the importance of cross-task features for class-incremental learning | Code | 0
Page 398 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | – | Unverified
2 | DFA-ENT | Accuracy | 69.2 | – | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | – | Unverified
4 | EasyTL | Accuracy | 63.3 | – | Unverified
5 | MEDA | Accuracy | 60.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | – | Unverified