SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
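The description above can be sketched in code. The following is a minimal, self-contained illustration (not any specific paper's method): the "pretrained" backbone is a frozen random stand-in for weights that would normally come from a large source task, and only a new logistic-regression head is trained on the target task. All names, shapes, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor. In real transfer learning these weights
# come from a model trained on a large source dataset; here they are
# random stand-ins, and they stay frozen (never updated).
W_backbone = rng.normal(size=(8, 16))

def extract_features(x):
    """Frozen backbone: linear projection followed by ReLU."""
    return np.maximum(x @ W_backbone, 0.0)

# Small hypothetical target-task dataset: 64 samples, 8 raw features,
# binary labels derived from the first two features.
X = rng.normal(size=(64, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# New task head: a logistic-regression layer on top of frozen features.
feats = extract_features(X)
w = np.zeros(16)
b = 0.0
lr = 0.1

def head_loss(w, b):
    """Binary cross-entropy of the head on the target task."""
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial = head_loss(w, b)
for _ in range(200):  # plain gradient descent; only the head is updated
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()
final = head_loss(w, b)
```

Freezing the backbone and training only the head is the cheapest form of transfer; fine-tuning, mentioned above, would additionally update the backbone weights with a small learning rate.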

Papers

Showing 1601–1650 of 10307 papers

Title | Status | Hype
Improved Training for 3D Point Cloud Classification | Code | 0
Improving 3D Medical Image Segmentation at Boundary Regions using Local Self-attention and Global Volume Mixing | Code | 0
Transferring Robustness for Graph Neural Network Against Poisoning Attacks | Code | 0
A Framework for Supervised Heterogeneous Transfer Learning using Dynamic Distribution Adaptation and Manifold Regularization | Code | 0
ArmanEmo: A Persian Dataset for Text-based Emotion Detection | Code | 0
Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction | Code | 0
ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling | Code | 0
A Framework for Few-Shot Policy Transfer through Observation Mapping and Behavior Cloning | Code | 0
Are you sure it's an artifact? Artifact detection and uncertainty quantification in histological images | Code | 0
ImageNot: A contrast with ImageNet preserves model rankings | Code | 0
Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning | Code | 0
IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages | Code | 0
Identifying Misinformation on YouTube through Transcript Contextual Analysis with Transformer Models | Code | 0
Learning to Collaborate Over Graphs: A Selective Federated Multi-Task Learning Approach | Code | 0
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models | Code | 0
Image-based EEG classification of brain responses to song recordings | Code | 0
Are we done with object recognition? The iCub robot's perspective | Code | 0
Tuned Compositional Feature Replays for Efficient Stream Learning | Code | 0
ICICLE: Interpretable Class Incremental Continual Learning | Code | 0
Accounts of using the Tustin-Net architecture on a rotary inverted pendulum | Code | 0
Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning | Code | 0
Identification of head impact locations, speeds, and force based on head kinematics | Code | 0
Imitation Learning for Generalizable Self-driving Policy with Sim-to-real Transfer | Code | 0
Aff-Wild Database and AffWildNet | Code | 0
HyperBO+: Pre-training a universal prior for Bayesian optimization with hierarchical Gaussian processes | Code | 0
A Review and Implementation of Object Detection Models and Optimizations for Real-time Medical Mask Detection during the COVID-19 Pandemic | Code | 0
Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization | Code | 0
A Resource-Efficient Training Framework for Remote Sensing Text–Image Retrieval | Code | 0
Human Genome Book: Words, Sentences and Paragraphs | Code | 0
Human-Inspired Framework to Accelerate Reinforcement Learning | Code | 0
Hyperparameters in Score-Based Membership Inference Attacks | Code | 0
A Regularization-based Transfer Learning Method for Information Extraction via Instructed Graph Decoder | Code | 0
HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation | Code | 0
Are ECGs enough? Deep learning classification of cardiac anomalies using only electrocardiograms | Code | 0
HR-VILAGE-3K3M: A Human Respiratory Viral Immunization Longitudinal Gene Expression Dataset for Systems Immunity | Code | 0
hULMonA: The Universal Language Model in Arabic | Code | 0
How Well Do Vision Transformers (VTs) Transfer To The Non-Natural Image Domain? An Empirical Study Involving Art Classification | Code | 0
Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings | Code | 0
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment | Code | 0
How should we evaluate supervised hashing? | Code | 0
How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks | Code | 0
Arabic Dialect Identification Using BERT Fine-Tuning | Code | 0
How Language-Neutral is Multilingual BERT? | Code | 0
How to tackle an emerging topic? Combining strong and weak labels for Covid news NER | Code | 0
Training-Free Acceleration of ViTs with Delayed Spatial Merging | Code | 0
How good are variational autoencoders at transfer learning? | Code | 0
Accelerating Transfer Learning with Near-Data Computation on Cloud Object Stores | Code | 0
How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes | Code | 0
How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change | Code | 0
Aesthetic Attributes Assessment of Images | Code | 0
Page 33 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified