SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model rather than learning from scratch. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
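The idea above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not any particular library's API: the "pre-trained backbone" is simulated as a fixed random linear map, and only a small new classification head is trained on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" feature extractor: a fixed linear map standing
# in for a backbone trained on a source task (e.g. a CNN trained on ImageNet).
W_pretrained = rng.normal(size=(16, 4))  # frozen; never updated below

def extract_features(x):
    # Frozen backbone forward pass.
    return np.tanh(x @ W_pretrained)

# Small target-task dataset (too small to train a full model from scratch).
X = rng.normal(size=(40, 16))
y = (X[:, 0] > 0).astype(float)

# Fine-tune ONLY a new logistic-regression head on top of the frozen features.
w_head = np.zeros(4)
b_head = 0.0
lr = 0.5
for _ in range(200):
    f = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ w_head + b_head)))  # sigmoid
    grad = p - y                                       # logistic-loss gradient
    w_head -= lr * f.T @ grad / len(X)
    b_head -= lr * grad.mean()

train_acc = ((p > 0.5) == y).mean()
```

In practice the same pattern appears as "freeze the backbone, replace the final layer": the pre-trained weights are left untouched (or updated with a much smaller learning rate) while the new task-specific head is trained on the limited target data.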

Papers

Showing 1601–1625 of 10307 papers

| Title | Status | Hype |
| --- | --- | --- |
| Incremental Sequence Learning | Code | 0 |
| ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling | Code | 0 |
| A Framework for Few-Shot Policy Transfer through Observation Mapping and Behavior Cloning | Code | 0 |
| Are you sure it’s an artifact? Artifact detection and uncertainty quantification in histological images | Code | 0 |
| Human-Inspired Framework to Accelerate Reinforcement Learning | Code | 0 |
| hULMonA: The Universal Language Model in Arabic | Code | 0 |
| HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation | Code | 0 |
| Learning to Collaborate Over Graphs: A Selective Federated Multi-Task Learning Approach | Code | 0 |
| Human Genome Book: Words, Sentences and Paragraphs | Code | 0 |
| HyperBO+: Pre-training a universal prior for Bayesian optimization with hierarchical Gaussian processes | Code | 0 |
| Are we done with object recognition? The iCub robot's perspective | Code | 0 |
| How Well Do Vision Transformers (VTs) Transfer To The Non-Natural Image Domain? An Empirical Study Involving Art Classification | Code | 0 |
| How transfer learning is used in generative models for image classification: improved accuracy | Code | 0 |
| How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks | Code | 0 |
| How to tackle an emerging topic? Combining strong and weak labels for Covid news NER | Code | 0 |
| How Language-Neutral is Multilingual BERT? | Code | 0 |
| Accounts of using the Tustin-Net architecture on a rotary inverted pendulum | Code | 0 |
| How should we evaluate supervised hashing? | Code | 0 |
| How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change | Code | 0 |
| HR-VILAGE-3K3M: A Human Respiratory Viral Immunization Longitudinal Gene Expression Dataset for Systems Immunity | Code | 0 |
| HOUDINI: Lifelong Learning as Program Synthesis | Code | 0 |
| Aff-Wild Database and AffWildNet | Code | 0 |
| How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes | Code | 0 |
| A Review and Implementation of Object Detection Models and Optimizations for Real-time Medical Mask Detection during the COVID-19 Pandemic | Code | 0 |
| Hostility Detection in Hindi leveraging Pre-Trained Language Models | Code | 0 |
Page 65 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | — | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | — | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | — | Unverified |
| 5 | MEDA | Accuracy | 60.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified |