SOTAVerified

Memorization

Papers

Showing 51–75 of 1088 papers

Title | Status | Hype
DISC: Learning From Noisy Labels via Dynamic Instance-Specific Selection and Correction | Code | 1
Do We Need Zero Training Loss After Achieving Zero Training Error? | Code | 1
Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning | Code | 1
Early-Learning Regularization Prevents Memorization of Noisy Labels | Code | 1
Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models | Code | 1
An Empirical Study of Memorization in NLP | Code | 1
Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models | Code | 1
Elephants Never Forget: Testing Language Models for Memorization of Tabular Data | Code | 1
AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ | Code | 1
Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation | Code | 1
Exploring Memorization in Adversarial Training | Code | 1
Antipodes of Label Differential Privacy: PATE and ALIBI | Code | 1
Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning | Code | 1
DAT: Training Deep Networks Robust To Label-Noise by Matching the Feature Distributions | Code | 1
Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models | Code | 1
A comparison of LSTM and GRU networks for learning symbolic sequences | Code | 1
Data Unlearning in Diffusion Models | Code | 1
Mitigating Memorization of Noisy Labels via Regularization between Representations | Code | 1
C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation | Code | 1
Cousins Of The Vendi Score: A Family Of Similarity-Based Diversity Metrics For Science And Machine Learning | Code | 1
DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity | Code | 1
AlleNoise: large-scale text classification benchmark dataset with real-world label noise | Code | 1
Copyright Traps for Large Language Models | Code | 1
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels | Code | 1
Data Contamination Can Cross Language Barriers | Code | 1
Page 3 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PaLM-540B (few-shot, k=5) | Accuracy | 95.4 | | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 80 | | Unverified
3 | PaLM-62B (few-shot, k=5) | Accuracy | 77.7 | | Unverified