SOTAVerified

Memorization

Papers

Showing 176-200 of 1088 papers

Title | Status | Hype
DAT: Training Deep Networks Robust To Label-Noise by Matching the Feature Distributions | Code | 1
Antipodes of Label Differential Privacy: PATE and ALIBI | Code | 1
Exploring Memorization in Adversarial Training | Code | 1
Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution | Code | 1
Learning to Generate Novel Scene Compositions from Single Images and Videos | Code | 1
Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels | Code | 1
Generating Novel Scene Compositions from Single Images and Videos | Code | 1
Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification | Code | 1
Hyperspectral Image Super-Resolution with Spectral Mixup and Heterogeneous Datasets | Code | 1
Grounding Consistency: Distilling Spatial Common Sense for Precise Visual Relationship Detection | Code | 1
Multi-Objective Interpolation Training for Robustness to Label Noise | Code | 1
SuperLoss: A Generic Loss for Robust Curriculum Learning | Code | 1
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting | Code | 1
Learning from Context or Names? An Empirical Study on Neural Relation Extraction | Code | 1
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation | Code | 1
Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets | Code | 1
Jointly Non-Sampling Learning for Knowledge Graph Enhanced Recommendation | Code | 1
Early-Learning Regularization Prevents Memorization of Noisy Labels | Code | 1
Are Pretrained Language Models Symbolic Reasoners Over Knowledge? | Code | 1
Efficient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation | Code | 1
Zero-Shot Compositional Policy Learning via Language Grounding | Code | 1
Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors | Code | 1
State-of-the-Art Augmented NLP Transformer models for direct and single-step retrosynthesis | Code | 1
Do We Need Zero Training Loss After Achieving Zero Training Error? | Code | 1
Improving Generalization by Controlling Label-Noise Information in Neural Network Weights | Code | 1
Page 8 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PaLM-540B (few-shot, k=5) | Accuracy | 95.4 | | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 80 | | Unverified
3 | PaLM-62B (few-shot, k=5) | Accuracy | 77.7 | | Unverified