SOTAVerified

Memorization

Papers

Showing 151–200 of 1088 papers

| Title | Status | Hype |
|---|---|---|
| SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels | Code | 1 |
| Self-Guided Learning to Denoise for Robust Recommendation | Code | 1 |
| UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning | Code | 1 |
| An Empirical Study of Memorization in NLP | Code | 1 |
| Euler State Networks: Non-dissipative Reservoir Computing | Code | 1 |
| Do Language Models Plagiarize? | Code | 1 |
| On Learning Contrastive Representations for Learning with Noisy Labels | Code | 1 |
| Membership Inference Attacks and Defenses in Neural Network Pruning | Code | 1 |
| Towards Adversarial Evaluations for Inexact Machine Unlearning | Code | 1 |
| Reasoning Through Memorization: Nearest Neighbor Knowledge Graph Embeddings | Code | 1 |
| Reconstructing Training Data with Informed Adversaries | Code | 1 |
| Learning With Twin Noisy Labels for Visible-Infrared Person Re-Identification | Code | 1 |
| Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks | Code | 1 |
| Quantifying Adaptability in Pre-trained Language Models with 500 Tasks | Code | 1 |
| Learning with Noisy Correspondence for Cross-modal Matching | Code | 1 |
| Personalized Federated Learning through Local Memorization | Code | 1 |
| Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations | Code | 1 |
| Mitigating Memorization of Noisy Labels via Regularization between Representations | Code | 1 |
| Adaptive Early-Learning Correction for Segmentation from Noisy Annotations | Code | 1 |
| Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning | Code | 1 |
| Learning Transferable Parameters for Unsupervised Domain Adaptation | Code | 1 |
| Consensual Collaborative Training And Knowledge Distillation Based Facial Expression Recognition Under Noisy Annotations | Code | 1 |
| A comparison of LSTM and GRU networks for learning symbolic sequences | Code | 1 |
| Understanding and Improving Early Stopping for Learning with Noisy Labels | Code | 1 |
| Graph Convolutional Memory using Topological Priors | Code | 1 |
| DAT: Training Deep Networks Robust To Label-Noise by Matching the Feature Distributions | Code | 1 |
| Antipodes of Label Differential Privacy: PATE and ALIBI | Code | 1 |
| Exploring Memorization in Adversarial Training | Code | 1 |
| Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution | Code | 1 |
| Learning to Generate Novel Scene Compositions from Single Images and Videos | Code | 1 |
| Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels | Code | 1 |
| Generating Novel Scene Compositions from Single Images and Videos | Code | 1 |
| Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification | Code | 1 |
| Hyperspectral Image Super-Resolution with Spectral Mixup and Heterogeneous Datasets | Code | 1 |
| Grounding Consistency: Distilling Spatial Common Sense for Precise Visual Relationship Detection | Code | 1 |
| Multi-Objective Interpolation Training for Robustness to Label Noise | Code | 1 |
| SuperLoss: A Generic Loss for Robust Curriculum Learning | Code | 1 |
| Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting | Code | 1 |
| Learning from Context or Names? An Empirical Study on Neural Relation Extraction | Code | 1 |
| What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation | Code | 1 |
| Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets | Code | 1 |
| Jointly Non-Sampling Learning for Knowledge Graph Enhanced Recommendation | Code | 1 |
| Early-Learning Regularization Prevents Memorization of Noisy Labels | Code | 1 |
| Are Pretrained Language Models Symbolic Reasoners Over Knowledge? | Code | 1 |
| Efficient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation | Code | 1 |
| Zero-Shot Compositional Policy Learning via Language Grounding | Code | 1 |
| Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors | Code | 1 |
| State-of-the-Art Augmented NLP Transformer models for direct and single-step retrosynthesis | Code | 1 |
| Do We Need Zero Training Loss After Achieving Zero Training Error? | Code | 1 |
| Improving Generalization by Controlling Label-Noise Information in Neural Network Weights | Code | 1 |
Page 4 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PaLM-540B (few-shot, k=5) | Accuracy | 95.4 | — | Unverified |
| 2 | Gopher-280B (few-shot, k=5) | Accuracy | 80 | — | Unverified |
| 3 | PaLM-62B (few-shot, k=5) | Accuracy | 77.7 | — | Unverified |