| Title | Date | Topics | Code | Stars |
| --- | --- | --- | --- | --- |
| Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs | Feb 23, 2025 | Data Poisoning, Diagnostic | Unverified | 0 |
| Interrogating LLM design under a fair learning doctrine | Feb 22, 2025 | Memorization | Unverified | 0 |
| Generative AI Training and Copyright Law | Feb 21, 2025 | Memorization | Unverified | 0 |
| CopyJudge: Automated Copyright Infringement Identification and Mitigation in Text-to-Image Diffusion Models | Feb 21, 2025 | Memorization | Unverified | 0 |
| Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training | Feb 21, 2025 | Language Modeling | Code Available | 0 |
| LIFT: Improving Long Context Understanding of Large Language Models through Long Input Fine-Tuning | Feb 20, 2025 | In-Context Learning, Long-Context Understanding | Unverified | 0 |
| Obliviate: Efficient Unmemorization for Protecting Intellectual Property in Large Language Models | Feb 20, 2025 | HellaSwag, Memorization | Unverified | 0 |
| Quantifying Memorization and Retriever Performance in Retrieval-Augmented Vision-Language Models | Feb 19, 2025 | Memorization, Question Answering | Unverified | 0 |
| Pruning as a Defense: Reducing Memorization in Large Language Models | Feb 18, 2025 | Memorization | Unverified | 0 |
| None of the Others: a General Technique to Distinguish Reasoning from Memorization in Multiple-Choice LLM Evaluation Benchmarks | Feb 18, 2025 | Math, Memorization | Unverified | 0 |