| Only My Model On My Data: A Privacy Preserving Approach Protecting one Model and Deceiving Unauthorized Black-Box Models | Feb 14, 2024 | Adversarial Attack, Classification | Unverified | 0 |
| Privacy-Preserving Language Model Inference with Instance Obfuscation | Feb 13, 2024 | Benchmarking, Language Modeling | Unverified | 0 |
| Differentially Private Distributed Inference | Feb 13, 2024 | Decision Making, Privacy Preserving | Code Available | 0 |
| Differentially Private Training of Mixture of Experts Models | Feb 11, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| RQP-SGD: Differential Private Machine Learning through Noisy SGD and Randomized Quantization | Feb 9, 2024 | Privacy Preserving, Quantization | Unverified | 0 |
| On the Convergence of Zeroth-Order Federated Tuning for Large Language Models | Feb 8, 2024 | Federated Learning, GPU | Unverified | 0 |
| Version age-based client scheduling policy for federated learning | Feb 8, 2024 | Federated Learning, Privacy Preserving | Unverified | 0 |
| Privacy-Preserving Synthetic Continual Semantic Segmentation for Robotic Surgery | Feb 8, 2024 | Continual Learning, Continual Semantic Segmentation | Code Available | 0 |
| Disparate Impact on Group Accuracy of Linearization for Private Inference | Feb 6, 2024 | Fairness, Privacy Preserving | Code Available | 0 |
| On the Impact of Output Perturbation on Fairness in Binary Linear Classification | Feb 5, 2024 | Fairness, Privacy Preserving | Unverified | 0 |