| Title | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives | May 24, 2023 | Knowledge Distillation, QNLI | Code Available | 1 |
| Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | May 21, 2023 | Abstract Meaning Representation, Contrastive Learning | Code Available | 1 |
| Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study | Apr 3, 2025 | CoLA, Denoising | Code Available | 0 |
| Privacy-preserving Fine-tuning of Large Language Models through Flatness | Mar 7, 2024 | Knowledge Distillation, Privacy Preserving | Unverified | 0 |
| Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge | Feb 29, 2024 | QNLI, SST-2 | Code Available | 0 |
| NewsQs: Multi-Source Question Generation for the Inquiring Mind | Feb 28, 2024 | Articles, Document Summarization | Unverified | 0 |
| Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | Jul 14, 2023 | QNLI, QQP | Unverified | 0 |
| Meta-training with Demonstration Retrieval for Efficient Few-shot Learning | Jun 30, 2023 | Few-Shot Learning, GPU | Unverified | 0 |
| Two-in-One: A Model Hijacking Attack Against Text Generation Models | May 12, 2023 | Classification, Face Recognition | Unverified | 0 |
| Few-shot Multimodal Multitask Multilingual Learning | Feb 19, 2023 | Few-Shot Learning, In-Context Learning | Unverified | 0 |