| Title | Date | Tasks | Code | Implementations |
| --- | --- | --- | --- | --- |
| How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives | May 24, 2023 | Knowledge Distillation, QNLI | Code Available | 1 |
| Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | May 21, 2023 | Abstract Meaning Representation, Contrastive Learning | Code Available | 1 |
| EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English | Mar 28, 2022 | Cultural Vocal Bursts Intensity Prediction, Language Modeling | Unverified | 0 |
| Few-shot Multimodal Multitask Multilingual Learning | Feb 19, 2023 | Few-Shot Learning, In-Context Learning | Unverified | 0 |
| Two-in-One: A Model Hijacking Attack Against Text Generation Models | May 12, 2023 | Classification, Face Recognition | Unverified | 0 |
| DAWSON: Data Augmentation using Weak Supervision On Natural Language | Nov 16, 2021 | Data Augmentation, Language Modeling | Unverified | 0 |
| On the Importance of Local Information in Transformer Based Models | Aug 13, 2020 | de-en, MRPC | Unverified | 0 |
| Privacy-preserving Fine-tuning of Large Language Models through Flatness | Mar 7, 2024 | Knowledge Distillation, Privacy Preserving | Unverified | 0 |
| Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | Jul 14, 2023 | QNLI, QQP | Unverified | 0 |
| An Automatic and Efficient BERT Pruning for Edge AI Systems | Jun 21, 2022 | CPU, Model Compression | Unverified | 0 |