| Title | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| Predicting Attention Sparsity in Transformers | Sep 24, 2021 | Decoder, Language Modeling | Unverified | 0 |
| MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection | Sep 16, 2021 | Attribute, Language Modeling | Code Available | 0 |
| SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations | Sep 15, 2021 | CoLA, Contrastive Learning | Code Available | 1 |
| CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation | Sep 13, 2021 | Decoder, Denoising | Code Available | 1 |
| Data Efficient Masked Language Modeling for Vision and Language | Sep 5, 2021 | Language Modeling | Code Available | 1 |
| Frustratingly Simple Pretraining Alternatives to Masked Language Modeling | Sep 4, 2021 | Language Modeling | Code Available | 1 |
| Split-and-Rephrase in a Cross-Lingual Manner: A Complete Pipeline | Sep 1, 2021 | Language Modeling | Unverified | 0 |
| Domain-Specific Japanese ELECTRA Model Using a Small Corpus | Sep 1, 2021 | Articles, Computational Efficiency | Unverified | 0 |
| CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations | Sep 1, 2021 | Emotion Classification, Language Modeling | Code Available | 1 |
| Sentence Bottleneck Autoencoders from Transformer Language Models | Aug 31, 2021 | Decoder, Denoising | Code Available | 1 |