| Title | Date | Task | Code |
| --- | --- | --- | --- |
| Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction | Sep 14, 2021 | Dependency Parsing | Code Available |
| Matrix Factorization for Collaborative Filtering Is Just Solving an Adjoint Latent Dirichlet Allocation Model After All | Sep 13, 2021 | Collaborative Filtering | Code Available |
| How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | Sep 13, 2021 | Adversarial Robustness | Code Available |
| Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification | Sep 12, 2021 | Classification | Code Available |
| All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality | Sep 9, 2021 | | Code Available |
| Neural HMMs are all you need (for high-quality attention-free TTS) | Aug 30, 2021 | Speech Synthesis | Code Available |
| Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision | Aug 24, 2021 | Depth Estimation | Code Available |
| Automated Identification of Cell Populations in Flow Cytometry Data with Transformers | Aug 23, 2021 | | Code Available |
| One TTS Alignment To Rule Them All | Aug 23, 2021 | Speech Synthesis | Code Available |
| Fastformer: Additive Attention Can Be All You Need | Aug 20, 2021 | News Recommendation | Code Available |