| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| Harmonic (Quantum) Neural Networks | Dec 14, 2022 | Inductive Bias, Quantum Machine Learning | Unverified | 0 |
| Learning threshold neurons via the "edge of stability" | Dec 14, 2022 | Inductive Bias | Unverified | 0 |
| OAMixer: Object-aware Mixing Layer for Vision Transformers | Dec 13, 2022 | Inductive Bias, Object | Code Available | 0 |
| Simplicity Bias Leads to Amplified Performance Disparities | Dec 13, 2022 | Fairness, Inductive Bias | Unverified | 0 |
| Masked autoencoders are effective solution to transformer data-hungry | Dec 12, 2022 | Contrastive Learning, Inductive Bias | Code Available | 1 |
| Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition | Dec 11, 2022 | Facial Expression Recognition (FER) | Code Available | 1 |
| Relate to Predict: Towards Task-Independent Knowledge Representations for Reinforcement Learning | Dec 10, 2022 | Inductive Bias, Object | Unverified | 0 |
| General-Purpose In-Context Learning by Meta-Learning Transformers | Dec 8, 2022 | In-Context Learning, Inductive Bias | Code Available | 0 |
| A K-variate Time Series Is Worth K Words: Evolution of the Vanilla Transformer Architecture for Long-term Multivariate Time Series Forecasting | Dec 6, 2022 | Decoder, Inductive Bias | Unverified | 0 |
| Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks | Dec 4, 2022 | Adversarial Robustness, Inductive Bias | Code Available | 0 |