| Paper | Date | Tasks | Code | Implementations |
|---|---|---|---|---|
| MATE: Multi-view Attention for Table Transformer Efficiency | Sep 9, 2021 | Inductive Bias, Question Answering | Unverified | 0 |
| Power to the Relational Inductive Bias: Graph Neural Networks in Electrical Power Grids | Sep 8, 2021 | Inductive Bias, Inductive Learning | Unverified | 0 |
| nnFormer: Interleaved Transformer for Volumetric Segmentation | Sep 7, 2021 | Image Segmentation, Inductive Bias | Code Available | 1 |
| Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis | Sep 7, 2021 | Few-Shot Learning, Inductive Bias | Unverified | 0 |
| Learning Hierarchical Structures with Differentiable Nondeterministic Stacks | Sep 5, 2021 | Inductive Bias, Language Modeling | Code Available | 1 |
| Rethinking Deep Image Prior for Denoising | Aug 29, 2021 | Denoising, Inductive Bias | Code Available | 1 |
| Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation | Aug 27, 2021 | Inductive Bias, Playing the Game of 2048 | Code Available | 2 |
| PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers | Aug 19, 2021 | Decoder, Inductive Bias | Code Available | 1 |
| Learning Type Annotation: Is Big Data Enough? | Aug 18, 2021 | Inductive Bias | Code Available | 0 |
| ConvNets vs. Transformers: Whose Visual Representations are More Transferable? | Aug 11, 2021 | Classification, Depth Estimation | Unverified | 0 |