| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs | Jul 10, 2025 | CoLA, Large Language Model | Unverified | 0 |
| LoRA-Mixer: Coordinate Modular LoRA Experts Through Serial Attention Routing | Jun 17, 2025 | ARC, CoLA | Unverified | 0 |
| CoLA: Collaborative Low-Rank Adaptation | May 21, 2025 | CoLA, Mixture-of-Experts | Code Available | 0 |
| CoLa -- Learning to Interactively Collaborate with Large LMs | Apr 3, 2025 | CoLA, Text Generation | Unverified | 0 |
| Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study | Apr 3, 2025 | CoLA, Denoising | Code Available | 0 |
| Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks | Apr 1, 2025 | CoLA, Continual Learning | Unverified | 0 |
| Controlling Large Language Model with Latent Actions | Mar 27, 2025 | CoLA, Language Modeling | Code Available | 0 |
| CoCo-CoLa: Evaluating and Improving Language Adherence in Multilingual LLMs | Feb 18, 2025 | CoLA | Unverified | 0 |
| CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation | Feb 16, 2025 | CoLA | Code Available | 1 |
| Optimizing Language Models for Grammatical Acceptability: A Comparative Study of Fine-Tuning Techniques | Jan 14, 2025 | CoLA, Computational Efficiency | Unverified | 0 |