| TaskEval: Assessing Difficulty of Code Generation Tasks for Large Language Models | Jul 30, 2024 | Benchmarking, Code Completion | Unverified | 0 |
| SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths | May 30, 2024 | GSM8K, HumanEval | Unverified | 0 |
| Stochastic Code Generation | Apr 14, 2023 | Code Generation, Decoder | Unverified | 0 |
| Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency | Apr 4, 2025 | Benchmarking, GSM8K | Unverified | 0 |
| SwiftEval: Developing a Language-Specific Benchmark for LLM-generated Code Evaluation | May 30, 2025 | Code Generation, HumanEval | Unverified | 0 |
| Synthesize, Partition, then Adapt: Eliciting Diverse Samples from Foundation Models | Nov 11, 2024 | Code Generation, HumanEval | Unverified | 0 |
| Test-Driven Development for Code Generation | Feb 21, 2024 | Code Generation, HumanEval | Unverified | 0 |
| Textbooks Are All You Need | Jun 20, 2023 | All, Code Generation | Unverified | 0 |
| The Art of Repair: Optimizing Iterative Program Repair with Instruction-Tuned Models | May 5, 2025 | HumanEval, Program Repair | Unverified | 0 |
| The Program Testing Ability of Large Language Models for Code | Oct 9, 2023 | HumanEval, MBPP | Unverified | 0 |