SOTAVerified

HumanEval

Papers

Showing 101–125 of 264 papers

Title | Status | Hype
Planning In Natural Language Improves LLM Search For Code Generation | Code | 1
SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning | Code | 1
Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting | | 0
Benchmarking AI Models in Software Engineering: A Review, Search Tool, and Enhancement Protocol | | 0
Addressing Data Leakage in HumanEval Using Combinatorial Test Design | | 0
Inference-Aware Fine-Tuning for Best-of-N Sampling in Large Language Models | | 0
BASS: Batched Attention-optimized Speculative Sampling | | 0
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | | 0
CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models | | 0
Large Language Model Guided Self-Debugging Code Generation | | 0
Guideline Forest: Experience-Induced Multi-Guideline Reasoning with Stepwise Aggregation | | 0
AutoTest: Evolutionary Code Solution Selection with Test Cases | | 0
An LLM-as-Judge Metric for Bridging the Gap with Human Evaluation in SE Tasks | | 0
Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge | | 0
Learning How To Ask: Cycle-Consistency Refines Prompts in Multimodal Foundation Models | | 0
Guided Code Generation with LLMs: A Multi-Agent Framework for Complex Code Tasks | | 0
Guaranteed Guess: A Language Modeling Approach for CISC-to-RISC Transpilation with Testing Guarantees | | 0
GRIN: GRadient-INformed MoE | | 0
Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs? | | 0
CodeShell Technical Report | | 0
Code-Optimise: Self-Generated Preference Data for Correctness and Efficiency | | 0
G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks | | 0
CodeMixBench: Evaluating Large Language Models on Code Generation with Code-Mixed Prompts | | 0
InfiFusion: A Unified Framework for Enhanced Cross-Model Reasoning via LLM Fusion | | 0
Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment | | 0
Page 5 of 11
