SOTAVerified

Code Completion

Papers

Showing 126–150 of 212 papers

Title | Status | Hype
Full Line Code Completion: Bringing AI to Desktop | | 0
Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach | | 0
Rethinking Software Engineering in the Foundation Model Era: From Task-Driven AI Copilots to Goal-Driven AI Pair Programmers | | 0
Is Next Token Prediction Sufficient for GPT? Exploration on Code Logic Comprehension | | 0
Stable Code Technical Report | | 0
Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study | Code | 0
Repoformer: Selective Retrieval for Repository-Level Code Completion | | 0
Token Alignment via Character Matching for Subword Completion | | 0
Insights from the Usage of the Ansible Lightspeed Code Completion Service | | 0
REPOFUSE: Repository-Level Code Completion with Fused Dual Context | | 0
Context Composing for Full Line Code Completion | | 0
Neural Models for Source Code Synthesis and Completion | | 0
Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates | | 0
Enhancing LLM-Based Coding Tools through Native Integration of IDE-Derived Static Context | | 0
OMPGPT: A Generative Pre-trained Transformer Model for OpenMP | | 0
When Neural Code Completion Models Size up the Situation: Attaining Cheaper and Faster Completion through Dynamic Model Inference | Code | 0
Traces of Memorisation in Large Language Models for Code | Code | 0
A Review of Repository Level Prompting for LLMs | | 0
Breaking the Silence: the Threats of Using LLMs in Software Engineering | Code | 0
INSPECT: Intrinsic and Systematic Probing Evaluation for Code Transformers | Code | 0
Interpretability Illusions in the Generalization of Simplified Models | | 0
GenCodeSearchNet: A Benchmark Test Suite for Evaluating Generalization in Programming Language Understanding | Code | 0
Past as a Guide: Leveraging Retrospective Learning for Python Code Completion | | 0
Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications | | 0
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | deepseek-coder-33b-base | Average | 69.01 | | Unverified
2 | deepseek-coder-6.7b-base | Average | 63.4 | | Unverified
3 | starcoderbase | Average | 55.54 | | Unverified
4 | gpt-4-1106-preview | Average | 53.28 | | Unverified
5 | CodeLlama-13b-hf | Average | 52.78 | | Unverified
6 | deepseek-coder-1.3b-base | Average | 52.63 | | Unverified
7 | CodeLlama-34b-hf | Average | 49.66 | | Unverified
8 | CodeLlama-7b-hf | Average | 45 | | Unverified
9 | gpt-3.5-turbo-0301 | Average | 40.86 | | Unverified
10 | incoder-6B | Average | 33.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 77.13 | | Unverified
2 | CodeT5+ 770M | EM (line-level) | 37.9 | | Unverified
3 | CodeT5+ 220M | EM (line-level) | 35.17 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 75.11 | | Unverified
2 | CodeT5+ 770M | EM (line-level) | 44.86 | | Unverified
3 | CodeT5+ 220M | EM (line-level) | 43.42 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SantaCoder-MGD | Compilation Rate | 73.03 | | Unverified
2 | SantaCoder | Compilation Rate | 59.97 | | Unverified
3 | SantaCoder | Compilation Rate | 59.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 76.47 | | Unverified
2 | RepoCoder | Compilation Rate | 74.02 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 61.7 | | Unverified
2 | RepoCoder | Compilation Rate | 58.09 | | Unverified