SOTAVerified

Code Completion

Papers

Showing 151–200 of 212 papers

Title | Status | Hype
Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy | - | 0
M2rc-Eval: Massively Multilingual Repository-level Code Completion Evaluation | - | 0
MarsCode Agent: AI-native Automated Bug Fixing | - | 0
OMPGPT: A Generative Pre-trained Transformer Model for OpenMP | - | 0
On Explaining (Large) Language Models For Code Using Global Code-Based Explanations | - | 0
Past as a Guide: Leveraging Retrospective Learning for Python Code Completion | - | 0
Plan for Speed -- Dilated Scheduling for Masked Diffusion Language Models | - | 0
Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents | - | 0
Protect Your Secrets: Understanding and Measuring Data Exposure in VSCode Extensions | - | 0
R2C2-Coder: Enhancing and Benchmarking Real-world Repository-level Code Completion Abilities of Code Large Language Models | - | 0
Repoformer: Selective Retrieval for Repository-Level Code Completion | - | 0
REPOFUSE: Repository-Level Code Completion with Fused Dual Context | - | 0
RepoFusion: Training Code Models to Understand Your Repository | - | 0
RepoMasterEval: Evaluating Code Completion via Real-World Repositories | - | 0
Rethinking Software Engineering in the Foundation Model Era: From Task-Driven AI Copilots to Goal-Driven AI Pair Programmers | - | 0
Retrieval-augmented code completion for local projects using large language models | - | 0
RTLRepoCoder: Repository-Level RTL Code Completion through the Combination of Fine-Tuning and Retrieval Augmentation | - | 0
SecureFalcon: Are We There Yet in Automated Software Vulnerability Detection with LLMs? | - | 0
Sequence Model Design for Code Completion in the Modern IDE | - | 0
Serenity: Library Based Python Code Analysis for Code Completion and Automated Machine Learning | - | 0
Stable Code Technical Report | - | 0
Statically Contextualizing Large Language Models with Typed Holes | - | 0
Structure-Aware Corpus Construction and User-Perception-Aligned Metrics for Large-Language-Model Code Completion | - | 0
TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models | - | 0
TASTY: A Transformer based Approach to Space and Time complexity | - | 0
Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango | - | 0
Token Alignment via Character Matching for Subword Completion | - | 0
Toward Less Hidden Cost of Code Completion with Acceptance and Ranking Models | - | 0
Towards Full-line Code Completion with Neural Language Models | - | 0
User-Interactive Machine Learning Model for Identifying Structural Relationships of Code Features | - | 0
What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code | - | 0
TaskEval: Assessing Difficulty of Code Generation Tasks for Large Language Models | - | 0
Eclipse CDT code analysis and unit testing | Code | 0
ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation Grounding | Code | 0
A Transformer-Based Approach for Smart Invocation of Automatic Code Completion | Code | 0
CodeT5+: Open Code Large Language Models for Code Understanding and Generation | Code | 0
Neural Software Analysis | Code | 0
Open Vocabulary Learning on Source Code with a Graph-Structured Cache | Code | 0
On the Embeddings of Variables in Recurrent Neural Networks for Source Code | Code | 0
CodeMark: Imperceptible Watermarking for Code Datasets against Neural Code Completion Models | Code | 0
MERGE: Fast Private Text Generation | Code | 0
Traces of Memorisation in Large Language Models for Code | Code | 0
Breaking the Silence: the Threats of Using LLMs in Software Engineering | Code | 0
Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems | Code | 0
Time-Efficient Code Completion Model for the R Programming Language | Code | 0
Pythia: AI-assisted Code Completion System | Code | 0
CodeKGC: Code Language Model for Generative Knowledge Graph Construction | Code | 0
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks | Code | 0
Large Language Models of Code Fail at Completing Code with Potential Bugs | Code | 0
Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study | Code | 0
Page 4 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | deepseek-coder-33b-base | Average | 69.01 | - | Unverified
2 | deepseek-coder-6.7b-base | Average | 63.4 | - | Unverified
3 | starcoderbase | Average | 55.54 | - | Unverified
4 | gpt-4-1106-preview | Average | 53.28 | - | Unverified
5 | CodeLlama-13b-hf | Average | 52.78 | - | Unverified
6 | deepseek-coder-1.3b-base | Average | 52.63 | - | Unverified
7 | CodeLlama-34b-hf | Average | 49.66 | - | Unverified
8 | CodeLlama-7b-hf | Average | 45 | - | Unverified
9 | gpt-3.5-turbo-0301 | Average | 40.86 | - | Unverified
10 | incoder-6B | Average | 33.79 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 77.13 | - | Unverified
2 | CodeT5+ 770M | EM (line-level) | 37.9 | - | Unverified
3 | CodeT5+ 220M | EM (line-level) | 35.17 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 75.11 | - | Unverified
2 | CodeT5+ 770M | EM (line-level) | 44.86 | - | Unverified
3 | CodeT5+ 220M | EM (line-level) | 43.42 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SantaCoder-MGD | Compilation Rate | 73.03 | - | Unverified
2 | SantaCoder | Compilation Rate | 59.97 | - | Unverified
3 | SantaCoder | Compilation Rate | 59.79 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 76.47 | - | Unverified
2 | RepoCoder | Compilation Rate | 74.02 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 61.7 | - | Unverified
2 | RepoCoder | Compilation Rate | 58.09 | - | Unverified