SOTAVerified

Code Completion

Papers

Showing 151–175 of 212 papers

Title | Status | Hype
LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluations | Code | 1
Exploring ChatGPT's Ability to Rank Content: A Preliminary Study on Consistency with Human Preferences | | 0
From Copilot to Pilot: Towards AI Supported Software Development | | 0
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection | Code | 4
Learning Deep Semantics for Test Completion | Code | 1
Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions | | 0
Automating Code-Related Tasks Through Transformers: The Impact of Pre-training | Code | 0
Execution-based Code Generation using Deep Reinforcement Learning | Code | 1
Serenity: Library Based Python Code Analysis for Code Completion and Automated Machine Learning | | 0
Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities | | 0
CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context | Code | 1
MultiCoder: Multi-Programming-Lingual Pre-Training for Low-Resource Code Completion | | 0
Syntax-Aware On-the-Fly Code Completion | Code | 0
Multi-lingual Evaluation of Code Generation Models | Code | 1
Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming | Code | 1
Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango | | 0
Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems | Code | 0
MetaTPTrans: A Meta Learning Approach for Multilingual Code Representation Learning | Code | 1
HierarchyNet: Learning to Summarize Source Code with Heterogeneous Representations | | 0
All You Need Is Logs: Improving Code Completion by Learning from Anonymous IDE Usage Logs | | 0
Productivity Assessment of Neural Code Completion | Code | 1
ReACC: A Retrieval-Augmented Code Completion Framework | Code | 1
Compilable Neural Code Generation with Compiler Feedback | | 0
UniXcoder: Unified Cross-Modal Pre-training for Code Representation | Code | 1
CodeFill: Multi-token Code Completion by Jointly Learning from Structure and Naming Sequences | Code | 1
Page 7 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | deepseek-coder-33b-base | Average | 69.01 | | Unverified
2 | deepseek-coder-6.7b-base | Average | 63.4 | | Unverified
3 | starcoderbase | Average | 55.54 | | Unverified
4 | gpt-4-1106-preview | Average | 53.28 | | Unverified
5 | CodeLlama-13b-hf | Average | 52.78 | | Unverified
6 | deepseek-coder-1.3b-base | Average | 52.63 | | Unverified
7 | CodeLlama-34b-hf | Average | 49.66 | | Unverified
8 | CodeLlama-7b-hf | Average | 45 | | Unverified
9 | gpt-3.5-turbo-0301 | Average | 40.86 | | Unverified
10 | incoder-6B | Average | 33.79 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 77.13 | | Unverified
2 | CodeT5+ 770M | EM (line-level) | 37.9 | | Unverified
3 | CodeT5+ 220M | EM (line-level) | 35.17 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CodeGPT-adapted | Accuracy (token-level) | 75.11 | | Unverified
2 | CodeT5+ 770M | EM (line-level) | 44.86 | | Unverified
3 | CodeT5+ 220M | EM (line-level) | 43.42 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SantaCoder-MGD | Compilation Rate | 73.03 | | Unverified
2 | SantaCoder | Compilation Rate | 59.97 | | Unverified
3 | SantaCoder | Compilation Rate | 59.79 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 76.47 | | Unverified
2 | RepoCoder | Compilation Rate | 74.02 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Rambo | Compilation Rate | 61.7 | | Unverified
2 | RepoCoder | Compilation Rate | 58.09 | | Unverified