SOTAVerified

Program Synthesis

Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.

Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
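The search described above can be made concrete with a minimal enumerative sketch: given a specification as input-output examples, brute-force-enumerate expressions in a tiny DSL until one is consistent with every example. The DSL, operator set, and example specification here are illustrative assumptions, not drawn from any paper listed on this page.

```python
import itertools

# Tiny illustrative DSL: integer expressions over the input x and constants.
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}
TERMINALS = ["x", 1, 2, 3]

def evaluate(expr, x):
    """Interpret an expression tree of the form (op, left, right)."""
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def enumerate_exprs(depth):
    """Yield every expression up to the given nesting depth, smallest first."""
    if depth == 0:
        yield from TERMINALS
        return
    yield from enumerate_exprs(depth - 1)
    for op in OPS:
        for left, right in itertools.product(list(enumerate_exprs(depth - 1)), repeat=2):
            yield (op, left, right)

def synthesize(examples, max_depth=2):
    """Return the first expression consistent with all (input, output) pairs."""
    for expr in enumerate_exprs(max_depth):
        if all(evaluate(expr, x) == y for x, y in examples):
            return expr
    return None

# Specification given purely as input-output examples (here, f(x) = 2*x + 1).
examples = [(0, 1), (1, 3), (4, 9)]
program = synthesize(examples)
print(program)  # an expression tree equivalent to 2*x + 1
```

Real synthesizers replace this blind enumeration with the guidance techniques mentioned above — constraint solvers prune inconsistent candidates symbolically, and genetic algorithms mutate and recombine promising partial programs instead of enumerating exhaustively.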

Papers

Showing 1–25 of 423 papers

Title | Status | Hype
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis | Code | 6
Gorilla: Large Language Model Connected with Massive APIs | Code | 6
TikZero: Zero-Shot Text-Guided Graphics Program Synthesis | Code | 5
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | Code | 5
Factorio Learning Environment | Code | 4
Large Language Models Are Human-Level Prompt Engineers | Code | 3
The Surprising Effectiveness of Test-Time Training for Few-Shot Learning | Code | 3
ARC Prize 2024: Technical Report | Code | 3
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation | Code | 3
Comparison of Syntactic and Semantic Representations of Programs in Neural Embeddings | Code | 3
MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | Code | 2
Combining Induction and Transduction for Abstract Reasoning | Code | 2
CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging | Code | 2
Searching Latent Program Spaces | Code | 2
Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions | Code | 2
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM | Code | 2
InCoder: A Generative Model for Code Infilling and Synthesis | Code | 2
CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning | Code | 2
CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | Code | 1
PoE-World: Compositional World Modeling with Products of Programmatic Experts | Code | 1
CodeUpdateArena: Benchmarking Knowledge Editing on API Updates | Code | 1
CodeScholar: Growing Idiomatic Code Examples | Code | 1
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay | Code | 1
Analyzing the Effectiveness of Large Language Models on Text-to-SQL Synthesis | Code | 1
Bug In the Code Stack: Can LLMs Find Bugs in Large Python Code Stacks | Code | 1
Page 1 of 17

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DrRepair | Success rate @ budget 100 | 38.5 | — | Unverified
2 | Multiclass localizer | Success rate @ budget 100 | 34.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DrRepair | Success rate @ budget 100 | 57 | — | Unverified
2 | Multiclass localizer | Success rate @ budget 100 | 53.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CodeTrans-MT-TF-Small | Accuracy | 90.31 | — | Unverified