SOTAVerified

Program Synthesis

Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
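The "specification" in the example-based case is concrete enough to sketch in a few lines. Below is a minimal, illustrative check (all names are ours, not from any particular system) that decides whether a candidate program satisfies a specification given as input-output pairs:

```python
# A minimal sketch: an example-based specification and a correctness check.
# The spec is a list of (input, expected_output) pairs; a candidate program
# is any callable. Names here are illustrative, not from a specific system.

def satisfies(program, examples):
    """True iff `program` matches every input-output example."""
    return all(program(x) == y for x, y in examples)

# Specification: the desired function behaves like x -> x**2 on these inputs.
spec = [(0, 0), (1, 1), (2, 4), (3, 9)]

assert satisfies(lambda x: x * x, spec)      # candidate meets the spec
assert not satisfies(lambda x: 2 * x, spec)  # candidate is rejected
```

A synthesizer's job is then to produce, rather than merely check, a program for which `satisfies` holds.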

Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
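To make the idea of searching the space of possible programs concrete, here is a toy bottom-up enumerative synthesizer, a common baseline technique. It is a sketch under our own assumptions (a tiny expression grammar over one integer variable `x`, with addition, multiplication, and small constants), not any specific published system:

```python
# A minimal sketch of bottom-up enumerative program synthesis from
# input-output examples. Grammar (illustrative): expr ::= x | 1 | 2 | 3
#                                                       | expr + expr
#                                                       | expr * expr
from itertools import product

CONSTS = [1, 2, 3]

def grow(exprs):
    """One enumeration round: combine all existing expressions pairwise."""
    new = []
    for a, b in product(exprs, repeat=2):
        new.append(("add", a, b))
        new.append(("mul", a, b))
    return exprs + new

def evaluate(expr, x):
    """Interpret an expression tree on the input value x."""
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "add" else va * vb

def synthesize(examples, rounds=2):
    """Return the first enumerated expression consistent with all examples."""
    exprs = ["x"] + CONSTS
    for _ in range(rounds + 1):
        for e in exprs:
            if all(evaluate(e, x) == y for x, y in examples):
                return e
        exprs = grow(exprs)
    return None

# Specification given only as examples of the target f(x) = 2*x + 1.
examples = [(0, 1), (1, 3), (2, 5)]
result = synthesize(examples)
assert all(evaluate(result, x) == y for x, y in examples)
```

Real systems tame the combinatorial blow-up this naive loop suffers from, e.g. by pruning observationally equivalent expressions, using constraint solvers, or learning to rank candidates.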

Papers

Showing 1–50 of 423 papers

Title | Status | Hype
Gorilla: Large Language Model Connected with Massive APIs | Code | 6
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis | Code | 6
TikZero: Zero-Shot Text-Guided Graphics Program Synthesis | Code | 5
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | Code | 5
Factorio Learning Environment | Code | 4
ARC Prize 2024: Technical Report | Code | 3
The Surprising Effectiveness of Test-Time Training for Few-Shot Learning | Code | 3
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation | Code | 3
Large Language Models Are Human-Level Prompt Engineers | Code | 3
Comparison of Syntactic and Semantic Representations of Programs in Neural Embeddings | Code | 3
CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging | Code | 2
Searching Latent Program Spaces | Code | 2
Combining Induction and Transduction for Abstract Reasoning | Code | 2
MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | Code | 2
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM | Code | 2
Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions | Code | 2
CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning | Code | 2
InCoder: A Generative Model for Code Infilling and Synthesis | Code | 2
CLEVER: A Curated Benchmark for Formally Verified Code Generation | Code | 1
PoE-World: Compositional World Modeling with Products of Programmatic Experts | Code | 1
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code | Code | 1
OSVBench: Benchmarking LLMs on Specification Generation Tasks for Operating System Verification | Code | 1
TinyverseGP: Towards a Modular Cross-domain Benchmarking Framework for Genetic Programming | Code | 1
AutoIOT: LLM-Driven Automated Natural Language Programming for AIoT Applications | Code | 1
GPIoT: Tailoring Small Language Models for IoT Program Synthesis and Development | Code | 1
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects | Code | 1
MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions | Code | 1
AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing | Code | 1
H-ARC: A Robust Estimate of Human Performance on the Abstraction and Reasoning Corpus Benchmark | Code | 1
Procedural Synthesis of Synthesizable Molecules | Code | 1
CodeUpdateArena: Benchmarking Knowledge Editing on API Updates | Code | 1
Bug In the Code Stack: Can LLMs Find Bugs in Large Python Code Stacks | Code | 1
How Efficient is LLM-Generated Code? A Rigorous & High-Standard Benchmark | Code | 1
Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search | Code | 1
Goals as Reward-Producing Programs | Code | 1
Constrained Decoding for Fill-in-the-Middle Code Language Models via Efficient Left and Right Quotienting of Context-Sensitive Grammars | Code | 1
Pix2Code: Learning to Compose Neural Visual Concepts as Programs | Code | 1
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay | Code | 1
Opening the AI black box: program synthesis via mechanistic interpretability | Code | 1
ReGAL: Refactoring Programs to Discover Generalizable Abstractions | Code | 1
Analyzing the Effectiveness of Large Language Models on Text-to-SQL Synthesis | Code | 1
CodeScholar: Growing Idiomatic Code Examples | Code | 1
Automating the Design of Multigrid Methods with Evolutionary Program Synthesis | Code | 1
KEN: Kernel Extensions using Natural Language | Code | 1
Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA | Code | 1
LILO: Learning Interpretable Libraries by Compressing and Documenting Code | Code | 1
Enhancing Network Management Using Code Generated by Large Language Models | Code | 1
RLTF: Reinforcement Learning from Unit Test Feedback | Code | 1
LambdaBeam: Neural Program Search with Higher-Order Functions and Lambdas | Code | 1
ANPL: Towards Natural Programming with Interactive Decomposition | Code | 1
Page 1 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DrRepair | Success rate @budget 100 | 38.5 | | Unverified
2 | Multiclass localizer | Success rate @budget 100 | 34.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DrRepair | Success rate @budget 100 | 57 | | Unverified
2 | Multiclass localizer | Success rate @budget 100 | 53.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CodeTrans-MT-TF-Small | Accuracy | 90.31 | | Unverified