SOTAVerified

Insights from Benchmarking Frontier Language Models on Web App Code Generation

2024-09-08 · Code Available

Yi Cui

Abstract

This paper presents insights from evaluating 16 frontier large language models (LLMs) on WebApp1K, a benchmark designed to assess the ability of LLMs to generate web application code. The results reveal that while all models possess similar underlying knowledge, their performance is differentiated by the frequency of the mistakes they make. By analyzing lines of code (LOC) and failure distributions, we find that writing correct code is more complex than generating incorrect code. Furthermore, prompt engineering shows limited efficacy in reducing errors beyond specific cases. These findings suggest that further advancements in coding LLMs should emphasize model reliability and mistake minimization.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---------|-------|--------|---------|----------|--------|
| WebApp1K-React | gpt-4o-2024-08-06 | pass@1 | 0.89 | | Unverified |
| WebApp1K-React | claude-3.5-sonnet | pass@1 | 0.88 | | Unverified |
| WebApp1K-React | mistral-large-2 | pass@1 | 0.78 | | Unverified |
| WebApp1K-React | deepseek-coder-v2-instruct | pass@1 | 0.7 | | Unverified |
| WebApp1K-React | llama-v3p1-405b-instruct | pass@1 | 0.3 | | Unverified |
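The metric reported above is pass@1: the probability that a sampled completion passes the benchmark's test suite. As a point of reference, a minimal sketch of the standard unbiased pass@k estimator commonly used for code benchmarks (from the Codex paper by Chen et al., 2021) is shown below; whether WebApp1K computes its scores with this exact estimator, and the choice of n samples per problem, are assumptions here:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem (assumed setup, not from the source)
    c: number of those samples that pass the tests
    k: budget of samples considered
    """
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    # 1 - P(all k chosen samples fail)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Benchmark score = mean of pass_at_k over all problems, e.g.:
per_problem = [(10, 9), (10, 8), (10, 0)]  # hypothetical (n, c) pairs
score = sum(pass_at_k(n, c, 1) for n, c in per_problem) / len(per_problem)
```

With k=1 this reduces to c/n, so pass@1 is simply the fraction of samples that pass, averaged over problems.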

Reproductions