SOTAVerified

PECC: Problem Extraction and Coding Challenges

2024-04-29 · Code Available

Patrick Haller, Jonas Golde, Alan Akbik


Abstract

Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving, and reasoning. Existing benchmarks evaluate these tasks in isolation, yet the extent to which LLMs can understand prose-style tasks, identify the underlying problems, and then generate appropriate code solutions remains unexplored. Addressing this gap, we introduce PECC, a novel benchmark of 2,396 problems derived from Advent of Code (AoC) challenges and Project Euler. Unlike conventional benchmarks, PECC requires LLMs to interpret narrative-embedded problems, extract requirements, and generate executable code. A key feature of our dataset is the complexity added by natural language prompting in chat-based evaluations, mirroring real-world instruction ambiguities. Results show varying model performance between narrative and neutral problems, with particular difficulty on the math-based Euler subset: GPT-3.5-Turbo passes 50% of the AoC challenges but only 8% of the Euler problems. By probing the limits of LLMs' capabilities, our benchmark provides a framework to monitor and assess the subsequent progress of LLMs as universal problem solvers.
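In practice, an evaluation of this kind reduces to: sample a few candidate programs per problem, execute each, and count the problem as solved if any run produces the expected answer. Below is a minimal sketch of such a check; the helper names (`run_candidate`, `passes_at_k`) and the toy problem are illustrative assumptions, not the paper's actual harness.

```python
# Minimal sketch of a pass@3-style check: execute up to three candidate
# programs and mark the problem solved if any one prints the expected
# answer. Names and the toy problem are hypothetical, not PECC's code.
import subprocess
import sys

def run_candidate(source: str, timeout: float = 10.0) -> str:
    """Execute a candidate program in a subprocess and capture stdout."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", source],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return ""

def passes_at_k(candidates: list[str], expected: str) -> bool:
    """Pass@k on one problem: solved if any of the k samples is correct."""
    return any(run_candidate(src) == expected for src in candidates)

# Toy usage: three hypothetical model samples for
# "sum the integers from 1 to 100".
samples = [
    "print(sum(range(101)))",   # correct
    "print(sum(range(100)))",   # off-by-one
    "print(100 * 101 // 2)",    # correct, closed form
]
print(passes_at_k(samples, expected="5050"))  # True
```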

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| PECC | Claude 3 Haiku | Pass@3 | 27.67 | – | Unverified |
| PECC | GPT-3.5 Turbo | Pass@3 | 23.75 | – | Unverified |
| PECC | codechat-bison | Pass@3 | 11.39 | – | Unverified |
| PECC | chat-bison | Pass@3 | 8.48 | – | Unverified |
| PECC | Mixtral-8x7B-Instruct | Pass@3 | 8.35 | – | Unverified |
| PECC | Phi-3-mini-128k-instruct | Pass@3 | 7.18 | – | Unverified |
| PECC | WizardLM-2-7B | Pass@3 | 3.72 | – | Unverified |
| PECC | Llama-3-8B-Instruct | Pass@3 | 3.10 | – | Unverified |
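For reference, Pass@k scores like those above are often computed with the unbiased estimator of Chen et al. (2021). Whether PECC's Pass@3 uses this estimator or instead grants each model three sequential attempts per problem is an assumption here, not something this page states.

```python
# Unbiased pass@k estimator from Chen et al. (2021):
#   pass@k = 1 - C(n - c, k) / C(n, k), averaged over problems.
# Assumption: PECC's Pass@3 may instead use repeated attempts;
# this is shown only as the common convention.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples drawn per problem, c: correct samples, k: budget."""
    if n - c < k:
        # Fewer than k incorrect samples: every k-subset contains a hit.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 correct out of 10 samples gives pass@3 of about 53.33%.
print(round(100 * pass_at_k(n=10, c=2, k=3), 2))
```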

Reproductions

None yet.