CrochetBench: Can Vision-Language Models Move from Describing to Doing in the Crochet Domain?
Peiyu Li, Xiaobao Huang, Ting Hua, Nitesh V. Chawla
Code: github.com/peiyu-georgia-li/crochetbench (official)
Abstract
While multimodal large language models can describe visual content, their ability to generate executable procedures remains underexplored. CrochetBench, presented in this paper, evaluates this shift from describing to doing through fine-grained procedural reasoning in crochet: models must recognize stitches, select structurally appropriate instructions, and generate compilable procedures. We adopt the CrochetPARADE DSL as our intermediate representation, enabling structural validation and functional evaluation via execution. The benchmark covers stitch classification, instruction grounding, and both natural-language-to-DSL and image-to-DSL translation. Across all tasks, performance drops sharply as evaluation moves from surface-level similarity to executable correctness, revealing limitations in long-range symbolic reasoning and 3D-aware procedural synthesis. CrochetBench offers a new lens for assessing procedural competence in multimodal models and highlights the gap between surface-level understanding and executable precision in real-world creative domains. Code is available at https://anonymous.4open.science/r/crochet-82E6/README.md.
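To make the executable-correctness criterion concrete, the toy sketch below mimics the kind of check an execution-based scorer performs: a generated pattern passes only if each row consumes exactly the stitches the previous row produced. This is an illustrative simplification, not the actual CrochetPARADE toolchain; the stitch vocabulary, starting-ring size, and function names are invented for the example.

```python
import re

# Toy illustration (not the real CrochetPARADE DSL): a pattern is a list of
# rows, each row a sequence of "<count><stitch>" tokens. A row "compiles"
# only if it consumes exactly the stitches the previous row produced -- the
# kind of structural constraint that execution-based scoring enforces and
# that surface-level text similarity cannot detect.

STITCH_CONSUMES = {"sc": 1, "inc": 1, "dec": 2}  # stitches used per token
STITCH_PRODUCES = {"sc": 1, "inc": 2, "dec": 1}  # stitches made per token

def row_counts(row: str) -> tuple[int, int]:
    """Return (stitches consumed, stitches produced) for one row."""
    consumed = produced = 0
    for count, stitch in re.findall(r"(\d+)(sc|inc|dec)", row):
        consumed += int(count) * STITCH_CONSUMES[stitch]
        produced += int(count) * STITCH_PRODUCES[stitch]
    return consumed, produced

def compiles(pattern: list[str], start: int = 6) -> bool:
    """Structural check: every row must consume the prior row's output."""
    available = start                     # e.g. a magic ring of 6 stitches
    for row in pattern:
        consumed, produced = row_counts(row)
        if consumed != available:         # stitch-count mismatch: reject
            return False
        available = produced
    return True

# A standard amigurumi increase round passes; an off-by-one row fails.
assert compiles(["6inc", "12sc"])         # 6 -> 12 -> 12
assert not compiles(["6inc", "11sc"])     # consumes only 11 of 12 stitches
```

Under this kind of metric, a generated pattern that paraphrases the reference well but miscounts a single stitch scores zero, which is why execution-based evaluation is far stricter than text-similarity scores.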