
Towards a Probabilistic Framework for Analyzing and Improving LLM-Enabled Software

2025-01-10

Juan Manuel Baldonado, Flavia Bonomo-Braberman, Víctor Adrián Braberman


Abstract

Ensuring the reliability and verifiability of large language model (LLM)-enabled systems remains a significant challenge in software engineering. We propose a probabilistic framework for systematically analyzing and improving these systems by modeling and refining distributions over clusters of semantically equivalent outputs. This framework facilitates the evaluation and iterative improvement of Transference Models: key software components that use LLMs to transform inputs into outputs for downstream tasks. To illustrate its utility, we apply the framework to the autoformalization problem, where natural language documentation is transformed into formal program specifications. Our case study illustrates how distribution-aware analysis enables the identification of weaknesses and guides focused alignment improvements, resulting in more reliable and interpretable outputs. This principled approach offers a foundation for addressing critical challenges in the development of robust LLM-enabled systems.
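To make the central idea concrete, here is a minimal sketch (not the authors' implementation) of estimating a distribution over clusters of semantically equivalent outputs: sample an LLM component several times, group the samples under an assumed pairwise equivalence predicate `equivalent`, and report each cluster's empirical probability. Both the function names and the whitespace-insensitive equivalence used in the example are illustrative assumptions.

```python
def cluster_outputs(samples, equivalent):
    """Group sampled outputs into clusters of semantically
    equivalent items, using a pairwise equivalence predicate."""
    clusters = []  # each cluster is a list of mutually equivalent samples
    for s in samples:
        for c in clusters:
            if equivalent(s, c[0]):  # compare against the cluster's representative
                c.append(s)
                break
        else:
            clusters.append([s])  # no match: start a new cluster
    return clusters

def cluster_distribution(samples, equivalent):
    """Empirical distribution over equivalence clusters,
    keyed by each cluster's representative (its first member)."""
    clusters = cluster_outputs(samples, equivalent)
    n = len(samples)
    return {c[0]: len(c) / n for c in clusters}

# Toy example: treat candidate specifications as equivalent
# up to whitespace (a stand-in for real semantic equivalence).
samples = ["x >= 0", "x>=0", "x > 0", "x>=0 "]
dist = cluster_distribution(
    samples,
    lambda a, b: a.replace(" ", "") == b.replace(" ", ""),
)
# dist maps "x >= 0" to 0.75 and "x > 0" to 0.25
```

In the framework's terms, analysis then inspects this distribution (e.g. how much mass the intended cluster receives), and improvement steps aim to shift mass toward it.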
