
Introducing MAPO: Momentum-Aided Gradient Descent Prompt Optimization

2024-10-25

Anthony Cui, Pranav Nandyalam, Ethan Cheung, Kevin Zhu

Abstract

Momentum-Aided Prompt Optimization (MAPO) improves the efficiency and efficacy of prompt optimization for Large Language Models (LLMs). Building on ProTeGi, MAPO uses positive natural-language "gradients" and a momentum-based extension to refine prompts; by tracking gradient history, it avoids local minima and oscillations. It also uses beam search with an Upper Confidence Bound (UCB) algorithm to balance candidate expansion and selection. Benchmark tests show that MAPO converges faster, requires fewer API calls, and achieves higher F1 scores than ProTeGi, establishing it as a robust and scalable approach to automated prompt engineering for LLMs.
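To illustrate the UCB-based selection step mentioned above, here is a minimal sketch of how candidate prompts in a beam might be chosen for evaluation. This is not the paper's implementation: the function name `ucb_select`, the `stats` bookkeeping, and the exploration constant `c` are all illustrative assumptions; in MAPO the "reward" would come from evaluating a prompt's performance on a minibatch of task examples.

```python
import math

def ucb_select(candidates, stats, total_pulls, c=2.0):
    """Pick the candidate prompt with the highest UCB score.

    candidates  : list of candidate prompt strings (placeholders here)
    stats       : dict mapping prompt -> (num_evals, mean_reward)
    total_pulls : total number of evaluations performed so far
    c           : exploration constant (hypothetical default)
    """
    def score(prompt):
        n, mean = stats.get(prompt, (0, 0.0))
        if n == 0:
            # Unevaluated prompts get infinite score, so they are tried first.
            return float("inf")
        # Classic UCB1: exploit the observed mean, explore rarely-tried prompts.
        return mean + c * math.sqrt(math.log(total_pulls) / n)

    return max(candidates, key=score)
```

A prompt evaluated only twice with a slightly higher mean can outrank one evaluated ten times, which is how UCB keeps the beam from committing too early to a locally good prompt.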
