The article presents PROMST, a new framework for optimizing prompts used by Large Language Models (LLMs) on multi-step tasks. Whereas prior prompt-optimization methods are designed for single-step tasks, PROMST targets the added complexity of multi-step settings: it runs a genetic algorithm that incorporates human feedback about likely errors to improve prompts automatically. The framework outperforms both human-engineered prompts and other prompt-optimization methods, with average improvements of 27.7% on GPT-3.5 and 28.2% on GPT-4. It can also be tuned to align with individual human preferences, making it adaptable to different users.
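To make the idea concrete, here is a minimal sketch of a genetic-style prompt-optimization loop that folds human feedback rules into each mutation step. This is an illustrative toy, not the authors' implementation: `score_fn`, `mutate_fn`, and the feedback rules are all hypothetical stand-ins (in PROMST the scoring and prompt rewriting are done with LLMs and task execution).

```python
def optimize_prompts(seed_prompts, score_fn, mutate_fn, feedback_rules,
                     generations=5, population_size=4, top_k=2):
    """Evolve a population of prompts: score, keep the best, mutate with feedback."""
    population = list(seed_prompts)
    for _ in range(generations):
        # Rank the current prompts by task score and keep the top performers.
        ranked = sorted(population, key=score_fn, reverse=True)
        parents = ranked[:top_k]
        children = []
        for parent in parents:
            # Human-written feedback rules flag likely errors in the prompt;
            # the mutation step uses those hints to propose an improved child.
            hints = [h for rule in feedback_rules if (h := rule(parent))]
            children.append(mutate_fn(parent, hints))
        population = (parents + children)[:population_size]
    return max(population, key=score_fn)

# Toy stand-ins (hypothetical): score counts helpful phrases, feedback
# suggests a missing one, and mutation appends suggestions to the prompt.
def toy_score(prompt):
    return sum(kw in prompt for kw in ("step-by-step", "check your work"))

def toy_feedback(prompt):
    return "Reason step-by-step." if "step-by-step" not in prompt else None

def toy_mutate(prompt, hints):
    return prompt + " " + " ".join(hints) if hints else prompt + " check your work"

best = optimize_prompts(["Solve the task."], toy_score, toy_mutate, [toy_feedback])
print(best)
```

In the real framework the mutation step would ask an LLM to rewrite the prompt given the feedback, and the score would come from executing the multi-step task, but the select-mutate-rescore structure is the same.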
Publication date: 15 Feb 2024
Project Page: https://yongchao98.github.io/PROMST/
Paper: https://arxiv.org/pdf/2402.08702