This research introduces SEGO (SEquential subGoal Optimization), a framework designed to enhance the mathematical problem-solving capabilities of Large Language Models (LLMs). The key idea behind SEGO is to identify and optimize subgoals: smaller, more manageable problems derived from the original one. The framework establishes a theoretical connection between the subgoal breakdown process and the probability of solving the original problem, and uses this connection to guide which subgoals to pursue. The authors demonstrate SEGO's effectiveness on two benchmarks, GSM8K and MATH, where it outperforms existing methods.
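To make the control flow concrete, below is a minimal Python sketch of the kind of sequential subgoal loop described above. It is an illustration under assumptions, not SEGO's actual implementation: `propose_subgoals` and `estimate_solve_probability` are hypothetical stand-ins for the paper's LLM-based proposal and scoring components, and the scoring heuristic is a toy placeholder.

```python
from typing import List, Tuple


def propose_subgoals(problem: str, k: int = 3) -> List[str]:
    """Stand-in for an LLM call that drafts k candidate subgoals.

    In the real framework this would be a prompted policy model;
    here we return placeholder strings so the control flow runs.
    """
    return [f"candidate subgoal {i} for: {problem}" for i in range(k)]


def estimate_solve_probability(current: str, subgoal: str) -> float:
    """Stand-in for a learned scorer estimating how likely the problem
    is solved once this subgoal is reached (toy heuristic, not SEGO's)."""
    return 1.0 / (1.0 + len(subgoal) % 7)


def sequential_subgoal_search(problem: str, steps: int = 3) -> List[Tuple[str, float]]:
    """Greedy sketch: at each step, propose candidate subgoals, keep the
    one with the highest estimated solve probability, then recurse on it."""
    trajectory: List[Tuple[str, float]] = []
    current = problem
    for _ in range(steps):
        candidates = propose_subgoals(current)
        scored = [(estimate_solve_probability(current, s), s) for s in candidates]
        score, best = max(scored, key=lambda pair: pair[0])
        trajectory.append((best, score))
        current = best  # the chosen subgoal becomes the next problem to decompose
    return trajectory


if __name__ == "__main__":
    for subgoal, score in sequential_subgoal_search("Solve x^2 - 5x + 6 = 0"):
        print(f"{score:.2f}  {subgoal}")
```

In the paper, the solve-probability estimate is what ties subgoal selection to the theoretical analysis; here it is only a placeholder to show where such a scorer would plug in.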
Publication date: 19 Oct 2023
Project Page: https://github.com/zhaoxlpku/SEGO
Paper: https://arxiv.org/pdf/2310.12960