The article discusses MetaMath, a language model fine-tuned for mathematical reasoning. Despite their broad success, large language models (LLMs) often fall short on complex mathematical problems. To address this, the authors fine-tune LLaMA-2 on MetaMathQA, a new dataset built by bootstrapping the questions in existing math training sets: each question is rewritten from multiple perspectives, without introducing external knowledge. Experimental results show that MetaMath outperforms other open-source LLMs by a significant margin on GSM8K and MATH, two popular benchmarks for mathematical reasoning. The authors have released the MetaMathQA dataset, MetaMath models of several sizes, and the training code for public use.
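
To make the "rewriting from multiple perspectives" concrete: the paper's augmentations include rephrasing a question and recasting it backward (masking a given quantity and conditioning on the known answer). Below is a minimal Python sketch of that bootstrapping idea; the prompt templates, the `bootstrap_question` helper, and the stub `echo_llm` are illustrative assumptions, not the authors' released prompts or pipeline.

```python
from typing import Callable, List

# Illustrative prompt templates (assumed wording, not the paper's exact prompts).
REPHRASE_TEMPLATE = (
    "You are an AI assistant helping me rephrase questions.\n"
    "Rephrase the following question:\n{question}"
)
BACKWARD_TEMPLATE = (
    "Rewrite the following question so that one stated quantity is replaced "
    "by an unknown X and the original answer {answer} is given as a "
    "condition; ask for the value of X.\nQuestion: {question}"
)

def bootstrap_question(
    llm: Callable[[str], str], question: str, answer: str
) -> List[str]:
    """Produce augmented variants of one seed question via an LLM call."""
    return [
        llm(REPHRASE_TEMPLATE.format(question=question)),  # rephrasing view
        llm(BACKWARD_TEMPLATE.format(question=question, answer=answer)),  # backward view
    ]

if __name__ == "__main__":
    # Stub LLM so the sketch runs stand-alone; swap in a real model call.
    echo_llm = lambda prompt: f"<model output for: {prompt[:40]}...>"
    seed_q = (
        "Natalia sold 48 clips in April and half as many in May. "
        "How many clips did she sell in total?"
    )
    for variant in bootstrap_question(echo_llm, seed_q, "72"):
        print(variant)
```

In the actual pipeline, the seed questions come from the GSM8K and MATH training sets, and the generated variants (paired with chain-of-thought answers) form MetaMathQA, the fine-tuning corpus.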
Publication date: 22 Sep 2023
Project Page: https://meta-math.github.io/
Paper: https://arxiv.org/pdf/2309.12284