
Orca-Math showcases the benefits of fine-tuning SLMs with multi-agent flows

A research team at Microsoft fine-tuned Mistral-7B to create Orca-Math, a small language model that achieves state-of-the-art performance on grade school math problems (GSM8k). Orca-Math outperforms many larger LLMs, including the math-specific models MetaMath-70B and WizardMath-70B.

by Ellie Ramirez-Camara
Image generated using ideogram.ai

General-purpose small language models struggle to solve grade school math problems, a task often used as a benchmark to evaluate the capabilities of larger foundation models. It has become increasingly evident, however, that small language models can efficiently achieve state-of-the-art performance in specific domains with the help of fine-tuning. Following this trend, a research team at Microsoft developed Orca-Math by fine-tuning Mistral-7B on a small 200,000-problem dataset generated with a multi-agent flow, reaching an 86.81% pass@1 score on GSM8k. Most models that score over 80% on GSM8k have over 30 billion parameters.

To put Orca-Math's performance into perspective, it scored higher than the general models LLAMA-2-70B, Gemini Pro, and GPT-3.5, as well as the math-specific models MetaMath-70B and WizardMath-70B. Moreover, the small size of the dataset and the fact that the model solves problems without external tools, verifiers, or ensembling mean that Orca-Math's training is faster and cheaper than many alternatives. A multi-agent flow generates high-quality synthetic training data from a small seed set of problems: one agent examines a problem and suggests modifications to make it more complex, then passes it to another agent that reviews the suggestions and incorporates them into a revised, more challenging problem based on the original. This process can be iterated to increase the problem's complexity further, and a third agent can be added to confirm that each generated problem is solvable by producing a solution for it.
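The sketch below illustrates what such a "make it harder" multi-agent flow could look like in Python. All function names and prompts are illustrative assumptions rather than the authors' exact implementation, and `call_llm` stands in for whichever chat-completion client drives the agents.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a call to a capable language model (assumption)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def suggest_modifications(problem: str) -> str:
    # Agent 1: examine the problem and propose ways to make it more complex.
    return call_llm(
        "Suggest concrete ways to make this grade school math problem harder "
        f"without making it unsolvable:\n\n{problem}"
    )

def rewrite_problem(problem: str, suggestions: str) -> str:
    # Agent 2: incorporate the suggestions into a revised, harder problem.
    return call_llm(
        "Rewrite the problem below, applying these suggestions, and return "
        f"only the new problem statement.\n\nProblem: {problem}\n\nSuggestions: {suggestions}"
    )

def solve(problem: str) -> str:
    # Agent 3: produce a step-by-step solution to confirm the problem is solvable.
    return call_llm(f"Solve this problem step by step:\n\n{problem}")

def generate_harder_variant(seed_problem: str, iterations: int = 2) -> dict:
    problem = seed_problem
    for _ in range(iterations):        # iterate to raise difficulty further
        suggestions = suggest_modifications(problem)
        problem = rewrite_problem(problem, suggestions)
    solution = solve(problem)          # keep only problems that can be solved
    return {"question": problem, "answer": solution}
```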

Once the multi-agent flow is set up, the model is trained using a teacher-student paradigm in which a larger model (the teacher) creates demonstrations of solutions, using AgentInstruct, for the smaller model to learn from. The SLM is then left to solve problems on its own, producing multiple candidate solutions for each math problem, and the larger model reviews these solutions and offers feedback. If the SLM cannot solve a problem correctly after several attempts, it is re-taught using one of the teacher's solutions. Finally, the feedback is used to create preference data so the model learns to discern a good solution from a bad one. This process can be repeated several times to keep refining the SLM's capabilities.
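A minimal sketch of one iteration of that loop is shown below: the student samples several solutions, the teacher grades them, unsolved problems fall back to a teacher demonstration, and the graded attempts become preference pairs. The function names, the grading format, and the shape of the preference data are assumptions for illustration, not the paper's exact recipe.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # solution the teacher judged correct/better
    rejected: str  # solution the teacher judged incorrect/worse

def student_generate(problem: str, n_samples: int = 4) -> list[str]:
    """Placeholder: sample n candidate solutions from the fine-tuned SLM."""
    raise NotImplementedError

def teacher_grade(problem: str, solution: str) -> bool:
    """Placeholder: ask the teacher model whether the solution is correct."""
    raise NotImplementedError

def teacher_solve(problem: str) -> str:
    """Placeholder: get a demonstration solution from the teacher model."""
    raise NotImplementedError

def build_iteration_data(problems: list[str]) -> tuple[list[dict], list[PreferencePair]]:
    sft_examples, pairs = [], []
    for problem in problems:
        attempts = student_generate(problem)
        graded = [(sol, teacher_grade(problem, sol)) for sol in attempts]
        good = [s for s, ok in graded if ok]
        bad = [s for s, ok in graded if not ok]
        if not good:
            # Student failed every attempt: re-teach with a teacher demonstration.
            demo = teacher_solve(problem)
            sft_examples.append({"question": problem, "answer": demo})
            good = [demo]
        # Turn the teacher's feedback into preference data (good vs. bad attempts).
        pairs.extend(PreferencePair(problem, g, b) for g in good for b in bad)
    return sft_examples, pairs
```

The resulting preference pairs can then feed a preference-optimization step, and the whole loop can be rerun on the updated student in subsequent iterations.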

To support the continued research on the training and specialization of small language models in specific domains, the research team behind Orca-Math is releasing the dataset used for training, as well as a report describing the procedure.
