The statement curriculum learning approach relies on manually collected sets of statements of varying difficulty, where the most difficult statements are similar to the target benchmark. As demonstrated, however, a neural prover on its own is fragile at first and can prove only a few of these statements.
The team addresses the obstacle of an infinite action space by sampling actions from a language model during proof search. Language models can generate tactic calls, as well as the original mathematical terms that are often required as arguments. The team shows experimentally that this training procedure can solve a curriculum of increasingly difficult problems, eventually generalizing to the set of problems they care about.
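The search loop described above can be sketched roughly as follows. This is a toy illustration, not the team's actual system: the `sample_tactics` policy stands in for a language model proposing tactic strings, `apply_tactic` stands in for a formal prover, and a goal is reduced to a single integer "difficulty" so the example stays self-contained.

```python
import heapq
import random

def sample_tactics(goal, n=4):
    """Stub policy: in the real system, a language model would generate
    candidate tactic strings (with arguments) conditioned on the goal."""
    tactics = ["simp", "ring", "norm_num", "linarith"]
    return random.sample(tactics, k=min(n, len(tactics)))

def apply_tactic(goal, tactic):
    """Stub prover: returns remaining subgoals, [] if the goal closes,
    or None if the tactic fails. A goal here is just an int difficulty."""
    if tactic in ("norm_num", "linarith"):
        return [] if goal <= 1 else None   # closers work only on easy goals
    if tactic in ("simp", "ring"):
        return [goal - 1] if goal > 0 else []  # simplifiers make progress
    return None

def best_first_search(root_goal, budget=100):
    """Expand the most promising proof state first, sampling candidate
    tactics from the policy instead of enumerating an infinite action space."""
    frontier = [(root_goal, [root_goal])]  # (priority, open goals)
    heapq.heapify(frontier)
    steps = 0
    while frontier and steps < budget:
        _, goals = heapq.heappop(frontier)
        goal = goals[0]
        for tactic in sample_tactics(goal):
            steps += 1
            subgoals = apply_tactic(goal, tactic)
            if subgoals is None:
                continue                   # tactic failed; try another sample
            remaining = subgoals + goals[1:]
            if not remaining:
                return True                # all goals closed: proof found
            heapq.heappush(frontier, (max(remaining), remaining))
    return False

print(best_first_search(3))
```

The key design point the sketch mirrors is that the action space is never enumerated: at each step only a handful of sampled candidate tactics are tried, and the search allocates its budget to the easiest-looking open goals first.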
Interim results are promising: deep learning models are capable of non-trivial mathematical reasoning when interacting with a formal system. The developers note, however, that they are still far from best-student performance on these competitions.
Data Phoenix Newsletter