
Large Language Models for Program Synthesis

This talk covers large language models for code generation.

by Dmitry Spodarets

Recent large-scale language models have demonstrated an impressive ability to generate code and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code. For example, competitive programming problems, which require an understanding of algorithms and complex natural language, remain extremely challenging. In this talk, I will discuss two lines of work on improving large language models (LLMs) for code generation. First, I will discuss AlphaCode, which achieved an average ranking in the top 54.3% across several Codeforces competitions with more than 5,000 participants.

In the second part, I will discuss our recent work Self-Debugging, which teaches LLMs to debug their own predicted code. In particular, we demonstrate that Self-Debugging can teach LLMs to perform rubber duck debugging; i.e., without any feedback on code correctness or error messages, the model is able to identify its mistakes by explaining the generated code line by line. Self-Debugging achieves state-of-the-art performance on several code generation tasks, including text-to-SQL generation, code translation, and synthesizing short Python functions from text descriptions. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10x as many candidate programs.
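
To make the rubber duck variant concrete, below is a minimal sketch of such a generate-explain-revise loop in Python. The llm helper, the prompt wording, and the stopping criterion are illustrative assumptions for this post, not the actual setup from the Self-Debugging paper.

def llm(prompt: str) -> str:
    """Hypothetical placeholder: call your code-generation model here."""
    raise NotImplementedError

def self_debug(task: str, max_rounds: int = 3) -> str:
    """Generate code for a task, then iteratively explain and revise it."""
    code = llm(f"Write a Python function for this task:\n{task}")
    for _ in range(max_rounds):
        # Rubber duck debugging: no unit tests or error messages are fed
        # back; the model explains its own code line by line and flags any
        # line that contradicts the task description.
        critique = llm(
            "Explain the following code line by line and point out any "
            "line that is inconsistent with the task. "
            "Say 'NO ISSUES' if the code looks correct.\n\n"
            f"Task: {task}\n\nCode:\n{code}"
        )
        if "NO ISSUES" in critique.upper():
            return code  # the model considers its own code correct
        # Regenerate the code, conditioned on the model's own critique.
        code = llm(
            f"Task: {task}\n\nPrevious code:\n{code}\n\n"
            f"Critique:\n{critique}\n\nWrite a corrected version of the code."
        )
    return code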

Speaker:
Xinyun Chen is a senior research scientist at Google DeepMind. She obtained her Ph.D. in Computer Science from the University of California, Berkeley. Her research lies at the intersection of deep learning, programming languages, and security, with a focus on large language models, learning-based program synthesis, and adversarial machine learning. Her work SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and her work AlphaCode was featured on the front cover of Science.


