Webinar "Large Language Models for Program Synthesis"
The Data Phoenix team invites you to our upcoming webinar, which will take place on October 16 at 8 am PST / 5 pm CET.
- Topic: Large Language Models for Program Synthesis
- Speaker: Xinyun Chen (Research Scientist at Google DeepMind)
- Participation: free (registration required)
Recent large-scale language models have demonstrated an impressive ability to generate code and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code. For example, competitive programming problems, which require an understanding of algorithms and complex natural language, remain extremely challenging. In this talk, I will discuss two lines of work on improving large language models (LLMs) for code generation. First, I will discuss AlphaCode, which achieved on average a ranking in the top 54.3% in several Codeforces competitions with more than 5,000 participants. In the second part, I will discuss our recent work Self-Debugging, which teaches LLMs to debug their own predicted code. In particular, we demonstrate that Self-Debugging can teach LLMs to perform rubber duck debugging; i.e., without any feedback on code correctness or error messages, the model is able to identify its mistakes by explaining the generated code line by line. Self-Debugging achieves state-of-the-art performance on several code generation tasks, including text-to-SQL generation, code translation, and synthesizing short Python functions from text descriptions. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency and can match or outperform baseline models that generate more than 10x as many candidate programs.
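To give a rough sense of what a rubber-duck-style self-debugging loop could look like in practice, here is a minimal, hypothetical Python sketch. It is not the method as presented in the talk or any specific API: the `complete()` helper is a placeholder for an arbitrary LLM completion call, and the prompts and loop structure are illustrative assumptions only.

```python
# Hypothetical sketch of a rubber-duck self-debugging loop.
# `complete(prompt)` stands in for any LLM completion call; it is an
# assumption for illustration, not an API from the talk.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM completion call here")

def self_debug(task: str, max_rounds: int = 3) -> str:
    """Generate code, then repeatedly ask the model to explain its own
    code line by line and revise it, without external execution feedback."""
    code = complete(f"Write a Python function for this task:\n{task}")
    for _ in range(max_rounds):
        # Rubber duck step: the model explains its own code line by line
        # and points out mistakes relative to the task description.
        critique = complete(
            "Explain the following code line by line and point out any "
            f"mistakes with respect to the task.\nTask: {task}\nCode:\n{code}"
        )
        # Revision step: the model rewrites the code using its own explanation.
        revised = complete(
            "Given the task, the code, and the line-by-line explanation, "
            "return a corrected version of the code (code only).\n"
            f"Task: {task}\nCode:\n{code}\nExplanation:\n{critique}"
        )
        if revised.strip() == code.strip():
            break  # the model no longer changes the code; stop iterating
        code = revised
    return code
```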
Speaker:
Xinyun Chen is a senior research scientist at Google DeepMind. She obtained her Ph.D. in Computer Science from the University of California, Berkeley. Her research lies at the intersection of deep learning, programming languages, and security, with a focus on large language models, learning-based program synthesis, and adversarial machine learning. Her work SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and her work AlphaCode was featured on the cover of Science.