Pre-training and Fine-tuning of Code Generation Models
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the behind-the-scenes process of building and training large code models like StarCoder in this keynote presentation. Delve into the remarkable abilities of large language models trained on code, including code completion and synthesis from natural-language descriptions. Learn about the development of StarCoder, a 15B-parameter code generation model trained on 80+ programming languages with responsible AI practices built in. Discover how to leverage these models using open-source libraries such as transformers and PEFT, and gain insights into efficient deployment strategies and the pre-training and fine-tuning techniques used in code generation models. Presented by Loubna Ben Allal, a Machine Learning Engineer at Hugging Face.
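The parameter-efficient fine-tuning mentioned above (via the PEFT library) is typically based on LoRA, which freezes the pre-trained weights and trains only a small low-rank update. The following is a minimal NumPy sketch of that idea with hypothetical dimensions, not the actual StarCoder or PEFT implementation:

```python
import numpy as np

# Hypothetical layer dimensions; in LoRA the rank r is much smaller
# than the weight matrix dimensions d and k.
d, k, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pre-trained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # B starts at zero, so the
                                        # adapted layer initially matches
                                        # the pre-trained one

x = rng.standard_normal(k)
# Forward pass: base output plus the low-rank adaptation B @ (A @ x)
y = W @ x + B @ (A @ x)

# Only A and B are trained, far fewer parameters than full fine-tuning:
full_params = d * k          # 4096
lora_params = r * (d + k)    # 512
print(full_params, lora_params)
```

Because B is initialized to zero, the adapted model starts out identical to the pre-trained one, and training only touches the r * (d + k) adapter parameters instead of all d * k weights.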
Syllabus
Keynote: Pre-training and Fine-tuning of Code Generation Models - Loubna Ben-Allal, Hugging Face
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Programming Languages
University of Virginia via Udacity
Compilers
Stanford University via Coursera
Programming Languages, Part A
University of Washington via Coursera
CSCI 1730 - Introduction to Programming Languages
Brown University via Independent
Intro to Java Programming
San Jose State University via Udacity