Pre-training and Fine-tuning of Code Generation Models
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the behind-the-scenes process of building and training large code models like StarCoder in this keynote presentation. Delve into the capabilities of large language models trained on code, from code completion to program synthesis from natural language descriptions. Learn about the development of StarCoder, a 15B-parameter code generation model trained on 80+ programming languages with responsible AI practices incorporated throughout. Discover how to leverage these models using open-source libraries such as transformers and PEFT, and gain insights into efficient deployment strategies. Gain valuable knowledge about the pre-training and fine-tuning techniques used in code generation models, presented by Loubna Ben Allal, a Machine Learning Engineer at Hugging Face.
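Since the overview highlights the transformers and PEFT libraries, here is a minimal sketch of how such a model can be loaded for code completion and then wrapped with LoRA adapters for parameter-efficient fine-tuning. The checkpoint name, prompt, LoRA hyperparameters, and target module names below are illustrative assumptions, not settings taken from the keynote.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "bigcode/starcoder"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 15B model fits in memory
    device_map="auto",           # shard across available GPUs (requires accelerate)
)

# Code completion: the model continues a partial program.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Parameter-efficient fine-tuning: attach LoRA adapters so only a small
# number of extra weights are trained while the base model stays frozen.
lora_config = LoraConfig(
    r=16,                        # assumed adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj"],  # attention projections (assumption for this architecture)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the fraction of trainable weights

Because only the low-rank adapter matrices are updated and the base weights stay frozen, this kind of setup is what makes fine-tuning a 15B-parameter model tractable on modest hardware.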
Syllabus
Keynote: Pre-training and Fine-tuning of Code Generation Models - Loubna Ben Allal, Hugging Face
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent