Training AI to Code Using Project CodeNet - Largest Code Dataset
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore a conference talk on leveraging Project CodeNet, the largest code dataset, to train AI for coding tasks. Delve into the capabilities of this massive dataset, which contains 14 million code samples across 55 programming languages. Learn how Project CodeNet enables advanced machine learning applications for code, including similarity detection, semantic context extraction, and cross-language translation. Discover how Project CodeNet is put to practical use through the Machine Learning Exchange (MLX), a Linux Foundation AI & Data sandbox project. Follow a three-step process to classify code and analyze its complexity using DataShim for data access, Jupyter notebooks on Kubernetes, and containerized models for inference. Gain insight into how MLX generates Kubeflow Pipelines on Tekton, simplifying the workflow for data scientists. Understand how teams can use curated datasets, example notebooks, and pre-trained models to integrate machine learning and AI into their coding practices efficiently.
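To make the workflow described above concrete, the following is a minimal, illustrative sketch of how such a classify-then-analyze pipeline could be written with the kfp and kfp-tekton Python SDKs and compiled to Tekton, the way MLX generates its pipelines. It is not taken from the talk: the container images, script names, mount path, and the "project-codenet" dataset name are hypothetical placeholders, and the pod labels assume DataShim's dataset.N.id / dataset.N.useas mounting convention.

# Minimal sketch (not from the talk): a two-step Kubeflow Pipeline compiled for Tekton.
import kfp.dsl as dsl
from kfp_tekton.compiler import TektonCompiler

@dsl.pipeline(
    name="codenet-classify-and-analyze",
    description="Classify CodeNet samples, then estimate their complexity",
)
def codenet_pipeline():
    # Step 1: language/code classification. Image and script are placeholders.
    classify = dsl.ContainerOp(
        name="classify-code",
        image="example.org/codenet/classifier:latest",        # hypothetical image
        command=["python", "classify.py"],
        arguments=["--data-dir", "/mnt/datasets/project-codenet"],  # assumed mount path
    )
    # DataShim mounts a registered Dataset into the pod when these labels are present.
    classify.add_pod_label("dataset.0.id", "project-codenet")  # hypothetical dataset name
    classify.add_pod_label("dataset.0.useas", "mount")

    # Step 2: complexity analysis, ordered after classification completes.
    analyze = dsl.ContainerOp(
        name="analyze-complexity",
        image="example.org/codenet/complexity:latest",         # hypothetical image
        command=["python", "complexity.py"],
        arguments=["--data-dir", "/mnt/datasets/project-codenet"],
    )
    analyze.add_pod_label("dataset.0.id", "project-codenet")
    analyze.add_pod_label("dataset.0.useas", "mount")
    analyze.after(classify)

if __name__ == "__main__":
    # Emits Tekton YAML that can be uploaded to a Kubeflow Pipelines / MLX instance.
    TektonCompiler().compile(codenet_pipeline, "codenet_pipeline.yaml")

In MLX itself this pipeline is generated for you from the registered dataset, notebook, and model assets; writing it out by hand mainly shows where each of those pieces plugs into the Tekton-backed workflow.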
Syllabus
Training AI To Code Using The Largest Code Dataset (Project CodeNet) - Tommy Li & Animesh Singh, IBM
Taught by
Linux Foundation
Related Courses
Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Smart Analytics, Machine Learning, and AI on GCP (Pluralsight)
Leveraging Cloud-Based Machine Learning on Google Cloud Platform: Real World Applications (LinkedIn Learning)
Distributed TensorFlow - TensorFlow at O'Reilly AI Conference, San Francisco '18 (TensorFlow via YouTube)
KFServing - Model Monitoring with Apache Spark and Feature Store (Databricks via YouTube)