C++ Generative AI Inference - Production Ready Speed and Control
Offered By: ChemicalQDevice via YouTube
Course Description
Overview
Explore methods to enhance the speed and control of Generative AI inference using C++ in this comprehensive video. Delve into object-oriented programming techniques that provide clear structure and code reusability. Learn about various C++ libraries and frameworks for running generative AI models, including TensorFlow Lite, OpenVINO, Caffe2, CNTK, MLPACK, Dlib, and Eigen. Discover how to optimize computer vision and deep learning applications for embedded systems and mobile devices. Gain insights into building and running neural networks using pure C++ APIs, and understand the trade-offs between platform-specific optimizations and general-purpose support across multiple platforms. Explore the potential of leveraging efficient linear algebra libraries as building blocks for custom machine learning projects.
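To make the two recurring themes of the overview concrete (object-oriented structure for inference code, and an efficient linear algebra library such as Eigen as a building block) here is a minimal sketch. It is illustrative only and not taken from the video; the InferenceEngine interface, the DenseReluEngine backend, and the build command are assumptions for demonstration, and it presumes Eigen 3 is installed.

```cpp
// Illustrative sketch (not from the course): a hypothetical OOP inference
// interface backed by Eigen. Assumes Eigen 3 is available, e.g.:
//   g++ -std=c++17 -I/usr/include/eigen3 demo.cpp -o demo
#include <Eigen/Dense>
#include <iostream>
#include <memory>

// Abstract interface: callers depend on this, not on a concrete backend,
// which is one way to get the structure and reusability the overview mentions.
class InferenceEngine {
public:
    virtual ~InferenceEngine() = default;
    virtual Eigen::VectorXf run(const Eigen::VectorXf& input) const = 0;
};

// One concrete backend: a single dense layer with ReLU, built directly on
// Eigen's matrix types as the linear-algebra "building block".
class DenseReluEngine : public InferenceEngine {
public:
    DenseReluEngine(Eigen::MatrixXf weights, Eigen::VectorXf bias)
        : weights_(std::move(weights)), bias_(std::move(bias)) {}

    Eigen::VectorXf run(const Eigen::VectorXf& input) const override {
        // y = max(0, W * x + b)
        return ((weights_ * input) + bias_).cwiseMax(0.0f);
    }

private:
    Eigen::MatrixXf weights_;
    Eigen::VectorXf bias_;
};

int main() {
    // Tiny random model: 4 inputs -> 3 outputs.
    Eigen::MatrixXf W = Eigen::MatrixXf::Random(3, 4);
    Eigen::VectorXf b = Eigen::VectorXf::Random(3);

    std::unique_ptr<InferenceEngine> engine =
        std::make_unique<DenseReluEngine>(W, b);

    Eigen::VectorXf x = Eigen::VectorXf::Random(4);
    std::cout << "output:\n" << engine->run(x) << "\n";
    return 0;
}
```

A backend built on TensorFlow Lite or OpenVINO could implement the same interface, which is where the trade-off between platform-specific optimization and general-purpose portability discussed in the video comes into play.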
Syllabus
C++ Generative AI Inference: Production Ready Speed and Control
Taught by
ChemicalQDevice
Related Courses
Intel® Edge AI Fundamentals with OpenVINO™ (Intel via Udacity)
tinyML Vision Challenge - Intel-Luxonis DepthAI Platform Overview (tinyML via YouTube)
Machine Learning in Fastly's Compute@Edge (Linux Foundation via YouTube)
End-to-End AI Developer Journey with Containerized Assets Using Intel DevCatalog and DevCloud (Docker via YouTube)
Accelerate Your Deep Learning Inferencing with the Intel DL Boost Technology (EuroPython Conference via YouTube)