Set Up a Llama2 Endpoint for Your LLM App in OctoAI
Offered By: Docker via YouTube
Course Description
Overview
Learn to set up a Llama2 endpoint in OctoAI and use it to build a simple LLM application with the Retrieval-Augmented Generation (RAG) framework in this 58-minute workshop from the Docker AI/ML Hackathon 2023. Follow along as the OctoML team demonstrates how to clone a model template, create a custom endpoint, define cost, latency, and hardware preferences, and test the LLM from a sample application. Access the accompanying GitHub repository for hands-on practice and additional resources.
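For orientation, here is a minimal sketch of how a sample application might call such an endpoint once it is deployed. The endpoint URL, model name, and OCTOAI_TOKEN variable are placeholders, and the snippet assumes the endpoint exposes an OpenAI-style chat completions route; see the workshop's GitHub repository for the actual client code used in the demo.

# Minimal sketch: query a Llama2 endpoint hosted on OctoAI from Python.
# The URL, model name, and OCTOAI_TOKEN environment variable below are
# placeholders -- substitute the values shown for your own cloned endpoint
# in the OctoAI console.
import os

import requests

ENDPOINT_URL = "https://your-endpoint.octoai.run/v1/chat/completions"  # assumed OpenAI-style route
API_TOKEN = os.environ["OCTOAI_TOKEN"]  # token generated in the OctoAI console

payload = {
    "model": "llama-2-13b-chat",  # example model name; match your endpoint's template
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a RAG pipeline does in one sentence."},
    ],
    "max_tokens": 128,
}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()  # surface HTTP errors early
print(response.json()["choices"][0]["message"]["content"])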
Syllabus
Set up a Llama2 endpoint for your LLM app in OctoAI
Taught by
Docker
Related Courses
Developing a Tabular Data Model (Microsoft via edX)
Data Science in Action - Building a Predictive Churn Model (SAP Learning)
Serverless Machine Learning with Tensorflow on Google Cloud Platform 日本語版 (Google Cloud via Coursera)
Intro to TensorFlow em Português Brasileiro (Google Cloud via Coursera)
Serverless Machine Learning con TensorFlow en GCP (Google Cloud via Coursera)