Set Up a Llama2 Endpoint for Your LLM App in OctoAI
Offered By: Docker via YouTube
Course Description
Overview
Learn to set up a Llama2 endpoint in OctoAI and build a simple LLM application using the RAG (retrieval-augmented generation) framework in this 58-minute workshop from the Docker AI/ML Hackathon 2023. Follow along as the OctoML team demonstrates how to clone a model template, create a custom endpoint, define cost, latency, and hardware preferences, and test the LLM in a sample application. Access the accompanying GitHub repository for hands-on practice and additional resources.
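The endpoint setup itself happens in the OctoAI console, but as a rough sketch of what the "test the LLM in a sample application" step could look like, the snippet below sends a single chat prompt to a hosted Llama2 endpoint over HTTP. It assumes the endpoint exposes an OpenAI-compatible chat-completions interface; the endpoint URL, model name, and OCTOAI_TOKEN environment variable are placeholders to be replaced with the values from your own cloned endpoint and API token.

```python
import os
import requests

# Placeholder values -- substitute the URL and model name shown for your
# own custom endpoint in the OctoAI console after cloning the template.
ENDPOINT_URL = "https://text.octoai.run/v1/chat/completions"
MODEL = "llama-2-13b-chat"


def ask_llama2(prompt: str) -> str:
    """Send one chat prompt to the endpoint and return the reply text."""
    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {os.environ['OCTOAI_TOKEN']}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    # Assumes an OpenAI-style response schema with a "choices" list.
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_llama2("Summarize what a RAG pipeline does in one sentence."))
```

In a fuller RAG application, the prompt passed to ask_llama2 would first be augmented with documents retrieved from a vector store, which is the pattern the workshop's sample application builds toward.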
Syllabus
Set up a Llama2 endpoint for your LLM app in OctoAI
Taught by
Docker
Related Courses
LLaMA2 for Multilingual Fine Tuning - Sam Witteveen via YouTube
AI Engineer Skills for Beginners: Code Generation Techniques - All About AI via YouTube
Training and Evaluating LLaMA2 Models with Argo Workflows and Hera - CNCF [Cloud Native Computing Foundation] via YouTube
LangChain Crash Course - 6 End-to-End LLM Projects with OpenAI, LLAMA2, and Gemini Pro - Krish Naik via YouTube
Docker for Machine Learning, AI, and Data Science - DockerCon 2023 Workshop - Docker via YouTube