Set Up a Llama2 Endpoint for Your LLM App in OctoAI
Offered By: Docker via YouTube
Course Description
Overview
Learn to set up a Llama2 endpoint in OctoAI for building a simple LLM application using the retrieval-augmented generation (RAG) framework in this 58-minute workshop from the Docker AI/ML Hackathon 2023. Follow along as the OctoML team demonstrates how to clone a model template, create a custom endpoint, define cost, latency, and hardware preferences, and test the LLM in a sample application. Access the accompanying GitHub repository for hands-on practice and additional resources.
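The last step the workshop covers is testing the custom endpoint from a sample application. As a rough illustration of what that test might look like, the sketch below sends a single chat request to a Llama2 endpoint over HTTP using Python's requests library. The endpoint URL, model name, OCTOAI_TOKEN environment variable, and the OpenAI-style chat-completions request and response shape are placeholder assumptions for illustration; use the values shown for your own endpoint in the OctoAI console and in the workshop's GitHub repository.

```python
# Minimal sketch: query a custom Llama2 endpoint created in OctoAI.
# The URL, model name, token variable, and response shape below are
# placeholder assumptions, not details taken from the workshop itself.
import os

import requests

ENDPOINT_URL = "https://your-endpoint.octoai.run/v1/chat/completions"  # placeholder URL
API_TOKEN = os.environ["OCTOAI_TOKEN"]  # placeholder env var holding your API token

payload = {
    "model": "llama-2-13b-chat",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a RAG pipeline does."},
    ],
    "max_tokens": 256,
}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Assumes an OpenAI-compatible response body; adjust if your endpoint differs.
print(response.json()["choices"][0]["message"]["content"])
```

In a full RAG application, the user message would be augmented with documents retrieved from a vector store before being sent to the endpoint; the request above only exercises the endpoint itself.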
        
Syllabus
Set up a Llama2 endpoint for your LLM app in OctoAI
Taught by
Docker
Related Courses
Cloud Computing Applications, Part 1: Cloud Systems and Infrastructure (University of Illinois at Urbana-Champaign via Coursera)
Introduction to Cloud Infrastructure Technologies (Linux Foundation via edX)
Introduction aux conteneurs (Microsoft Virtual Academy via OpenClassrooms)
The Docker for DevOps course: From development to production (Udemy)
Windows Server 2016: Virtualization (Microsoft via edX)