Set Up a Llama2 Endpoint for Your LLM App in OctoAI
Offered By: Docker via YouTube
Course Description
Overview
Learn to set up a Llama2 endpoint in OctoAI for building a simple LLM application using the retrieval-augmented generation (RAG) framework in this 58-minute workshop from the Docker AI/ML Hackathon 2023. Follow along as the OctoML team demonstrates how to clone a model template, create a custom endpoint, define cost, latency, and hardware preferences, and test the LLM in a sample application. Access the accompanying GitHub repository for hands-on practice and additional resources.
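Once a custom endpoint exists, an application typically talks to it over HTTP with a chat-completion-style request. The sketch below shows one way that call might look from Python using only the standard library; the endpoint URL, model name, and response schema are assumptions for illustration (the actual URL and token come from the endpoint you create in the OctoAI console), not the workshop's exact code.

```python
import json
import urllib.request

# Hypothetical endpoint URL -- the real one is shown in the OctoAI
# console after you create your custom Llama2 endpoint.
ENDPOINT_URL = "https://your-endpoint.octoai.run/v1/chat/completions"

def build_chat_payload(prompt: str, model: str = "llama-2-13b-chat") -> dict:
    """Build an OpenAI-style chat-completion request body (assumed schema)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 256,
    }

def query_endpoint(prompt: str, token: str) -> str:
    """POST the payload to the endpoint; requires a valid API token."""
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape, mirroring common chat-completion APIs.
    return body["choices"][0]["message"]["content"]
```

In a RAG setup like the one covered in the workshop, the `prompt` passed to `query_endpoint` would first be augmented with retrieved context before being sent to the model.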
Syllabus
Set up a Llama2 endpoint for your LLM app in OctoAI
Taught by
Docker
Related Courses
Google BARD and ChatGPT AI for Increased Productivity - Udemy
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT - Prodramp via YouTube
Generative AI and Long-Term Memory for LLMs - James Briggs via YouTube
Extractive Q&A With Haystack and FastAPI in Python - James Briggs via YouTube
OpenAssistant First Models Are Here! - Open-Source ChatGPT - Yannic Kilcher via YouTube