Running a Question-Answering System on Ray Serve at Deepset
Offered By: Anyscale via YouTube
Course Description
Overview
Explore the process of running a question-answering system on Ray Serve in this 31-minute talk from Anyscale. Delve into the key architectural components of question-answering systems, including data stores, indexing pipelines, and querying pipelines. Learn about Haystack, an open-source framework that connects multiple state-of-the-art transformer NLP models into a single pipeline. Discover how to deploy GPU-accelerated inference using Ray, assemble NLP models into pipelines, run Hugging Face models on Ray Serve, deploy NLP model pipelines, and access persistent storage from code deployed on Ray Serve. Gain valuable insights into enhancing your question-answering systems and leveraging Ray Serve for improved performance and scalability.
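To make the "Hugging Face models on Ray Serve" topic concrete, here is a minimal sketch of serving an extractive QA model as a Ray Serve deployment. It assumes the Ray Serve 2.x API, a running local Ray cluster, and the deepset/roberta-base-squad2 model from the Hugging Face Hub; the talk itself may use a different model and deployment layout.

```python
# Minimal sketch: a Hugging Face question-answering model behind Ray Serve.
# Assumptions: Ray Serve 2.x, transformers installed, local Ray cluster.
from ray import serve
from starlette.requests import Request
from transformers import pipeline


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 0})
class QADeployment:
    def __init__(self):
        # Load the extractive QA pipeline once per replica.
        # deepset/roberta-base-squad2 is an assumed example model.
        self.qa = pipeline("question-answering",
                           model="deepset/roberta-base-squad2")

    async def __call__(self, request: Request) -> dict:
        # Expect a JSON body like {"question": "...", "context": "..."}.
        payload = await request.json()
        return self.qa(question=payload["question"],
                       context=payload["context"])


app = QADeployment.bind()

if __name__ == "__main__":
    # Start Serve and expose the deployment over HTTP (default port 8000).
    serve.run(app, route_prefix="/qa")
```

Once running, a POST to http://127.0.0.1:8000/qa with a question and context returns the extracted answer span. Setting num_gpus to 1 in ray_actor_options reserves a GPU per replica, matching the GPU-accelerated inference setup the talk describes.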
Syllabus
Running a question-answering system on Ray Serve at Deepset
Taught by
Anyscale
Related Courses
Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
Open Source Platforms for MLOps (Duke University via Coursera)
Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)