YoVDO

Building Data-Efficient and Reliable Applications with Large Language Models

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Instruction-Tuning Courses

Course Description

Overview

Explore the challenges and solutions in developing reliable applications with Large Language Models (LLMs) in this 45-minute talk by Xinyi Cindy Wang, hosted by the Center for Language & Speech Processing at JHU. Delve into the complexities of adapting LLMs to specialized domains and personalized user needs, particularly in multilingual contexts. Discover data-efficient tuning methods and a novel factuality evaluation framework designed to address the limitations of LLMs trained on web data. Learn about Wang's work on multilingual instruction-tuning for Gemini and for generative models used in Google Search, as well as her PhD research at Carnegie Mellon University on data selection, representation, and model adaptation for multilingual natural language processing.

Syllabus

Xinyi Cindy Wang: Building Data-Efficient and Reliable Applications with Large Language Models


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Towards Reliable Use of Large Language Models - Better Detection, Consistency, and Instruction-Tuning
Simons Institute via YouTube
Role of Instruction-Tuning and Prompt Engineering in Clinical Domain - MedAI 125
Stanford University via YouTube
Generative AI Advance Fine-Tuning for LLMs
IBM via Coursera
SeaLLMs - Large Language Models for Southeast Asia
VinAI via YouTube
Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques
LLMOps Space via YouTube