YoVDO

LLaMA 2 and Meta AI Projects - Interview with Thomas Scialom

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

LLaMA (Large Language Model Meta AI) Courses
AI Ethics Courses
Supervised Fine-Tuning Courses

Course Description

Overview

Dive into an hour-long interview with Thomas Scialom, the lead of LLaMA 2 at Meta, as he shares insights on his career journey and contributions to major AI projects. Explore topics including the transformative potential of large language models, the importance of supervised fine-tuning, and the role of human preference in AI development. Gain valuable perspectives on the future of AI technology and participate in a Q&A session covering various aspects of language models and their applications.

Syllabus

Thomas's story
The Sci-Fi moment
Large is all you need
Supervised fine-tuning
Human Preference
What happens next?
Q&A


Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

Big Self-Supervised Models Are Strong Semi-Supervised Learners
Yannic Kilcher via YouTube
A Transformer-Based Framework for Multivariate Time Series Representation Learning
Launchpad via YouTube
Inside ChatGPT - Unveiling the Training Process of OpenAI's Language Model
Krish Naik via YouTube
Fine Tune GPT-3.5 Turbo
Data Science Dojo via YouTube
Yi 34B: The Rise of Powerful Mid-Sized Models - Base, 200k, and Chat
Sam Witteveen via YouTube