From Paper to Product - How We Implemented BERT
Offered By: MLCon | Machine Learning Conference via YouTube
Course Description
Overview
Explore the journey of implementing BERT, a cutting-edge natural language processing model, in a real-world product development scenario. Dive into the challenges, successes, and lessons learned as a team transforms theoretical concepts into a functional natural language generation application. Learn about the decision-making process behind choosing BERT, alternative approaches considered, and the intricacies of training a custom version of the network. Gain valuable insights into common pitfalls to avoid and unexpected discoveries made during the implementation process. This conference talk provides a comprehensive look at bridging the gap between academic research and practical application in the field of NLP, offering both technical details and strategic considerations for professionals working with advanced language models.
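As context for the syllabus topics below ("What is BERT", "Unsupervised training"), here is a minimal sketch of BERT's core pretraining objective, masked-word prediction. It uses the Hugging Face transformers library and the public bert-base-german-cased checkpoint purely as stand-ins; the talk does not specify the team's actual stack, and this is not the speakers' implementation.

```python
# Minimal sketch of BERT's masked-word prediction, with assumed stand-ins:
# the Hugging Face `transformers` library and the public
# `bert-base-german-cased` checkpoint (the talk describes a custom German
# model whose code and weights are not public).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-german-cased"  # assumption, not the speakers' model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Mask one word and let the model fill it in.
text = f"Das Wetter ist heute sehr {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and print the five most likely replacements.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

This masked objective is what makes BERT's pretraining "unsupervised" in the sense the syllabus distinguishes: the labels come from the text itself, with no human annotation.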
Syllabus
Introduction
About the talk
How did it start
What happens when the customer is waiting
Good old AI
Visionary
English
German
Product
Customer Feedback
Part of Speech Tagging
Architecture
Stanford NLP
Reinventing the wheel
What is BERT
What is supervised training
Unsupervised learning
Unsupervised training
Semi-supervised training
Input structure
Transformer block
What is good
What is bad
How did we do it
BERT parameter tuning
Preprocessing
Word pieces (see the tokenization sketch after this syllabus)
Morphemes vs morphs
Dictionary size
How does it look
What did we save
Training
Training Results
Future Plans
Backend
ZukaText
What's next
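As a rough illustration of the "Word pieces" and "Dictionary size" topics above, the sketch below shows how a WordPiece tokenizer decomposes German compounds into subword units, which is how a fixed-size dictionary can cover an open vocabulary. It again assumes the transformers library and the bert-base-german-cased checkpoint; the custom vocabulary discussed in the talk would split words differently.

```python
# Rough illustration of WordPiece subword splitting (assumed stand-in
# tokenizer; the speakers trained their own vocabulary, which would
# produce different splits).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")  # assumption

for word in ["Haus", "Haustür", "Grundstücksverkehrsgenehmigung"]:
    print(f"{word:32} -> {tokenizer.tokenize(word)}")

# In-vocabulary words stay whole; rare compounds come back as several
# pieces, with "##" marking word-internal fragments. The dictionary size
# fixes the trade-off: a larger vocabulary keeps more words intact, a
# smaller one splits more aggressively into morph-like pieces.
```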
Taught by
MLCon | Machine Learning Conference
Related Courses
Machine Learning (University of Washington via Coursera)
Machine Learning (Stanford University via Coursera)
Machine Learning (Georgia Institute of Technology via Udacity)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)