Large Language Molecular Representation and Learning
Offered By: Valence Labs via YouTube
Course Description
Overview
Explore the potential of molecular machine learning for property prediction and drug discovery in this comprehensive lecture. Delve into the challenges of obtaining labeled molecular data and the ongoing debate over molecular representation methods. Learn about MolCLR, a self-supervised learning framework that uses graph neural networks to learn from millions of unlabeled molecules. Examine how textual representations compare with graph representations for polymers, Metal-Organic Frameworks (MOFs), catalysis systems, and organic molecules. Discover how pre-trained language models such as BERT and RoBERTa can be leveraged for molecular property prediction, and gain insight into multimodal learning approaches that combine representations to improve learning. The lecture covers molecular property prediction, ablation studies, transformer models, MOFormer, textual-format input exploration, and prompt engineering, and concludes with a Q&A session.
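MolCLR's self-supervised objective is contrastive: two augmented views of the same molecular graph (e.g. with atoms or bonds masked) are pulled together in embedding space while embeddings of other molecules in the batch are pushed apart, typically with an NT-Xent-style loss. A minimal NumPy sketch of such a loss, under the assumption of precomputed embeddings (function name and shapes are illustrative, not MolCLR's actual code):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays of embeddings for two augmented views
    of the same N molecules. Returns a scalar loss.
    """
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # cosine space
    sim = z @ z.T / temperature                          # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                       # drop self-similarity
    n = z1.shape[0]
    # The positive partner of row i is its other augmented view, row (i+n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is lowest when each molecule's two views are more similar to each other than to any other molecule in the batch, which is what drives the pre-trained encoder toward augmentation-invariant representations.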
Syllabus
- Intro
- Molecular property prediction
- Ablation Study
- Improving MolCLR
- Transformers
- MOFormer
- Exploring features through textual format input
- Prompt engineering
- Conclusions
- Q&A
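The "textual format input" the syllabus refers to typically means string representations such as SMILES, which language models like BERT and RoBERTa consume after tokenization. A minimal sketch of a regex-based SMILES tokenizer; the regex is an assumption modeled on commonly used chemistry-transformer tokenizers, not code from the lecture:

```python
import re

# Splits a SMILES string into chemically meaningful tokens:
# bracket atoms, two-letter elements (Br, Cl), aromatic/aliphatic
# atoms, bonds, ring-closure digits, and branch parentheses.
SMILES_TOKEN_RE = re.compile(
    r"(\[[^\]]+\]|Br|Cl|[BCNOSPFI]|[bcnops]|@@|@|=|#|\(|\)|\+|-|/|\\|%\d{2}|\d|\.)"
)

def tokenize_smiles(smiles):
    """Return the list of tokens for a SMILES string."""
    return SMILES_TOKEN_RE.findall(smiles)

tokens = tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```

Two-letter elements must precede single letters in the alternation so that "Cl" is not split into "C" plus a stray "l"; bracket atoms like `[NH4+]` are kept as single tokens.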
Taught by
Valence Labs
Related Courses
- Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music (Stanford University via YouTube)
- How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained (Yannic Kilcher via YouTube)
- OpenAI CLIP - Connecting Text and Images - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
- Learning Compact Representation with Less Labeled Data from Sensors (tinyML via YouTube)
- Human Activity Recognition - Learning with Less Labels and Privacy Preservation (University of Central Florida via YouTube)