YoVDO

IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation - M2.6

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

Parameter-Efficient Fine-Tuning Courses Artificial Intelligence Courses Machine Learning Courses Information Retrieval Courses PEFT Courses

Course Description

Overview

Explore a 14-minute conference talk from SIGIR 2024 focusing on IISAN, an approach for efficiently adapting multimodal representations in sequential recommendation systems. Authors Junchen Fu, Xuri Ge, Xin Xin, Alexandros Karatzoglou, Ioannis Arapakis, Jie Wang, and Joemon Jose present a decoupled PEFT (parameter-efficient fine-tuning) technique and discuss how it reduces the cost of adapting large multimodal models for sequential recommendation while maintaining performance.
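The decoupled-PEFT idea the talk covers — training a small side network on a frozen backbone's layer outputs instead of inserting adapters inside the backbone — can be sketched roughly as follows. All module names, dimensions, and the fusion rule here are illustrative assumptions, not the paper's actual IISAN architecture:

```python
import torch
import torch.nn as nn

class SideAdapterNetwork(nn.Module):
    """Minimal decoupled-PEFT sketch (hypothetical, not the IISAN design):
    a trainable side network consumes hidden states already produced by a
    frozen backbone, so no gradients flow through the backbone itself."""

    def __init__(self, hidden_dim=768, adapter_dim=64, num_layers=4):
        super().__init__()
        # One small bottleneck adapter per backbone layer.
        self.adapters = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, adapter_dim),
                nn.GELU(),
                nn.Linear(adapter_dim, hidden_dim),
            )
            for _ in range(num_layers)
        )

    def forward(self, layer_states):
        # Fuse each frozen layer's output through its own adapter,
        # accumulating a trainable side representation.
        side = torch.zeros_like(layer_states[0])
        for adapter, h in zip(self.adapters, layer_states):
            side = side + adapter(h + side)
        return side

# Usage: layer_states would come from a frozen (e.g. vision/text) encoder;
# only the side network's parameters are updated during fine-tuning.
layer_states = [torch.randn(2, 10, 768) for _ in range(4)]
out = SideAdapterNetwork()(layer_states)
```

Because the side network sits outside the backbone, backbone activations need not be cached for backpropagation, which is the source of the memory savings decoupled PEFT aims for.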

Syllabus

SIGIR 2024 M2.6 [fp] IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation


Taught by

Association for Computing Machinery (ACM)

Related Courses

Large Language Models: Foundation Models from the Ground Up
Databricks via edX
Fine-Tuning LLMs with PEFT and LoRA
Sam Witteveen via YouTube
Pre-training and Fine-tuning of Code Generation Models
CNCF [Cloud Native Computing Foundation] via YouTube
MLOps: Fine-tuning Mistral 7B with PEFT, QLora, and MLFlow
The Machine Learning Engineer via YouTube
MLOps MLflow: Fine-Tuning Mistral 7B con PEFT y QLora - Español
The Machine Learning Engineer via YouTube