YoVDO

This AI Recognises Moods in Songs and Explains How It Does It

Offered By: Valerio Velardo - The Sound of AI via YouTube

Tags

Explainable AI Courses, Artificial Intelligence Courses, Deep Learning Courses

Course Description

Overview

Explore a deep learning architecture designed to recognize moods in songs and explain its predictions in this 38-minute video analysis. Break down the research paper "Towards Explainable Music Emotion Recognition: The Route via Mid-Level Features," published at ISMIR 2019 by researchers at Johannes Kepler University Linz. Delve into the problem of automatic music emotion recognition, the use of mid-level perceptual features, and three proposed architectures for predicting emotion. Examine experimental results, methods for explaining the model's decisions, and potential applications of this technology. Gain insights into VGG networks, and join The Sound of AI community to further explore this intersection of artificial intelligence and music emotion recognition.
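The core explainability idea discussed in the video can be sketched in a few lines: a song is first mapped to interpretable mid-level perceptual features, and a final linear layer maps those features to emotion ratings. Because that last layer is linear, each prediction decomposes exactly into per-feature contributions ("effects"). The sketch below is a minimal illustration of that decomposition, not the paper's implementation; the feature names follow the mid-level features literature, while the weights and scores are hypothetical.

```python
# Minimal sketch: linear emotion head on top of mid-level perceptual features.
# Feature names follow the mid-level perceptual features literature;
# all numeric values below are hypothetical, for illustration only.

MID_LEVEL_FEATURES = [
    "melodiousness", "articulation", "rhythmic_complexity",
    "rhythmic_stability", "dissonance", "tonal_stability", "minorness",
]

def predict_emotion(mid_level, weights, bias):
    """Linear layer: emotion score = sum(w_i * f_i) + bias."""
    return sum(w * f for w, f in zip(weights, mid_level)) + bias

def explain(mid_level, weights):
    """Per-feature contribution (w_i * f_i) to the emotion score.

    Since the head is linear, these effects sum (plus bias) exactly
    to the prediction, making the decision directly inspectable.
    """
    return {
        name: w * f
        for name, w, f in zip(MID_LEVEL_FEATURES, weights, mid_level)
    }

# Hypothetical weights for one emotion output and mid-level scores for one song.
weights = [0.9, 0.2, -0.1, 0.3, -0.8, 0.5, -1.1]
mid_level = [0.7, 0.4, 0.5, 0.6, 0.2, 0.8, 0.3]

score = predict_emotion(mid_level, weights, bias=0.1)
effects = explain(mid_level, weights)
```

Inspecting `effects` shows which perceptual qualities pushed the prediction up or down for this particular song, which is the route to explanation the paper's title refers to.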

Syllabus

Intro
Join the community!
Automatic music emotion recognition
What do we feed to the network?
Problem
Idea
What are mid-level perceptual features?
Datasets
Three architectures to predict emotion
Architecture details
Experimental results
How can we explain the results?
Weights of linear layer
Song explainability
Possible applications
What have we learnt?
Join the discussion!


Taught by

Valerio Velardo - The Sound of AI

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX