This AI Recognises Moods in Songs and Explains How It Does It
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Explore a deep learning architecture designed to recognize moods in songs and explain its predictions in this 38-minute video analysis. Break down the research paper "Towards Explainable Music Emotion Recognition: The Route via Mid-level Features", presented at ISMIR 2019 by researchers from Johannes Kepler University Linz. Delve into the problem of automatic music emotion recognition, the use of mid-level perceptual features, and three proposed architectures for predicting emotion. Examine the experimental results, the methods used to explain the model's decisions, and potential applications of the technology. Along the way, gain insight into VGG networks, and join The Sound of AI community to continue the discussion.
Syllabus
Intro
Join the community!
Automatic music emotion recognition
What do we feed to the network?
Problem
Idea
What are mid-level perceptual features?
Datasets
Three architectures to predict emotion
Architecture details
Experimental results
How can we explain the results?
Weights of linear layer
Song explainability (see the sketch after this syllabus)
Possible applications
What have we learnt?
Join the discussion!
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Explainable AI: Scene Classification and GradCam Visualization
Coursera Project Network via Coursera
Artificial Intelligence Privacy and Convenience
LearnQuest via Coursera
Natural Language Processing and Capstone Assignment
University of California, Irvine via Coursera
Modern Artificial Intelligence Masterclass: Build 6 Projects
Udemy
Data Science for Business
DataCamp