Recognizing Sound Events - 2019
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore recent developments in automatic sound classification in this lecture by Dan Ellis of Google's Sound Understanding team. Delve into the application of vision-inspired deep neural networks to classifying the 'AudioSet' ontology of approximately 600 sound events, spanning speech, music, and environmental sounds. Learn about related applications in bioacoustics and cross-modal learning, and about lessons from a recent Kaggle competition run in collaboration with UPF Barcelona. Hear about the upcoming release of a pretrained model aimed at making state-of-the-art generic sound recognition widely accessible. This talk, presented at the Center for Language & Speech Processing (CLSP) at JHU, offers insight into current advances in sound event recognition and their practical applications.
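To make the "vision-inspired" idea concrete, here is a minimal sketch (not the speaker's actual model) of how a log-mel spectrogram patch can be treated like an image and classified into AudioSet-style event labels with a small convolutional network. The patch shape (96 frames x 64 mel bands) and the 527-class output are common AudioSet conventions assumed here for illustration; the talk itself refers to roughly 600 sound events.

```python
# Hedged sketch: an image-style CNN over log-mel spectrogram patches with a
# multi-label sigmoid output, in the spirit of AudioSet-style classifiers.
# Shapes and class count are illustrative assumptions, not details from the talk.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 527          # assumed AudioSet label count; the ontology has ~600 events
PATCH_SHAPE = (96, 64, 1)  # ~0.96 s of audio as 96 frames x 64 mel bands

def build_sound_event_cnn() -> tf.keras.Model:
    """Small vision-style CNN over log-mel patches with per-class sigmoid scores."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=PATCH_SHAPE),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        # Sigmoid rather than softmax: several sound events can be active at once.
        tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
    ])

if __name__ == "__main__":
    model = build_sound_event_cnn()
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # Dummy batch of log-mel patches, just to show the expected tensor shapes.
    dummy_patches = np.random.randn(4, *PATCH_SHAPE).astype("float32")
    print(model.predict(dummy_patches).shape)  # (4, 527) per-class scores
```

In practice, a pretrained generic sound classifier of the kind described in the talk would be downloaded and used for inference rather than trained from scratch; the sketch above only illustrates the spectrogram-as-image framing.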
Syllabus
Recognizing Sound Events -- Dan Ellis (Google) - 2019
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Sequences, Time Series and Prediction (DeepLearning.AI via Coursera)
A Beginners Guide to Data Science (Udemy)
Artificial Neural Networks (ANN) Made Easy (Udemy)
Makine Mühendisleri için Derin Öğrenme [Deep Learning for Mechanical Engineers] (Udemy)
Customer Analytics in Python (Udemy)