Recognizing Sound Events - 2019
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore the cutting-edge developments in automatic sound classification through this comprehensive lecture by Dan Ellis from Google's Sound Understanding team. Delve into the application of vision-inspired deep neural networks for classifying the 'AudioSet' ontology of approximately 600 sound events, encompassing speech, music, and environmental sounds. Learn about related applications in bioacoustics and cross-modal learning, and discover insights from a recent Kaggle competition run in collaboration with UPF Barcelona. Gain knowledge about the upcoming release of a pretrained model aimed at making state-of-the-art generic sound recognition widely accessible. This talk, presented at the Center for Language & Speech Processing (CLSP) at JHU, offers valuable insights into the latest advancements in sound event recognition and its practical applications.
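The description does not name the pretrained model planned for release. As an illustrative sketch only, the example below assumes a publicly available AudioSet classifier such as Google's YAMNet model on TensorFlow Hub, which maps a raw waveform to scores over AudioSet sound classes; the model choice and the silent test waveform are assumptions, not details from the talk.

# Minimal sketch (assumption: the released model resembles YAMNet on TensorFlow Hub).
import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load a pretrained AudioSet classifier from TensorFlow Hub.
model = hub.load('https://tfhub.dev/google/yamnet/1')

# YAMNet expects mono 16 kHz audio as a 1-D float32 array in [-1.0, 1.0].
# One second of silence serves as a stand-in for real audio here.
waveform = np.zeros(16000, dtype=np.float32)

# The model returns per-frame class scores, embeddings, and a log-mel spectrogram.
scores, embeddings, spectrogram = model(waveform)

# Map the top-scoring class index back to its AudioSet display name.
class_map_path = model.class_map_path().numpy().decode('utf-8')
with tf.io.gfile.GFile(class_map_path) as f:
    class_names = [row['display_name'] for row in csv.DictReader(f)]

mean_scores = tf.reduce_mean(scores, axis=0)
print('Top prediction:', class_names[int(tf.argmax(mean_scores))])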
Syllabus
Recognizing Sound Events -- Dan Ellis (Google) - 2019
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent