Recognizing Sound Events - 2019
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore the cutting-edge developments in automatic sound classification through this comprehensive lecture by Dan Ellis from Google's Sound Understanding team. Delve into the application of vision-inspired deep neural networks for classifying the 'AudioSet' ontology of approximately 600 sound events, encompassing speech, music, and environmental sounds. Learn about related applications in bioacoustics and cross-modal learning, and discover insights from a recent Kaggle competition run in collaboration with UPF Barcelona. Gain knowledge about the upcoming release of a pretrained model aimed at making state-of-the-art generic sound recognition widely accessible. This talk, presented at the Center for Language & Speech Processing (CLSP) at JHU, offers valuable insights into the latest advancements in sound event recognition and its practical applications.
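For readers who want to try generic sound recognition themselves, the pretrained model mentioned in the talk appears to correspond to the AudioSet classifier Google later published as YAMNet on TensorFlow Hub. The short Python sketch below is a minimal example under that assumption; it uses the tensorflow, tensorflow_hub, and numpy packages and the public model URL https://tfhub.dev/google/yamnet/1, and substitutes a silent waveform where real audio would go.

import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the pretrained AudioSet sound-event classifier from TensorFlow Hub.
model = hub.load('https://tfhub.dev/google/yamnet/1')

# The model expects a mono float32 waveform sampled at 16 kHz, scaled to [-1, 1].
# One second of silence stands in for real audio here.
waveform = np.zeros(16000, dtype=np.float32)

# Inference returns per-frame class scores, embeddings, and a log-mel spectrogram.
scores, embeddings, spectrogram = model(waveform)

# Map the highest mean score to its AudioSet class name using the bundled class map.
class_map_path = model.class_map_path().numpy().decode('utf-8')
with tf.io.gfile.GFile(class_map_path) as f:
    class_names = [row['display_name'] for row in csv.DictReader(f)]

top_class = class_names[int(scores.numpy().mean(axis=0).argmax())]
print('Predicted sound event:', top_class)

Replacing the silent waveform with 16 kHz audio loaded from a file (for example via soundfile or librosa) yields predictions over the full AudioSet ontology discussed in the lecture.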
Syllabus
Recognizing Sound Events -- Dan Ellis (Google) - 2019
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Digital Sound Design - Emory University via Coursera
Foundations of Wavelets and Multirate Digital Signal Processing - Indian Institute of Technology Bombay via Swayam
iOS Development for Creative Entrepreneurs - University of California, Irvine via Coursera
Deploying TinyML - Harvard University via edX
Digital Signal Processing - École Polytechnique Fédérale de Lausanne via Coursera