Extracting Information From Music Signals
Offered By: University of Victoria via Kadenze
Course Description
Overview
The course introduces audio signal processing concepts motivated by examples from MIR (music information retrieval) research. More specifically, students will learn about spectral analysis and time-frequency representations in general, monophonic pitch estimation, audio feature extraction, beat tracking, and tempo estimation.
Syllabus
- DFT and Time-Frequency Representations
- In this session, we will learn about sampling, quantization, RMS, and loudness. We will also cover the DFT, Hilbert spaces, and spectrograms.
- Monophonic Pitch Detection
- In this session, we will cover pitch vs. fundamental frequency, time-domain and frequency-domain methods, perceptual models, and an overview of applications (query-by-humming, auto-tuning).
- Time, Frequency, and Sinusoids
- In this session, we will cover Phasors, Sinusoids, and Complex Numbers.
- Rhythm Analysis
- This session covers tempo estimation, beat tracking, drum transcription, and pattern detection.
- Audio Feature Extraction
- We will go over spectral features, Mel-frequency cepstral coefficients (MFCCs), temporal aggregation, and chroma and pitch profiles.
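To give a flavor of the time-frequency material in the first sessions, here is a minimal sketch (not course material) of computing a magnitude spectrogram of a synthetic tone with NumPy's FFT; the frame length, hop size, and test frequency are illustrative choices, not values from the course.

```python
import numpy as np

sr = 8000                          # sample rate in Hz (illustrative)
t = np.arange(sr) / sr             # one second of time samples
x = np.sin(2 * np.pi * 440 * t)    # a 440 Hz sine tone

n_fft, hop = 512, 256              # frame length and hop size
window = np.hanning(n_fft)         # Hann window to reduce spectral leakage
frames = [x[i:i + n_fft] * window
          for i in range(0, len(x) - n_fft, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrogram: (frames, bins)

# The strongest frequency bin should land near 440 Hz,
# within one bin (sr / n_fft = 15.625 Hz) of the true frequency.
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / n_fft
```

Each row of `spec` is the short-time spectrum of one windowed frame; stacking them over time is exactly the spectrogram representation covered in the DFT session.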
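The monophonic pitch detection session mentions time-domain methods; a classic one is autocorrelation, sketched below on a synthetic tone. This is a hedged illustration, not the course's algorithm: the sample rate, tone, and 1 ms minimum-lag cutoff are assumptions for the example.

```python
import numpy as np

sr = 8000
t = np.arange(2048) / sr
x = np.sin(2 * np.pi * 220 * t)    # a 220 Hz tone

# Autocorrelation; keep non-negative lags only
ac = np.correlate(x, x, mode="full")[len(x) - 1:]

# Skip lags shorter than 1 ms (i.e. pitches above 1000 Hz),
# then take the strongest remaining peak as the fundamental period.
min_lag = sr // 1000
lag = min_lag + ac[min_lag:].argmax()
f0 = sr / lag                      # estimated fundamental frequency
```

The estimate is quantized to integer lags (here 8000 / 36 ≈ 222 Hz for a 220 Hz tone), which is why practical pitch trackers add interpolation and perceptual refinements, topics the session contrasts under time-domain vs. frequency-domain vs. perceptual models.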
Taught by
George Tzanetakis
Related Courses
- Audio Signal Processing for Music Applications (Stanford University via Coursera)
- Binaural Hearing for Robots (Inria, the French Institute for Research in Computer Science and Automation, via France Université Numerique)
- Inside the Music & Video Tech Industry (Kadenze)
- Real-Time Audio Signal Processing in Faust (Stanford University via Kadenze)
- Multi-Scale Multi-Band DenseNets for Audio Source Separation (Launchpad via YouTube)