
Fundamentals of Music Processing (21M.387 / 21M.587 / 6.3020)

See a recent Syllabus.

Fundamentals of Music Processing is offered:

  • Fall 2025. TR 1:00-2:30 (most likely)
  • 4-270
  • Instructor: TBD
  • TA: TBD

Prerequisites:

  • 6.100 or intermediate-level Python programming
  • 6.300 or familiarity with signals and systems, including Fourier transforms
  • 21M.051 or fundamentals of music theory – reading music and a basic understanding of scales and chords

Fundamentals of Music Processing deals with music analysis in the audio domain as a signal-processing problem. In the same way that speech processing uses signal analysis to understand spoken words, music processing uses signal analysis on music waveforms to understand higher-level musical structure.

We begin by introducing frequency analysis using the Discrete Fourier Transform and its commonly used relative, the Short-Time Fourier Transform. From there we study feature extraction methods such as chroma analysis and onset detection, and examine analysis tools like dynamic time warping and self-similarity matrices. These techniques serve to develop a variety of music analysis applications, including:

  • Onset classification
  • Temporal alignment of different renditions of the same music
  • Automatic chord recognition and key detection
  • Tempo and beat tracking
  • Structural analysis of music
  • Content-based audio retrieval
  • Music-based audio decomposition

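To give a concrete sense of the feature extraction steps mentioned above, here is a minimal sketch of STFT, chroma, and onset computation in Python, assuming the open-source librosa library; the library choice, filename, and parameter values are illustrative only and not necessarily those used in the course assignments.

    import numpy as np
    import librosa

    # Load an audio file (placeholder path) at a 22.05 kHz sample rate.
    y, sr = librosa.load("example.wav", sr=22050)

    # Short-Time Fourier Transform: 2048-sample windows, 512-sample hop.
    S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

    # Chroma features: fold spectral energy into the 12 pitch classes.
    chroma = librosa.feature.chroma_stft(S=S**2, sr=sr, hop_length=512)

    # Onset detection: estimated note-onset times, in seconds.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    print(chroma.shape)   # (12, number_of_frames)
    print(onsets[:5])     # first few detected onset times
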
Students practice these techniques with in-class lab exercises and coding assignments in Python. Students taking the graduate version (21M.587) complete an independent final project as well.

The class uses a required text, Fundamentals of Music Processing by Meinard Müller.