Features¶
This module provides methods to analyze symbolic music.
Pitch¶
- Extracts the highest and lowest pitches from a list of notes.
- Computes the difference between the highest and the lowest pitches in a list of notes.
- Counts the total number of onsets in the notes sequence.
- Counts the total number of different note classes (NoteClassBase).
- Counts the total number of different note classes (NoteClassBase).
- Calculates the number of different pitches, or Pitch Counts (PC), of a list of Notes.
- Returns the latest note name (NoteClassBase) in the sequence.
- Computes the Pitch Class Histogram (PCH) of a list of musicaiz Note objects.
- Computes the Pitch Class Transition Matrix (PCTM) of a list of musicaiz Note objects.
- Plots the Pitch Class Transition Matrix (PCTM).
- Computes the Average Pitch Interval (PI) of a list of musicaiz Note objects.
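The pitch descriptors above (PC, PCH, PCTM) reduce to counting pitch classes and the transitions between them. Below is a minimal sketch of those two computations from raw MIDI pitch numbers; it is not the musicaiz API, only an illustration of what the features measure.

```python
# A minimal sketch (not the musicaiz API) of a Pitch Class Histogram (PCH)
# and a Pitch Class Transition Matrix (PCTM) computed from MIDI pitches.
from typing import List
import numpy as np

def pitch_class_histogram(pitches: List[int]) -> np.ndarray:
    """12-bin histogram of pitch classes (C=0 ... B=11), normalized to sum to 1."""
    hist = np.zeros(12)
    for p in pitches:
        hist[p % 12] += 1
    return hist / hist.sum() if hist.sum() > 0 else hist

def pitch_class_transition_matrix(pitches: List[int]) -> np.ndarray:
    """12x12 matrix counting transitions between consecutive pitch classes."""
    pctm = np.zeros((12, 12))
    for prev, curr in zip(pitches[:-1], pitches[1:]):
        pctm[prev % 12, curr % 12] += 1
    return pctm

# Example: C4, E4, G4, C5 (a C major arpeggio).
pitches = [60, 64, 67, 72]
print(pitch_class_histogram(pitches))          # peaks at bins 0, 4 and 7
print(pitch_class_transition_matrix(pitches))  # transitions C->E, E->G, G->C
```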
Harmony¶
- Gets the chord type.
- Gets the intervals between pairs of notes (the 1st note and the rest of the notes in the note seq) of a sorted note seq.
- Predicts a chord in a note sequence with only note values, so no note durations are taken into account.
- This method is similar to the Scales.get_scales_degrees_from_chord method, but in this case it is applied to an input note_seq, not to a chord.
- Gets all possible scales and degrees from a chords list (chord progression). We retrieve a list of degrees whose items correspond to one time step each.
- Uses predict_possible_progressions to predict all the scales and progressions that belong to a note_seq, but this method only returns one of them.
- Returns a list of lists of all possible orders for a note seq.
- Removes a repeated note in a note_seq.
- Extracts the note positions in the chromatic scale of the notes in a notes sequence.
- Sorts a note seq (list of note objects) by the index of the notes in the chromatic scale.
- Computes the maximum number of notes that are overlapped in the harmonic axis.
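As a rough illustration of the interval and chord-prediction ideas above, the sketch below computes the semitone intervals between the lowest note and the rest of a sorted note sequence and matches the result against a few hard-coded triad templates. The CHORD_TEMPLATES table and both function names are hypothetical, not part of musicaiz.

```python
# A minimal sketch (not the musicaiz API) of two ideas described above:
# intervals between the 1st note and the rest of a sorted note sequence,
# and a naive template match to guess the chord type from those intervals.
from typing import List, Optional

CHORD_TEMPLATES = {
    (4, 7): "major triad",
    (3, 7): "minor triad",
    (3, 6): "diminished triad",
    (4, 8): "augmented triad",
}

def intervals_from_root(pitches: List[int]) -> List[int]:
    """Semitone distance from the lowest note to every other note."""
    ordered = sorted(pitches)
    root = ordered[0]
    return [p - root for p in ordered[1:]]

def guess_chord(pitches: List[int]) -> Optional[str]:
    """Reduce the intervals to interval classes and look them up in the templates."""
    interval_classes = tuple(sorted({i % 12 for i in intervals_from_root(pitches)}))
    return CHORD_TEMPLATES.get(interval_classes)

print(intervals_from_root([60, 64, 67]))  # [4, 7]
print(guess_chord([60, 64, 67]))          # "major triad"
print(guess_chord([57, 60, 64]))          # "minor triad" (A, C, E)
```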
Rhythm¶
This submodule contains the implementation of part of the paper:
[1] Roig, C., Tardón, L. J., Barbancho, I., & Barbancho, A. M. (2014). Automatic melody composition based on a probabilistic model of music style and harmonic rules. Knowledge-Based Systems, 71, 419-434. http://dx.doi.org/10.1016/j.knosys.2014.08.018
The implementation follows the paper's method to predict rhythmic patterns. This module contains:
- Tempo (or bpm) estimation
  - get the IOIs with the get_ioi method (see the IOI sketch after this list).
  - get the error ej.
- Time signature estimation
  - get the labeled beat vector (or IOI') from the IOIs with get_labeled_beat_vector.
  - get the Bar Split Vectors (BSV) for each beat (k in the paper) with get_split_bar_vector. k goes from 2 to 12, which are the most common time_sig numerators.
  - compute the RSSM with each BSV with compute_rhythm_self_similarity_matrix.
  - get the time_sig numerator, which will be the RSSM with the highest number of repeated bar instances.
- Rhythm extraction
- Pitch contour extraction
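The sketch below illustrates only the first step of the pipeline, extracting Inter-Onset Intervals (IOIs) from note start times; the labeling into IOI' and the error term ej in the paper are more involved. The dictionary-based note representation is an assumption for the example, not the musicaiz Note object.

```python
# A minimal sketch (not the musicaiz API) of extracting Inter-Onset
# Intervals (IOIs) from the notes' start times.
from typing import List

def note_start_times(notes: List[dict]) -> List[float]:
    """Collect the onset (start time, in seconds) of each note, assuming each
    note is a dict with a 'start' key. musicaiz uses Note objects instead."""
    return sorted(n["start"] for n in notes)

def get_iois(start_times: List[float]) -> List[float]:
    """IOI_i = onset_{i+1} - onset_i."""
    return [t2 - t1 for t1, t2 in zip(start_times[:-1], start_times[1:])]

notes = [{"start": 0.0}, {"start": 0.5}, {"start": 1.0}, {"start": 1.75}]
print(get_iois(note_start_times(notes)))  # [0.5, 0.5, 0.75]
```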
- Extracts the time start of the notes in a notes sequence.
- Gets the IOIs of a list of time start values.
- Converts the IOIs to the labeled beat vector (or IOI').
- This function computes the Rhythm Self-Similarity Matrix (RSSM).
- This function computes all the RSSMs for time_sig numerators (k) from 2 to 12 and outputs the beat (k) which will be the predicted time_sig numerator.
- Counts the total number of different note (NoteClassBase) classes.
- Uses get_note_classes to build a 1D list vector of 12 dimensions in which each element represents the counts of each note name in the chromatic scale of 12 notes.
- Computes the Note Length Transition Matrix (NLTM) of a list of Note objects.
- Take into account that if you generated a NLTM with triplets = False, then the triplet argument in this function will also be False.
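To make the time-signature estimation concrete, the following sketch captures only the gist of the RSSM step: split a per-beat rhythm vector into candidate bars of k beats (k = 2 to 12), count how many bars are exact repetitions of each other, and keep the k with the most repeats. It is not the paper's exact formulation nor the musicaiz implementation.

```python
# A rough sketch of the idea behind the time-signature-numerator prediction:
# chop a labeled per-beat vector into candidate bars of k beats, measure how
# self-similar the bars are, and keep the k with the most repeated bars.
from typing import List, Sequence

def split_into_bars(beat_vector: Sequence[int], k: int) -> List[tuple]:
    """Chop the labeled beat vector into non-overlapping bars of k beats."""
    return [tuple(beat_vector[i:i + k]) for i in range(0, len(beat_vector) - k + 1, k)]

def repeated_bar_count(beat_vector: Sequence[int], k: int) -> int:
    """Number of bar pairs that are exact repetitions (off-diagonal matches
    of a binary self-similarity matrix over bars)."""
    bars = split_into_bars(beat_vector, k)
    return sum(
        1
        for i in range(len(bars))
        for j in range(i + 1, len(bars))
        if bars[i] == bars[j]
    )

def predict_numerator(beat_vector: Sequence[int], candidates=range(2, 13)) -> int:
    """Candidate k with the most repeated bars wins."""
    return max(candidates, key=lambda k: repeated_bar_count(beat_vector, k))

# A toy 3/4-like pattern repeated four times: strong-weak-weak.
beat_vector = [2, 1, 1] * 4
print(predict_numerator(beat_vector))  # 3
```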
Self-Similarity Matrices¶
This submodule presents different implementations of self-similarity matrices.
The papers that are implemented in this submodule are the following:
[1] Louie, W. MusicPlot: Interactive Self-Similarity Matrix for Music Structure Visualization. https://wlouie1.github.io/MusicPlot/musicplot_paper.pdf
The process to obtain the SSM with this method is:
1. Group the notes in bars and subdivisions.
2. Extract the highest note in each subdivision.
3. Calculate m_prime = [p1 - p2, d2 / d1, ...] with p the pitch and d the note duration.
4. Compute the SSM function.
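The sketch below illustrates steps 3 and 4 under the assumption that every bar has the same number of subdivisions: each bar is turned into an m_prime-style vector of pitch differences and duration ratios, and bars are compared with the cosine similarity. It is a simplified illustration, not the musicaiz implementation of the Louie method.

```python
# A minimal sketch (not the musicaiz implementation) of steps 3 and 4 above:
# build a per-bar feature vector from the highest note of each subdivision
# (pitch differences and duration ratios) and compare bars with the cosine
# similarity to obtain the SSM.
from typing import List, Tuple
import numpy as np

Bar = List[Tuple[int, float]]  # (pitch, duration) of the highest note per subdivision

def bar_feature(bar: Bar) -> np.ndarray:
    """m_prime-style vector for one bar: [p1 - p2, d2 / d1, p2 - p3, d3 / d2, ...]."""
    feats = []
    for (p1, d1), (p2, d2) in zip(bar[:-1], bar[1:]):
        feats.extend([p1 - p2, d2 / d1])
    return np.asarray(feats, dtype=float)

def cosine_ssm(bars: List[Bar]) -> np.ndarray:
    """SSM[i, j] = cosine similarity between the feature vectors of bars i and j."""
    features = np.stack([bar_feature(b) for b in bars])
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    features = features / np.where(norms == 0, 1.0, norms)
    return features @ features.T

bars = [
    [(67, 0.5), (64, 0.5), (60, 1.0)],  # bar 1
    [(67, 0.5), (64, 0.5), (60, 1.0)],  # bar 2 repeats bar 1
    [(72, 1.0), (71, 0.5), (69, 0.5)],  # bar 3 is different
]
print(np.round(cosine_ssm(bars), 2))  # bars 1 and 2 give a similarity of 1.0
```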
- Computes the selected SSM.
- Computes the SSM with the Louie method: https://wlouie1.github.io/MusicPlot/musicplot_paper.pdf
- Computes the SSM with the selected measure for all the bars.
- Computes the SSM with the selected measure for all the bars.
- Plots an SSM.
- Converts a feature vector with the highest notes per subdivision and bar into a Self-Similarity Matrix with the cosine distance.
- Computes the m_prime vector, which calculates the difference between two consecutive notes' pitches and the division of the notes' durations.
- Computes the novelty function of an SSM.
- Gets the segment boundaries of an SSM.
- Plots the novelty curve from an SSM.
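For the novelty and boundary functions listed above, the following is a rough sketch in the spirit of a Foote-style checkerboard kernel slid along the main diagonal of the SSM, followed by simple peak picking. The kernel size, threshold and peak-picking rule are illustrative assumptions, not the musicaiz defaults.

```python
# A rough sketch of a novelty curve and boundary picking from an SSM
# (checkerboard correlation along the main diagonal, then peak picking).
import numpy as np

def checkerboard_kernel(size: int) -> np.ndarray:
    """2*size x 2*size kernel: +1 inside the two 'same segment' blocks,
    -1 inside the two 'different segment' blocks."""
    sign = np.ones((2 * size, 2 * size))
    sign[:size, size:] = -1
    sign[size:, :size] = -1
    return sign

def novelty_curve(ssm: np.ndarray, size: int = 2) -> np.ndarray:
    """Correlate the kernel with the SSM along its main diagonal."""
    kernel = checkerboard_kernel(size)
    n = ssm.shape[0]
    novelty = np.zeros(n)
    for i in range(size, n - size):
        window = ssm[i - size:i + size, i - size:i + size]
        novelty[i] = np.sum(window * kernel)
    return novelty

def boundaries(novelty: np.ndarray, threshold: float = 0.0) -> list:
    """Indexes that are local maxima of the novelty curve above a threshold."""
    return [
        i for i in range(1, len(novelty) - 1)
        if novelty[i] > threshold
        and novelty[i] >= novelty[i - 1]
        and novelty[i] > novelty[i + 1]
    ]

# Block-diagonal toy SSM: two homogeneous segments of 4 bars each.
ssm = np.kron(np.eye(2), np.ones((4, 4)))
print(boundaries(novelty_curve(ssm)))  # a boundary at index 4
```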
Graphs¶
This submodule presents graph representations of symbolic music.
The papers that are implemented in this submodule are the following:
[1] Jeong, D., Kwon, T., Kim, Y., & Nam, J. (2019) Graph neural network for music score data and modeling expressive piano performance. In International Conference on Machine Learning, 3060-3070 https://proceedings.mlr.press/v97/jeong19a.html
- Converts a Musa object into a Graph where nodes are the notes and edges are connections between notes.
- Plots a graph with matplotlib.
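As a minimal illustration of the note-graph idea (not the musicaiz API, and far simpler than the edge types used in Jeong et al., 2019), the sketch below builds a networkx graph in which notes are nodes and edges connect overlapping or consecutive notes. The (start, end, pitch) tuples are an assumption for the example.

```python
# A minimal sketch of a note graph: notes become nodes and edges connect
# notes that overlap in time or follow each other.
import networkx as nx

# Hypothetical note list: (start, end, pitch) tuples.
notes = [(0.0, 1.0, 60), (0.0, 1.0, 64), (1.0, 2.0, 67), (2.0, 3.0, 72)]

graph = nx.Graph()
for i, (start, end, pitch) in enumerate(notes):
    graph.add_node(i, start=start, end=end, pitch=pitch)

for i, (s1, e1, _) in enumerate(notes):
    for j, (s2, e2, _) in enumerate(notes):
        if j <= i:
            continue
        if s1 < e2 and s2 < e1:  # the two notes overlap in time
            graph.add_edge(i, j, type="overlap")
        elif s2 == e1:           # note j starts right when note i ends
            graph.add_edge(i, j, type="consecutive")

print(graph.edges(data=True))
```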
Structure¶
This submodule segments symbolic music according to its form or structure.
- Gets the note indexes where a section ends.
- Gets the beat indexes where a section ends.
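As a minimal, hypothetical illustration of what "note indexes where a section ends" can mean, the sketch below takes section boundary times (for example, obtained from a novelty curve) and returns the index of the last note starting before each boundary. Neither the function name nor the inputs reflect the musicaiz API.

```python
# A hypothetical sketch: map section boundary times to note indexes by
# taking the last note that starts before each boundary.
from typing import List

def section_end_note_indexes(note_starts: List[float], boundaries: List[float]) -> List[int]:
    ends = []
    for boundary in boundaries:
        candidates = [i for i, start in enumerate(note_starts) if start < boundary]
        if candidates:
            ends.append(candidates[-1])
    return ends

note_starts = [0.0, 0.5, 1.0, 4.0, 4.5, 5.0]
print(section_end_note_indexes(note_starts, boundaries=[4.0]))  # [2]
```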