I want to understand what changes happen in musical instruments when they are played at different levels of intensity, so I can synthesize their nuances on commercial synthesizers.
I remember that each sound is made of a fundamental frequency plus other partials, and that the different combinations of partial frequencies and their levels are what let us tell one timbre from another.
So I want to isolate samples of various instruments at various levels of intensity and decompose each one into its component frequencies, to check what happens when the intensity rises, and then use that information to reproduce these nuances on my synthesizers.
But I still don't know exactly where to start. Naively, I would play these samples in FL Studio, separate the frequency bands with a simple EQ, render the files, and then try to use that information to simulate the instruments on synths.
But I guess something better can be done: some kind of frequency decomposition, or some other technique whose name I might not know.
I'm trying to do this with FL Studio, Mathematica and Matlab. Any tips?
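To show the kind of analysis I have in mind, here's a minimal sketch in Python with NumPy (it should translate directly to Matlab's `fft` or Mathematica's `Fourier`). It assumes the decomposition I'm after is a Fourier transform: I fake an "instrument sample" as a fundamental plus two partials, take the magnitude spectrum, and pick out the partials and their amplitudes. The 220/440/660 Hz tone is just a stand-in for a real rendered sample:

```python
import numpy as np

# Synthetic stand-in for one instrument sample at one dynamic level:
# a 220 Hz fundamental with two weaker partials.
sr = 44100                      # sample rate in Hz
t = np.arange(0, 1.0, 1 / sr)   # one second of audio
signal = (1.00 * np.sin(2 * np.pi * 220 * t)    # fundamental
          + 0.50 * np.sin(2 * np.pi * 440 * t)  # 2nd partial
          + 0.25 * np.sin(2 * np.pi * 660 * t)) # 3rd partial

# Magnitude spectrum: window to reduce leakage, then a real FFT.
# The normalization recovers each sine's original amplitude.
window = np.hanning(len(signal))
spectrum = np.abs(np.fft.rfft(signal * window)) / np.sum(window) * 2
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# Crude peak picking: local maxima above an amplitude threshold.
peaks = [i for i in range(1, len(spectrum) - 1)
         if spectrum[i] > spectrum[i - 1]
         and spectrum[i] > spectrum[i + 1]
         and spectrum[i] > 0.1]

for i in peaks:
    print(f"{freqs[i]:7.1f} Hz  amplitude {spectrum[i]:.3f}")
# ≈ 220 Hz at amplitude 1.0, 440 Hz at 0.5, 660 Hz at 0.25
```

The idea would be to run this on a real sample at each dynamic level and compare the resulting partial amplitudes level by level, instead of eyeballing EQ bands.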