Hi all,
This topic isn't about Pd per se, but I bet there are many on this list who know about it, so it was the best place I could think of to ask.
Is anyone aware of studies, articles, or other research on the amplitude envelopes of real-world musical instruments (strings, brass, voice, etc.) during performance? I've seen a few articles here and there that explain how the envelopes differ between instruments (such as figure 5 of this article: http://interscience.in/IJCSI_Vol2Iss1/IJCSI_Paper_4.pdf), but the analysis samples are too simple to be musically useful, usually a single note recorded with basic dynamics. What I'd like to find is more about how ADSR curves look over actual musical phrases.
The end result would be applied to a composition tool I'm working on that provides control over the envelopes of the notes. Right now I'm going with the standard exponential attack/decay/release found in analog synths, but I've always wondered what these curves look like for acoustic instruments in real-world situations.
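For context, here's roughly the kind of exponential segment generation I mean - a minimal Python sketch, not the actual tool code, and the parameter values and curve shape are only illustrative:

import numpy as np

def exp_segment(start, end, n, curve=5.0):
    # Exponential ramp from start to end over n samples.
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    shape = (1.0 - np.exp(-curve * t)) / (1.0 - np.exp(-curve))
    return start + (end - start) * shape

def adsr(attack, decay, sustain, release, note_len, sr=44100):
    # Classic exponential ADSR: times in seconds, sustain is a 0..1 level.
    a = exp_segment(0.0, 1.0, int(attack * sr))
    d = exp_segment(1.0, sustain, int(decay * sr))
    hold = np.full(max(int((note_len - attack - decay) * sr), 0), sustain)
    r = exp_segment(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, hold, r])

env = adsr(attack=0.01, decay=0.1, sustain=0.7, release=0.3, note_len=1.0)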
Or if someone can think of a better place to ask, I'd be happy to post my question there. It's not easy to find recordings to do the analysis on, and it would be quite time consuming to produce anything reliable, but I figure there must be people out there who have done serious work on this topic.
cheers, Rich
The number of real-life envelopes is close to endless when you're looking at performances of good musicians. That's one reason why good musicians cannot be replaced by machines.
The best way to find out is to play as many instruments as you can - and spend some serious time doing it.
The second best thing would be recording good musicians in various musical contexts.
The last option would be looking at some decent sample libraries, if you happen to have access to them - those are usually very expensive.
From my experience of building realtime playable instruments, I can tell you that sample libraries represent only a very small portion of the available articulations and envelopes.
Ingo
Hi Rich,
Maybe you can consider taking another approach - not starting from the envelopes but from how the sound is made. Work from an algorithm based on the physics of the instrument and the player, and listen (with your ears) to how musicians use things like embouchure, trills and frills, and dynamics. As Ingo said, there is no one way to do it. To take brass instruments as an example: a jazz musician will probably play differently from a Bulgarian folk musician or a Baroque musician. Different styles of music ask for different styles of playing, and thus different envelopes.
Cheers, Wilfred KlankOntwerp
Thanks for the advice, though I am actually looking for existing research on the topic, and I find it a bit surprising how little there is, considering how much amplitude envelopes contribute to the timbre of a sound. I do conduct my own research, mainly through transcription and reproduction, but I'm of the mind that one should at least try to be aware of existing research.
Thanks again, Rich
Hello Rich,
There must be something along these lines within the IRCAM archives - a quick 'envelope' search pulled up several interesting results.
http://www.ircam.fr/26.html?L=1
Regards,
Julian
"R" == Rich Eakin rtepub@gmail.com writes:
R> What I'd like to find is more how the ADSR curves look for actual
R> musical phrases.
Hi Rich.
Bill Schottstaedt put together a wonderful hands-on guide to extracting envelopes (amplitude, pitch) from recordings of birdsong, with great results in the synthesis:
https://ccrma.stanford.edu/software/snd/snd/sndscm.html#animalsdoc
I've used the same approach on recordings of more complex natural environments, but the method would be useful for recordings of notes or phrases played on musical instruments.
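In case the amplitude side alone is enough to get started, here's a minimal sketch in Python rather than CLM - not Bill's code; it assumes numpy and soundfile are installed, and the file name is only a placeholder:

import numpy as np
import soundfile as sf

def amp_envelope(path, frame=1024, hop=256):
    # Frame-by-frame RMS amplitude envelope of a recording.
    x, sr = sf.read(path)
    if x.ndim > 1:                      # mix down to mono if needed
        x = x.mean(axis=1)
    starts = range(0, len(x) - frame, hop)
    env = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2)) for i in starts])
    times = np.arange(len(env)) * hop / sr
    return times, env

# times, env = amp_envelope("birdsong.wav")   # placeholder file name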
Cheers,
-anders
The number of real-life envelopes is close to endless when you're looking at performances of good musicians. That's one reason why good musicians cannot be replaced by machines.
Pretty sure machines can create a close-to-endless number of envelopes too. It's just that nobody has taken enough care to start programming them that way.
The big difference is in how humans and machines make the choices. Machines can create any frequency and combine all kinds of frequencies in millions of ways. Does that make it music?
By the way, you shouldn't limit envelopes to volume. Timbre, frequency, purity (noise vs. defined frequencies), vibrato, changes in the amount and type of noise, and so on all have envelopes - on every single note, as well as across the structure of sections or an entire piece of music.
Looking at waveform envelopes describes just a small percentage of what you can actually hear in a single note of music.
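To make that concrete, here is a rough sketch - plain Python, with made-up breakpoint values that don't describe any real instrument - of a single note driven by several envelopes at once (loudness, vibrato depth, noisiness):

import numpy as np

SR = 44100

def bpf(points, n):
    # Breakpoint envelope: list of (time 0..1, value) pairs -> n samples.
    times, values = zip(*points)
    return np.interp(np.linspace(0.0, 1.0, n), times, values)

def note(freq=220.0, dur=1.0):
    n = int(dur * SR)
    t = np.arange(n) / SR
    amp   = bpf([(0, 0), (0.05, 1), (0.2, 0.6), (0.9, 0.5), (1, 0)], n)  # loudness
    vib   = bpf([(0, 0), (0.4, 0), (1, 8)], n)               # vibrato depth in Hz
    noise = bpf([(0, 0.3), (0.1, 0.02), (1, 0.02)], n)       # breathiness mix
    inst_freq = freq + vib * np.sin(2 * np.pi * 5.0 * t)     # 5 Hz vibrato rate
    phase = 2 * np.pi * np.cumsum(inst_freq) / SR
    tone = np.sin(phase)
    return amp * ((1 - noise) * tone + noise * np.random.uniform(-1, 1, n))

y = note()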
Ingo
On 08/11/14 00:00, Ingo wrote:
The big difference is in how humans and machines make the choices. Machines can create any frequency and combine all kinds of frequencies in millions of ways. Does that make it music?
Most music is made by humans using machines ... machines made of wood, metal, leather or, in the case of pd or synthesisers ... silicon chips. Singing and clapping are obvious exceptions here.
Some music has the human choice limited to the original design of the instrument ... wind chimes and algorithmic music come to mind ... you may consider these a lesser form of music, I guess; it depends ... does a composer make music, or is it only music if a musician actually plays it? I'll stick with the idea that if it is composed to be music, it is ... but that doesn't make it good music, which is almost pointless to define except as a personal preference.
Simon
Absolutely!
On Fri, Nov 7, 2014 at 12:27 PM, anders.vinjar@bek.no wrote:
"R" == Rich Eakin rtepub@gmail.com writes:
R> What I'd like to find is more how the ADSR curves look for actual R> musical phrases.
Hi Rich.
Bill Schottstaedt set up a wonderful hands-on about extracting envelopes (amp, pitch) from recordings of birdsongs, getting great results in the synthesis:
https://ccrma.stanford.edu/software/snd/snd/sndscm.html#animalsdoc
I've used the same approach on recordings of more complex natural environments, but the method would be useful for recordings of notes or phrases played on musical instruments.
Thanks! It's nice to see how this is done with CLM, and a good reminder of how involved the process is before jumping into it. :) And it's even more involved if you can't isolate the target source beforehand (as is the case for most of the sounds I'd like to transcribe).
I searched the IRCAM archives, though unfortunately searching for 'envelope' tends to turn up things related to spectral envelopes, which is not what I'm after at the moment.
I'd also like to ask that if others wish to debate the philosophical merits of computer versus human music performance, please start a new thread. :)
cheers, Rich