Date: Sat, 12 Jul 2008 13:56:54 +0200
From: Damian Stewart <damian@frey.co.nz>
Subject: [PD] how to avoid (most/many/some) readsf~ dropouts
To: PD-List <pd-list@iem.at>
hey,
one thing i've noticed using [readsf~] in my own live-performance sets is that doing this
[symbol blahblahblah.aif]
|
[open $1, bang(
|
[readsf~]
sometimes causes dropouts. but if you go
[symbol blahblahblah.aif]
|
[t b a]
|     \
[del 50]  [open $1(
|        /
[readsf~]

then you remove (all/many/some) of the dropouts. i haven't extensively tested this, but anecdotal evidence seems to suggest that it works.
I think this is because the former method may not let the buffer fill before you start playing. I usually don't like the second method, either, because of the 50 ms delay (which isn't much, but I'm finicky about that kind of thing). Depending on the setup, I usually prefer something like the following:
[loadbang]
|
| [r open]
| /
[open file.ext (
|
[0 (    [1 (  |
 |       |    |
[t b f ] |    |
 |   \___|____|
 |       |
 |    [readsf~ ]
 |     |       \
[s open]       [s open]
I hope things line up okay there... at any rate, you load the buffer at the beginning, or if your piece/performance has a master clock, a couple of seconds before you need it. Then for rehearsal, if you need to stop the file you can use a trigger to reload it immediately. Also, if you need to play the file again, you can send the "open" message when the file is done playing. This makes the whole process a little more "front-loaded," so that the soundfile is always "open" no matter what. My intuition is that it's more robust than other schemes I've tried. I haven't tested it very hard, though, as I've only ever needed up to 9-10 simultaneous 96k files... maybe it would be neat to find some old, slow hardware to see how far different methods can be pushed. =o)
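Roughly, in patch form (paste into a file; "file.ext" is a placeholder, and the mono [readsf~]/[dac~] are just there to make the sketch self-contained):

#N canvas 0 0 500 360 10;
#X text 20 10 sketch of the front-loaded open/reload scheme. file.ext is a placeholder;
#X obj 20 60 loadbang;
#X obj 120 60 r open;
#X msg 20 100 open file.ext;
#X msg 150 150 1;
#X msg 20 150 0;
#X obj 20 190 t b f;
#X obj 150 190 readsf~;
#X obj 20 240 s open;
#X obj 250 240 s open;
#X obj 150 290 dac~;
#X connect 1 0 3 0;
#X connect 2 0 3 0;
#X connect 3 0 7 0;
#X connect 4 0 7 0;
#X connect 5 0 6 0;
#X connect 6 0 8 0;
#X connect 6 1 7 0;
#X connect 7 0 10 0;
#X connect 7 0 10 1;
#X connect 7 1 9 0;

The [t b f] stops the file first (right outlet) and then reloads it via [s open] (left outlet), and the right outlet of [readsf~] reloads it when playback finishes on its own.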
More advanced for stopping would be something that fades out the sound with a [line~] over 20 ms or so, and then sends the stop message to [readsf~] after a comparable delay -- that way you won't get an annoying transient upon stop -- same deal with a fade in if you are starting in the middle of the soundfile, but without the delay. I usually wrap the whole thing in an abstraction -- I keep three or four different ones lying around with different properties for different situations, and I'm happy to share with anyone who might find them useful.
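For example (a sketch -- the 20 ms ramp and the 25 ms of headroom before the stop are just reasonable guesses, and the [1( is what you'd send when starting playback):

#N canvas 0 0 520 360 10;
#X text 20 10 click-free stop sketch: ramp the gain to 0 in 20 ms then stop readsf~ 25 ms later. send the 1 when you start playback;
#X obj 20 70 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1;
#X obj 20 100 t b b;
#X msg 90 130 0 20;
#X obj 20 160 del 25;
#X msg 20 190 0;
#X obj 200 70 readsf~;
#X obj 90 190 line~;
#X obj 200 230 *~;
#X obj 200 280 dac~;
#X msg 160 130 1;
#X connect 1 0 2 0;
#X connect 2 0 4 0;
#X connect 2 1 3 0;
#X connect 3 0 7 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 6 0 8 0;
#X connect 7 0 8 1;
#X connect 8 0 9 0;
#X connect 8 0 9 1;
#X connect 10 0 7 0;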
Thanks,
Matt
Matt Barber wrote:
[...]
Matt and Damian,
In working on a recent piece for marimba and 8-channel computer (24bit/88.2kHz), I consistently experienced intermittent clicks and audio dropouts -- even on high-end hardware running GNU/Linux. Increasing the readsf buffer and the time between file load (open) and playback (readsf) helped some, but not enough. Since the premiere last month, I've been rebuilding the abstractions for better efficiency, but am still not happy with the results -- and before the soloist can safely tour the piece, I need to work out a more robust solution.
I've attached the latest version of my "basic" playback (w/fade) patch for suggestions/comments... Unfortunately, your ASCII patch didn't line up; would you mind posting an example patch that shows your method?
Best, G
Hey Greg,
I threw together a couple of abstractions to show how I usually do it, but I'm not sure it will help with playback of multiple 8-channel files. I've never tried this before, but I wonder what would happen if you split your 8-channel files and played them simultaneously as 4-channel soundfiles... or, if it's always running on your computer, you might try storing half of your soundfiles on one disk and the other half on another -- it's hard to know whether the disk is going bonkers or readsf~ breaks. Depending on the size of your files you might bite the bullet and load them into tables...
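If you do try the split, something like this sketch should keep the two halves sample-locked, since both [readsf~]s get the same start message in the same logical time (the filenames are made up):

#N canvas 0 0 560 320 10;
#X text 20 10 sketch: play two 4-channel halves of one 8-channel file in lockstep. filenames are placeholders;
#X obj 20 60 loadbang;
#X obj 300 60 r play;
#X msg 20 100 open part_a.wav;
#X msg 160 100 open part_b.wav;
#X msg 300 100 1;
#X obj 20 150 readsf~ 4 1e+06;
#X obj 160 190 readsf~ 4 1e+06;
#X obj 20 240 dac~ 1 2 3 4 5 6 7 8;
#X connect 1 0 3 0;
#X connect 1 0 4 0;
#X connect 2 0 5 0;
#X connect 3 0 6 0;
#X connect 4 0 7 0;
#X connect 5 0 6 0;
#X connect 5 0 7 0;
#X connect 6 0 8 0;
#X connect 6 1 8 1;
#X connect 6 2 8 2;
#X connect 6 3 8 3;
#X connect 7 0 8 4;
#X connect 7 1 8 5;
#X connect 7 2 8 6;
#X connect 7 3 8 7;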
At any rate, you should have three files -- one is a generic abstraction that shows the general method. The other abstraction (playback_8ch_fade) is based on your first patch, and is generalized to play any 8-channel soundfile -- hopefully the comments in them are useful. I kept your 6<n<56 ms "humanization" in the file, as well as the throws to ch1-ch8 (I'd probably use outlet~ more often than not, but this is fine if you know how you need to set up the rest of the patch).
It uses the same general method as the generic patch, but adds some other goodies I would feel obligated to provide if I were giving the abstraction to someone to use... but maybe it's way overkill for personal use, or inside a patch where nobody's gonna see it. It has some basic type-checking and conversion, but no error printing, which I would normally do if I had the time or were building a library. It also has a small example of some dynamic patching, which might better be left out, and could maybe even be avoided in this example (nothing comes to mind instantly)... I like having the option to change things on the fly, though, so I use this kind of thing in my own patches all the time. Let me know if it's even readable. The third file (marked "revised") is an example of how to use the bigger abstraction. I haven't fully debugged it all, and lots of optimizations could be made all over the place but I think it should work as an example patch.
Of course others are welcome to comment if it sucks, or use any of it if they find it compelling. =o) Let me know if this helps out, but I've a feeling your problem is deeper than any method for using [readsf~].
Matt
On Mon, Jul 14, 2008 at 12:51 PM, Dr. Greg Wilder gregwilder@orpheusmediaresearch.com wrote:
Matt Barber wrote:
[...]
-- http://www.orpheusmediaresearch.com/ http://www.gregwilder.com/ +1 215-764-6057 (office) +1 215-205-2893 (cell)
#N canvas 65 224 902 519 10;
#X msg 86 198 1;
#X msg 129 198 0;
#X obj 175 128 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1;
#X obj 129 178 r stop;
#X obj 86 105 + 6;
#X floatatom 86 125 5 0 0 0 - - -;
#X obj 175 373 *~;
#X obj 383 147 loadbang;
#X obj 441 147 r reset_vol;
#X text 401 127 master volume;
#X obj 86 84 random 50;
#X obj 243 373 *~;
#X obj 174 395 throw~ ch1;
#X obj 242 395 throw~ ch2;
#X obj 312 373 *~;
#X obj 380 373 *~;
#X obj 311 395 throw~ ch3;
#X obj 379 395 throw~ ch4;
#X obj 574 145 delay 500;
#X msg 383 168 100;
#X msg 441 168 100;
#X obj 667 241 $1;
#X obj 395 351 vline~;
#X obj 328 351 vline~;
#X obj 258 351 vline~;
#X obj 191 351 vline~;
#X obj 86 143 delay;
#X obj 448 373 *~;
#X obj 516 373 *~;
#X obj 584 373 *~;
#X obj 574 224 dbtorms;
#X floatatom 574 185 5 0 0 0 - - -;
#X obj 574 203 * 1;
#X obj 652 373 *~;
#X obj 667 351 vline~;
#X obj 600 351 vline~;
#X obj 531 351 vline~;
#X obj 464 351 vline~;
#X text 599 204 (vol%);
#X obj 447 395 throw~ ch5;
#X obj 515 395 throw~ ch6;
#X obj 583 395 throw~ ch7;
#X obj 651 395 throw~ ch8;
#X msg 175 151 open 8ch_acheron_blasts.wav;
#X msg 574 165 0;
#X msg 667 262 $1 3500;
#X obj 667 283 unpack f f;
#X obj 724 304 + 1025;
#X text 719 262 fade time in ms;
#X floatatom 724 325 5 0 0 0 - - -;
#X obj 441 189 delay;
#X obj 441 209 bng 15 250 50 0 empty empty empty 0 -6 0 10 -262144 -1 -1;
#X obj 175 67 r acheron_play;
#X obj 574 124 r acheron_fade;
#X obj 175 215 readsf~ 8 1e+06;
#X connect 0 0 54 0;
#X connect 1 0 54 0;
#X connect 2 0 43 0;
#X connect 2 0 26 0;
#X connect 3 0 1 0;
#X connect 4 0 5 0;
#X connect 5 0 26 0;
#X connect 6 0 12 0;
#X connect 7 0 19 0;
#X connect 8 0 20 0;
#X connect 10 0 4 0;
#X connect 11 0 13 0;
#X connect 14 0 16 0;
#X connect 15 0 17 0;
#X connect 18 0 44 0;
#X connect 19 0 31 0;
#X connect 20 0 31 0;
#X connect 21 0 45 0;
#X connect 22 0 15 1;
#X connect 23 0 14 1;
#X connect 24 0 11 1;
#X connect 25 0 6 1;
#X connect 26 0 0 0;
#X connect 27 0 39 0;
#X connect 28 0 40 0;
#X connect 29 0 41 0;
#X connect 30 0 21 0;
#X connect 31 0 32 0;
#X connect 32 0 30 0;
#X connect 33 0 42 0;
#X connect 34 0 33 1;
#X connect 35 0 29 1;
#X connect 36 0 28 1;
#X connect 37 0 27 1;
#X connect 43 0 54 0;
#X connect 44 0 31 0;
#X connect 45 0 22 0;
#X connect 45 0 23 0;
#X connect 45 0 24 0;
#X connect 45 0 25 0;
#X connect 45 0 37 0;
#X connect 45 0 36 0;
#X connect 45 0 35 0;
#X connect 45 0 34 0;
#X connect 45 0 46 0;
#X connect 46 1 47 0;
#X connect 47 0 49 0;
#X connect 49 0 50 1;
#X connect 50 0 20 0;
#X connect 50 0 51 0;
#X connect 51 0 1 0;
#X connect 52 0 10 0;
#X connect 52 0 2 0;
#X connect 53 0 18 0;
#X connect 53 0 50 0;
#X connect 54 0 6 0;
#X connect 54 1 11 0;
#X connect 54 2 14 0;
#X connect 54 3 15 0;
#X connect 54 4 27 0;
#X connect 54 5 28 0;
#X connect 54 6 29 0;
#X connect 54 7 33 0;
Matt Barber wrote:
Hey Greg,
I wonder what would happen if you split your 8-channel files and played them simultaneously as 4-channel soundfiles...
Yeah, this is how the piece was originally created -- the idea being, if all "mixing" happened in real time, it would be easier to effect musical changes during rehearsal.
or, if it's always running on your computer, you might try storing half of your soundfiles on one disk and the other half on another -- it's hard to know whether the disk is going bonkers or readsf~ breaks.
Good thinking. The soloist is considering an upgrade to dual 10,000rpm SCSI drives -- a wise move, given that mine will not be the only work in his repertoire requiring this level of computer performance.
Depending on the size of your files you might bite the bullet and load them into tables...
I tried this, but it only works in a few cases since the files are generally too large for tables.
At any rate, you should have three files -- one is a generic abstraction that shows the general method. The other abstraction (playback_8ch_fade) is based on your first patch, and is generalized to play any 8-channel soundfile -- hopefully the comments in them are useful.
This is *extremely* helpful. My previous approach involved the creation of unique abstractions for slightly modified instances. Your "all in one" is far more flexible and better suited to real world use.
I kept your 6<n<56 ms "humanization" in the file, as well as the throws to ch1-ch8 (I'd probably use outlet~ more often than not, but this is fine if you know how you need to set up the rest of the patch).
Yup, you hit the "humanization" nail on the head. In certain sections of the piece, the computer builds complete musical gestures in real time. Depending on where it is in the score, the patch chooses the appropriate soundfile type and selects a specific file to trigger from a predetermined list wherein all files are similar, but never identical. (For obvious reasons, subtle volume and timbre changes are important when attempting to create an organic and richly varied performance environment). Randomizing start times between 6 and 56 ms seems to provide a natural "ensemble" feel in these instances.
It uses the same general method as the generic patch, but adds some other goodies I would feel obligated to provide if I were giving the abstraction to someone to use... but maybe it's way overkill for personal use, or inside a patch where nobody's gonna see it. It has some basic type-checking and conversion, but no error printing, which I would normally do if I had the time or were building a library. It also has a small example of some dynamic patching, which might better be left out, and could maybe even be avoided in this example (nothing comes to mind instantly)... I like having the option to change things on the fly, though, so I use this kind of thing in my own patches all the time. Let me know if it's even readable. The third file (marked "revised") is an example of how to use the bigger abstraction. I haven't fully debugged it all, and lots of optimizations could be made all over the place but I think it should work as an example patch.
Fantastic. A wonderful abstraction tutorial. I'll be sure to post my "final solution" to the list once I find what works best.
Of course others are welcome to comment if it sucks, or use any of it if they find it compelling. =o) Let me know if this helps out, but I've a feeling your problem is deeper than any method for using [readsf~].
Agreed. And I'm surprised there aren't others running into this problem with 8-channel 88.2/24 interactive patches like this.
Of course, the 8-channel environment is useful for its ambisonic and other spatialization potential, and one solution that works well (for certain musical situations) is to spatialize monophonic soundfiles in real time. This is a great solution for reducing performance demand on the hard drive, but quickly becomes expensive in CPU cycles...
Best, G
Dr. Greg Wilder wrote:
Matt Barber wrote:
Hey Greg,
I wonder what would happen if you split your 8-channel files and played them simultaneously as 4-channel soundfiles...
how long are the files? if they're not so long, and you've got a computer with a lot of memory, you might be able to load them all to RAM. if they're, say, 10 minutes each, and you have 8, then that's 10*60*44100 samples * 8 channels * 4 bytes (32-bit sample data) = roughly 810MB of data. hrm. perhaps not..
(you can use [soundfiler]'s -maxsize argument to allow it to load enormous files without any problems - just say -maxsize <some enormous number>)
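for example (the table name is arbitrary, and for a multichannel file you'd give [soundfiler] one table per channel):

#N canvas 0 0 520 260 10;
#X text 20 10 load a whole soundfile into a table. -resize grows the table and -maxsize lifts the default size cap;
#X obj 20 70 loadbang;
#X msg 20 100 read -resize -maxsize 1e+09 blahblahblah.aif bigtable;
#X obj 20 140 soundfiler;
#X obj 20 180 table bigtable;
#X connect 1 0 2 0;
#X connect 2 0 3 0;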
Hallo, Damian Stewart hat gesagt: // Damian Stewart wrote:
[...]
then that's 10*60*44100 samples * 8 channels * 4 bytes (32-bit sample data) = roughly 810MB of data. hrm. perhaps not..
And as Greg uses an 88.2 kHz samplerate, it's twice as much. ;)
Frank Barknecht _ ______footils.org__
On Tue, 2008-07-15 at 19:09 +0200, Frank Barknecht wrote:
[...]
there were several discussions on this list about the precision problem when using [tabread*] with big tables (above 16777216 samples not every sample can be addressed anymore -- 16777216 is 2^24, the integer limit of pd's single-precision floats). i wonder now whether this applies as well to [tabplay~], or whether that one doesn't use indexing at all and just plays the samples consecutively?
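you can see the rounding with two objects (pd's floats are single-precision, so 16777217 gets stored as 16777216):

#N canvas 0 0 460 240 10;
#X text 20 10 single-precision demo: 16777217 rounds to 16777216 so this prints 0;
#X obj 20 60 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1;
#X msg 20 90 16777217;
#X obj 20 120 - 16777216;
#X obj 20 150 print precision;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;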
as an alternative: assuming the files were in 16 bit, wouldn't it make more sense to create a ramdisk, store all the wav-files there, and read them with [readsf~]? that way you would only need half the memory, because the files stay 16 bit on the ramdisk instead of being expanded to 32-bit floats in tables.
roman
Hallo, Roman Haefeli hat gesagt: // Roman Haefeli wrote:
as an alternative: assuming the files were in 16 bit, wouldn't it make more sense to create a ramdisk and store all wav-files there in order to read them with [readsf~]? this way you would save half of the amount, because they are stored as 16 bit instead of 32 bit then.
This might be useful anyway, as harddisk access can be the number one latency killer.
Frank Barknecht _ ______footils.org__
Of course, the 8-channel environment is useful for its ambisonic and other spatialization potential, and one solution that works well (for certain musical situations) is to spatialize monophonic soundfiles in real time. This is a great solution for reducing performance demand on the hard drive, but quickly becomes expensive in CPU cycles...
Right. You can split the difference, though, if you're using ambisonics, provided you're using B-format (wxyz). You can do all your ambisonic encoding and room simulation ahead of time, and then do the decoding in Pd. This way you'd only be reading four channels at a time, and the conversion from B-format to 8 channels is a fairly inexpensive set of multiplies and adds (a little more expensive if it's a "cube" rather than an "octagon" array, I think, since you could discard the "Z" harmonic with the octagon; in Pd a cube decode could be on the order of 24 +'s and as few as 4 *'s, most of the adds taking place in connections as the vectors are automatically added) -- you could easily make an abstraction to just put on the end right before you send it to [dac~], since B-format streams should mix linearly. I'm sure there are externals which could do this more efficiently than an abstraction (loath as I am to use externals when there's an easy abstraction solution). In this setup, normalization becomes a little harder, though.
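To make the arithmetic concrete, here's a sketch of one speaker's worth of first-order decode, for an octagon speaker at 45 degrees -- the 0.707 on W is the usual first-order convention, and cos(45) = sin(45) = 0.707 is a coincidence of the angle. You'd want one of these per speaker, with the summing happening for free in the connections:

#N canvas 0 0 560 280 10;
#X text 20 10 one-speaker B-format decode sketch: out = 0.707*W + 0.707*X + 0.707*Y with Z discarded for the octagon;
#X obj 20 70 inlet~;
#X obj 120 70 inlet~;
#X obj 220 70 inlet~;
#X obj 20 130 *~ 0.707;
#X obj 120 130 *~ 0.707;
#X obj 220 130 *~ 0.707;
#X obj 20 200 outlet~;
#X connect 1 0 4 0;
#X connect 2 0 5 0;
#X connect 3 0 6 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 7 0;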
It would also be useful if you later wanted to do some simple ambisonic "panning" of the solo marimba throughout the array - you'd have half the architecture you'd need for it, and B-format encoding of two or three streams is not gonna break the bank (unless you were doing some kind of full-on room simulation on top of it).
The point is moot if you're using 2nd-order ambisonics, though, or if you've already spent a lot of time mixing and normalizing.
As an aside, for pieces with different sections and patches with modularized processes, it might be a good idea to use the fade-in and fade-out in conjunction with [switch~] for expensive processes, so that you're only burning cycles when the subpatch or abstraction for that section or process is being used -- but you have to be careful when there are delays involved since, iirc, delay lines maintain their state when they're shut off. You also have to insulate the subpatch's [line~]s and such from being triggered while it's off -- but it's easy enough to use [spigot]s to keep the patch from receiving any messages at all.
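Schematically, something like this -- the [lop~ 800] is just a stand-in for whatever expensive process you have, and the rightmost inlet's 1/0 both flips the [switch~] and opens/closes the [spigot]:

#N canvas 0 0 520 320 10;
#X text 20 10 gate sketch: a 1/0 on the right inlet switches DSP for this canvas and gates control messages. lop~ 800 stands in for the expensive process;
#X obj 20 80 inlet~;
#X obj 160 80 inlet;
#X obj 300 80 inlet;
#X obj 300 130 switch~;
#X obj 160 130 spigot;
#X obj 160 170 line~;
#X obj 20 130 lop~ 800;
#X obj 20 220 *~;
#X obj 20 270 outlet~;
#X connect 1 0 7 0;
#X connect 2 0 5 0;
#X connect 3 0 4 0;
#X connect 3 0 5 1;
#X connect 5 0 6 0;
#X connect 6 0 8 1;
#X connect 7 0 8 0;
#X connect 8 0 9 0;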
Matt
Matt Barber wrote:
[...]
Great points all around. Of course I spent a great deal of time considering a range of similar approaches before I began work on the project. The commission dictated a high-resolution, 8-channel cube array, and the decision to avoid B-format came down to the fact that I wasn't happy with the reverberation quality produced by the available csound ambisonic and spatialization algorithms.
I knew I was giving up a certain amount of flexibility by directly rendering the files (using csound and a custom java-based preprocessor), but it seems I didn't quite anticipate the heavy demand the files would put on the playback system.
G