hi all,
messaging functions called by dsp routines are usually implemented using clocks with a delay of 0. this only works as long as the clock callbacks are executed after each dsp tick. this causes problems for my scheduler, which uses the callback from the audio api (jack / portaudio) to run the dsp tick. running all the clock callbacks from the audio callback is not a good idea, since it's running from a separate thread and possibly in realtime (e.g. gem would crash).
thus i'd propose the following api extension: sys_postdsp_callback(t_method fn);
the function fn would be called immediately after dsp_tick() is executed.
this would work for both miller's synchronous and my asynchronous scheduler and is probably more elegant than the current solution.
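to make this concrete, here's a rough sketch of what an external could look like (sys_postdsp_callback is only the proposed extension, it doesn't exist in m_pd.h; the "foo" object is made up for illustration):

/* sketch only: the usual zero-delay clock idiom vs. the proposed hook.
 * sys_postdsp_callback() is the extension proposed above, not an
 * existing m_pd.h function; "foo" is a made-up example external. */
#include "m_pd.h"

static t_class *foo_class;      /* class setup omitted for brevity */

typedef struct _foo {
    t_object x_obj;
    t_clock *x_clock;
    t_float x_peak;
} t_foo;

static void foo_tick(t_foo *x)
{
    /* message-domain work triggered from the dsp routine */
    outlet_float(x->x_obj.ob_outlet, x->x_peak);
}

static t_int *foo_perform(t_int *w)
{
    t_foo *x = (t_foo *)(w[1]);
    /* ... analyse the signal block here, update x->x_peak ... */

    /* current idiom: zero-delay clock, hoping the clock callbacks
       run right after this dsp tick */
    clock_delay(x->x_clock, 0);

    /* proposed idiom: run foo_tick immediately after dsp_tick(),
       no matter how the scheduler is driven */
    /* sys_postdsp_callback((t_method)foo_tick); */

    return (w + 3);
}

static void *foo_new(void)
{
    t_foo *x = (t_foo *)pd_new(foo_class);
    x->x_clock = clock_new(x, (t_method)foo_tick);
    outlet_new(&x->x_obj, &s_float);
    return (x);
}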
drawback: external developers would have to change their externals to improve the dsp/message interaction granularity for my asynchronous scheduler...
of course, there are ugly workarounds for this (i just implemented one), but i'd prefer to have a clean solution ...
i would be interested in other opinions ... especially in miller's
cheers .... tim
Hi Tim,
Here's another idea I'm thinking about that might be useful in this situation... I'm thinking of having a version of switch~, which never runs DSP except when you 'bang' it. This way, a block of DSP analysis or anything else could be included within a message; in fact, several of them could, and they could be interspersed with other message processing.
My own interest in doing this is that it would allow using tilde objects to treat video. I'd make special tilde objects to read lines out of a pdp buffer and write them back in; then just insert any DSP chain into a pdp chain and make computer music with the pixels...
cheers Miller
hi miller ...
well, i think these are two different issues ... after doing some tests during the last few days, i think it's more a problem of having both high and low priority messaging ... thus, only realtime-safe functions should be allowed as clock callbacks ... otherwise the control rate would be bound to the hardware buffer size in my asynchronous scheduler ... running functions that are not realtime-safe might cause jack to kick pd out of the dsp graph ... but for now, this is a user bug :-/ ... functions that are not realtime-safe should either run in a helper thread or use the idle callbacks to run at lower priority ...
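just to illustrate what i mean by the helper thread option (a plain sketch, nothing pd-specific ... the names and the single flag are made up; a real implementation would use a lock-free fifo):

/* illustration only, not pd api: keep the clock / audio context
 * realtime-safe by only setting a flag there and letting a
 * low-priority helper thread do the heavy work later. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int work_pending = 0;
static atomic_int running = 1;

/* called from the clock callback / audio context: must not block */
static void request_work(void)
{
    atomic_store(&work_pending, 1);
}

/* low-priority helper thread: may allocate, do file i/o, etc. */
static void *helper(void *arg)
{
    (void)arg;
    while (atomic_load(&running)) {
        if (atomic_exchange(&work_pending, 0))
            printf("doing the non-realtime-safe part here\n");
        usleep(1000);   /* or wait on a condition variable */
    }
    return 0;
}

int main(void)
{
    pthread_t th;
    pthread_create(&th, 0, helper, 0);
    request_work();     /* what the realtime context would do */
    usleep(10000);
    atomic_store(&running, 0);
    pthread_join(th, 0);
    return 0;
}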
Here's another idea I'm thinking about that might be useful in this situation... I'm thinking of having a version of switch~, which never runs DSP except when you 'bang' it. This way, a block of DSP analysis or anything else could be included within a message; in fact, several of them could, and they could be interspersed with other message processing.
i like this idea ... it would make it easier to adapt pd to apply a dsp graph to a buffer, so it would be possible to do non-realtime stuff like with vasp ... one could even write a wave editor with pd :-)
cheers ... tim
Miller Puckette wrote:
My own interest in doing this is that it would allow using tilde objects to treat video. I'd make special tilde objects to read lines out of a pdp buffer and write them back in; then just insert any DSP chain into a pdp chain and make computer music with the pixels...
cheers Miller
I have been converting images to sound with GridFlow; it's quite fun:
grid in | [#cast float32] | [# / 255] | [# - 0.5] | [#export_list] | [listprepend 0] | [send $0-mysound]
[table $0-mysound 128]
with various logic to extract different regions/channels of images to send to the table.
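For anyone who wants to see what that chain does numerically, here is the same scaling written out in plain C (just an illustration of the arithmetic, nothing to do with GridFlow internals; the leading 0 from [listprepend 0] is the index the table starts writing at):

/* same scaling as the chain above: each 0..255 pixel value becomes
 * a -0.5 .. 0.5 sample, prefixed with the table write index. */
#include <stdio.h>

int main(void)
{
    unsigned char pixels[8] = {0, 32, 64, 96, 128, 160, 192, 255};
    float samples[8];
    int startindex = 0;                 /* the [listprepend 0] part */

    for (int i = 0; i < 8; i++)
        samples[i] = pixels[i] / 255.0f - 0.5f;  /* [# / 255], [# - 0.5] */

    printf("write at index %d:", startindex);
    for (int i = 0; i < 8; i++)
        printf(" %.3f", samples[i]);
    printf("\n");
    return 0;
}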
The only problem is that I have to set the audio buffer size very high (370ms) to avoid clicks when each frame is generated. I don't know if using [any] and [delay] to stagger the computation through time would work - ie, not crash ;) - I guess I'll have to try it.
I did a live set for http://leplacard.org on Thursday using video->sound conversion, which can be downloaded (together with the Pd+Gridflow patches used in the performance) from:
http://www.archive.org/audio/audio-details-db.php?collectionid=ClaudiusMaxim...
or (same page)
http://makeashorterlink.com/?L31364F6B
A much simpler (and commented!) implementation of this technique is attached, whereas the performance patch contains quite a few other things which I haven't bothered to document. Warning: the output is loud!
#N canvas 0 6 492 450 10; #N canvas 522 6 476 479 feedbackconvolution 1; #X obj 8 183 #store; #X obj 34 261 #remap_image; #X obj 217 258 #rotate 100; #X obj 217 281 # *>>8 250; #X obj 44 112 loadbang; #X obj 45 159 # rand; #X obj 62 407 #out window; #X obj 8 11 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1 ; #X floatatom 412 161 5 0 0 0 - - -; #X floatatom 412 181 5 0 0 0 - - -; #X obj 305 234 #pack 2; #X floatatom 322 283 5 0 0 0 - - -; #X floatatom 323 303 5 0 0 0 - - -; #X obj 34 39 tgl 15 0 empty empty empty 0 -6 0 8 -262144 -1 -1 0 1 ; #X obj 62 432 fps; #X floatatom 62 457 5 0 0 0 - - -; #X obj 34 208 #convolve; #X obj 8 459 outlet; #X obj 34 230 # >> 1; #X msg 92 186 ( 1 2 # 1 ); #X msg 45 136 ( 128 128 3 # 255 ); #X obj 34 82 metro 500; #X obj 81 290 # - 128; #X obj 82 313 # *>>8 260; #X floatatom 161 349 5 0 0 0 - - -; #X floatatom 160 367 5 0 0 0 - - -; #X obj 81 334 # + 128; #X obj 217 235 # - ( 0 0 ); #X obj 217 308 # + ( 0 0 ); #X obj 81 358 # % 256; #X floatatom 92 61 5 0 0 0 - - -; #X text 60 39 <-- automatically generate frames; #X text 136 60 <-- rate control; #X text 144 113 <-- randomize image; #X obj 121 113 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1; #X text 151 406 <-- display; #X text 114 456 <-- actual frame rate; #X text 204 348 <-- colour amplification; #X text 111 209 <-- convolution is blurring in this case; #X text 367 302 <-- scaling; #X text 369 281 <-- rotation; #X text 200 171 center of transformation -->; #X text 36 12 <-- generate a frame; #X text 204 369 <-- colour amplification offset; #X text 145 263 transform; #X connect 0 0 16 0; #X connect 0 0 17 0; #X connect 1 0 6 0; #X connect 1 0 22 0; #X connect 1 1 27 0; #X connect 2 0 3 0; #X connect 3 0 28 0; #X connect 4 0 20 0; #X connect 4 0 19 0; #X connect 5 0 0 1; #X connect 6 0 14 0; #X connect 7 0 0 0; #X connect 8 0 10 0; #X connect 9 0 10 1; #X connect 10 0 27 1; #X connect 10 0 28 1; #X connect 11 0 2 1; #X connect 12 0 3 1; #X connect 13 0 21 0; #X connect 14 0 15 0; #X connect 16 0 18 0; #X connect 18 0 1 0; #X connect 19 0 16 1; #X connect 20 0 5 0; #X connect 21 0 0 0; #X connect 22 0 23 0; #X connect 23 0 26 0; #X connect 24 0 23 1; #X connect 25 0 22 1; #X connect 25 0 26 1; #X connect 26 0 29 0; #X connect 27 0 2 0; #X connect 28 0 1 1; #X connect 29 0 0 1; #X connect 30 0 21 1; #X connect 34 0 20 0; #X restore 15 143 pd feedbackconvolution; #N canvas 481 538 450 640 gridtotables 1; #X obj 20 11 inlet grid; #X obj 20 310 #store; #X obj 20 537 shunt 3; #X obj 20 289 fork; #X obj 20 457 #export_list; #X obj 20 480 listprepend 0; #X obj 20 607 s $0-0; #X obj 42 587 s $0-1; #X obj 64 567 s $0-2; #X obj 20 35 #transpose 0 2; #X msg 20 267 ( 0 $1 ) , ( 1 $1 ) , ( 2 $1 ); #X obj 20 343 #cast float32; #X obj 20 366 # / ( float32 # 255 ); #X obj 21 388 # - ( float32 # 0.5 ); #X obj 21 408 # * ( float32 # 2 ); #X obj 63 514 #unpack 2; #X obj 64 183 float 0; #X obj 137 207 + 1; #X obj 73 85 tgl 15 0 empty empty empty 0 -6 0 8 -262144 -1 -1 0 1 ; #X floatatom 124 105 5 0 0 0 - - -; #X obj 137 229 mod 128; #X obj 73 132 metro 50; #X text 166 103 <-- rate control for audio sweep; #X text 95 86 <-- audio sweep enable/disable; #X text 191 197 <-- iterate through scanlines; #X text 216 370 <-- scale image to audio; #X text 135 529 <-- send a line of each colour to a table; #X text 137 35 <-- convert packed-pixel to planar; #X text 229 273 <-- extract a line of each; #X text 258 288 channel from the image; #X text 127 479 <-- first index in list sent to table is; #X text 156 493 
index to start writing at; #X connect 0 0 9 0; #X connect 1 0 11 0; #X connect 2 0 6 0; #X connect 2 1 7 0; #X connect 2 2 8 0; #X connect 3 0 1 0; #X connect 3 1 15 0; #X connect 4 0 5 0; #X connect 5 0 2 0; #X connect 9 0 1 1; #X connect 10 0 3 0; #X connect 11 0 12 0; #X connect 12 0 13 0; #X connect 13 0 14 0; #X connect 14 0 4 0; #X connect 15 0 2 1; #X connect 16 0 17 0; #X connect 16 0 10 0; #X connect 17 0 20 0; #X connect 18 0 21 0; #X connect 19 0 21 1; #X connect 20 0 16 1; #X connect 21 0 16 0; #X restore 83 183 pd gridtotables; #N canvas 0 538 450 348 tabletoaudio 1; #X obj 104 302 dac~; #X obj 104 223 hip~ 5; #X obj 209 218 hip~ 5; #X obj 211 174 +~; #X obj 105 178 -~; #X obj 41 136 tabread4~ $0-0; #X obj 175 135 tabread4~ $0-1; #X obj 282 135 tabread4~ $0-2; #X obj 125 51 phasor~ 440; #X obj 124 73 *~ 128; #X floatatom 125 -10 5 0 0 0 - - -; #X obj 125 12 mtof; #X obj 96 110 tabread4~ $0-0; #X obj 204 110 tabread4~ $0-1; #X obj 323 115 tabread4~ $0-2; #X obj 221 54 phasor~ 440; #X obj 220 76 *~ 128; #X floatatom 215 -12 5 0 0 0 - - -; #X obj 221 15 mtof; #X obj 105 199 /~ 2; #X obj 211 195 /~ 2; #X obj 195 301 writesf~ 2; #X msg 324 267 stop; #X obj 325 167 inlet record; #X obj 325 197 select 1 0; #X msg 261 244 open out.wav , start; #X text 4 69 play tables -->; #X text 10 194 mixer -->; #X text 7 224 DC cut -->; #X text 10 300 output -->; #X text 9 -9 MIDI notes -->; #X connect 1 0 0 0; #X connect 1 0 21 0; #X connect 2 0 0 1; #X connect 2 0 21 1; #X connect 3 0 20 0; #X connect 4 0 19 0; #X connect 5 0 4 0; #X connect 6 0 3 0; #X connect 6 0 4 1; #X connect 7 0 3 1; #X connect 8 0 9 0; #X connect 9 0 5 0; #X connect 9 0 6 0; #X connect 9 0 7 0; #X connect 10 0 11 0; #X connect 11 0 8 0; #X connect 12 0 3 0; #X connect 13 0 3 1; #X connect 13 0 4 0; #X connect 14 0 4 1; #X connect 15 0 16 0; #X connect 16 0 12 0; #X connect 16 0 13 0; #X connect 16 0 14 0; #X connect 17 0 18 0; #X connect 18 0 15 0; #X connect 19 0 1 0; #X connect 20 0 2 0; #X connect 22 0 21 0; #X connect 23 0 24 0; #X connect 24 0 25 0; #X connect 24 1 22 0; #X connect 25 0 21 0; #X restore 69 211 pd tabletoaudio; #X obj 14 293 table $0-0 128; #X obj 14 313 table $0-1 128; #X obj 14 332 table $0-2 128; #X obj 68 78 tgl 15 0 empty empty rec 0 -6 0 8 -258699 -1 -1 0 1; #N canvas 482 120 264 620 saveimagesequence 0; #X obj 15 45 fork; #X obj 38 70 t b; #X obj 15 9 inlet grid; #X obj 38 95 spigot; #X msg 179 67 0; #X obj 181 46 loadbang; #X obj 113 10 inlet record; #X obj 38 116 float 0; #X obj 103 141 + 1; #X obj 38 169 makefilename img%06d.jpg; #X obj 38 193 pack s; #X obj 9 143 spigot; #X msg 38 214 open $1; #X obj 9 254 #out; #X connect 0 0 11 0; #X connect 0 1 1 0; #X connect 1 0 3 0; #X connect 2 0 0 0; #X connect 3 0 7 0; #X connect 4 0 3 1; #X connect 4 0 11 1; #X connect 4 0 7 1; #X connect 5 0 4 0; #X connect 6 0 3 1; #X connect 6 0 11 1; #X connect 7 0 8 0; #X connect 7 0 9 0; #X connect 8 0 7 1; #X connect 9 0 10 0; #X connect 10 0 12 0; #X connect 11 0 13 0; #X connect 12 0 13 0; #X restore 15 242 pd saveimagesequence; #X text 99 77 <-- DON'T CLICK until you have inspected the; #X text 126 92 rest of the patch and are sure it won't; #X text 123 108 overwrite any important files!!!; #X text 7 8 ClaudiusMaximus - Copy Me - Scanned (version 0.2); #X text 7 24 http://claudiusmaximus.tk; #X text 6 40 Requires: Gridflow (http://gridflow.ca); #X text 133 312 <-- image data gets copied to these tables; #X text 200 210 <-- generate audio from tables; #X text 202 183 <-- copy image to tables; #X text 205 
142 <-- generate images; #X text 201 241 <-- save video as a sequence of jpegs; #X text 17 373 TIP: increase the audio buffer to avoid dropouts , the lack of threading in Pd means that all the work for each frame is done at once while blocking audio. I set mine to 500ms or so when using this patch.; #X connect 0 0 1 0; #X connect 0 0 7 0; #X connect 6 0 2 0; #X connect 6 0 7 1;
On Sun, 17 Jul 2005, ClaudiusMaximus wrote:
The only problem is that I have to set the audio buffer size very high (370ms) to avoid clicks when each frame is generated. I don't know if using [any] and [delay] to stagger the computation through time would work - ie, not crash ;) - I guess I'll have to try it.
It will crash because, in GridFlow, a grid message expires just after the sending of the message returns. At the moment it expires, the transmission starts. The transmission ends before the sender of the grid message returns. Whenever a t_clock is triggered (by [delay] or [metro] or whatever), it's waaaay too late.
The streaming in GridFlow was not designed to allow smaller latencies. It was designed only to make it easy for a chain of objects to develop some kind of cache affinity. That was in the original design (April 2001 or so). Maybe it could change in the near future, but then, it wouldn't magically solve everything.
Individual object classes may be guilty of further problems. For example, all image decoders supported by [#in] decode the whole frame at once, and do so in the main thread (GF doesn't use threads).