Thank you, Marius.
Do I need to use GEM? I'm not doing anything with video or graphics; I'm working only with audio. What I'd like to do in a first stage is to send a set of data (harmonic peaks from live signals) to the GPU, do some calculations there, then read the results back into the program and proceed accordingly. In a second stage, the FFT and the spectral analysis could be performed there too. Chris mentioned that latency might be an issue since reading data from the GPU is slow. I don't know much about it, but I think it shouldn't be, since many cards can (and in fact must) use main memory, so reading from memory should be as fast as it is when I read it with the CPU.
Also, I'm working on Mac OS 10.5, using Miller's version of Pd. Have you had any problems with Leopard and your patches?
Thank you again, and I'll read the patches you post to see if I can get more ideas from them...
hi,
no, I don't know how to use GLSL for audio. I thought you were talking about graphical things. For audio you would somehow have to copy a buffer of audio data to the graphics card (I don't know an object that could do that, but maybe there is one?...), then do your computations, and then read it back into RAM and pass it to the soundcard. I also don't see how you would time and synchronize your graphics card with the soundcard.
It is true that the GPU is unused most of the time and could be used to do fast calculations, but the main output for graphics cards is the screen, and reading back into RAM can be slow (of course "slow" is a relative term). I also see a problem in the different formats for audio and images: most graphics cards are optimized for vec4 processing (4 parallel color channels), and audio...
OTOH I know that Max/MSP has some shaders that do audio computation. You could have a look at that and then maybe know how to do it in Pd, too. Although I think that is all related to Jitter functionality, and Pd can't really do that.
marius.
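To make that round trip concrete, here is a rough sketch in plain C/OpenGL of what "copy a buffer to the card, compute, read it back" involves. This is not an existing Pd object; the 8x8 block size, the function name and the use of GLEW are just assumptions for the example:

/* minimal sketch of the upload/compute/readback round trip:
   one block of samples packed into an 8x8 RGBA float texture,
   pushed through whatever fragment shader is currently bound,
   then read back.  assumes a GL context, the texture and the
   shader are already set up elsewhere; error handling omitted. */
#include <GL/glew.h>

#define TEX_W 8
#define TEX_H 8   /* 8 * 8 * 4 channels = 256 samples per block */

void process_block_on_gpu(GLuint tex, const float *in, float *out)
{
    /* upload: CPU -> GPU */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                    GL_RGBA, GL_FLOAT, in);

    /* draw one quad so the bound fragment shader runs once per texel */
    glViewport(0, 0, TEX_W, TEX_H);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    /* readback: GPU -> CPU.  this call stalls until the GPU has
       finished rendering, which is where the latency worry comes from;
       and with an ordinary 8-bit framebuffer the values also come back
       quantized rather than as full floats */
    glReadPixels(0, 0, TEX_W, TEX_H, GL_RGBA, GL_FLOAT, out);
}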
Hallo!
Well, there are already quite a few papers and software packages for audio processing on the GPU (see e.g. http://www.gpgpu.org/cgi-bin/blosxom.cgi/Audio%20and%20Signal%20Processing/i...).
But AFAIK nobody has done this with Pd up to now ...
LG Georg
Max/MSP/Jitter has bridge objects that convert from audio rate to matrices and back, which would be needed in Pd land to read back from the GPU and convert to an audio-rate signal. That's how this is done. Otherwise I have no idea how you would implement such a patch.
yo, ignore me if I am saying something absolutely stupid and unrelated, but isn't that exactly what [pix_sig2pix~] and its counterpart [pix_pix2sig~] do? Of course you would still have to [pix_texture] first in order to transfer the data to the GPU, and then afterwards [pix_snap] it to bring it back from the GPU into a pix:
audioIn~
 |
[pix_sig2pix~]
 |
[pix_texture]
 |
[doSomeStuff]
 |
[pix_snap]
 |
[pix_pix2sig~]
 |
audioOut~
From my experience, just doing all these transfers is pretty slow (especially [pix_snap], as others already mentioned).
roman
hmm, I tried it because I thought that was it, but somehow it is not working. The problem is that [pix_snap] is not working as expected: after "do some stuff" I am still on the graphics card - basically I created a texture that I can reference by ID - and [pix_snap], I don't know, does not produce any output...
[gemhead]
|
| [ID
| |
[pix_texture]
|
[pix_snap]
|
[pix_pix2sig~ 8 8]
| | |
[dac~ 1 2 3]
marius.
well, [pix_snap] will create a snapshot of the render buffer, not of the current texture. You need to display the texture by putting it onto a geo...
mfa.dr IOhannes
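In raw OpenGL terms, the distinction is roughly the following: a snapshot reads back the framebuffer, so the texture has to be drawn onto some geometry first, whereas the texture object itself would be read with a different call. This is only an illustration of the idea, not what [pix_snap] actually does internally:

/* sketch of the two read-back paths (assumes a current GL context) */
#include <GL/glew.h>

/* a framebuffer snapshot: you get whatever was drawn into the current
   read buffer.  if the texture was never put onto any geometry and
   rendered, none of its data ends up here. */
void snap_framebuffer(int x, int y, int w, int h, unsigned char *rgba)
{
    glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}

/* reading the texture object itself, no drawing required */
void read_texture(GLuint tex, float *rgba)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, rgba);
}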
hi,
at some point I had sound and image, but then something broke again, I don't know what. Also, the sound I had was different from what it should be (could be an int/float problem or related to something completely different). I also could not get the patch running at [block~ 64]; it took almost 200% of CPU. So if someone wants to carry this on...
And, I know, I attach the files as files, but they sometimes appear as text, don't know why.
marius.
uniform sampler2D texture;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // perform standard transform on vertex
    gl_Position = ftransform();
}
uniform sampler2D texture;
uniform float volume;

void main()
{
    vec4 B = texture2D(texture, gl_TexCoord[0].st);
    gl_FragColor = B * volume;
}
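For orientation, the [volume $1( message that the patch below sends to [pd glsl] boils down to something like the following GL calls - a sketch of the mechanism only, not Gem's actual implementation:

/* update the "volume" uniform of an already linked GLSL program
   (sketch; 'prog' is the linked program id, checks omitted) */
#include <GL/glew.h>

void set_volume(GLuint prog, float vol)
{
    GLint loc;
    glUseProgram(prog);                          /* make the program current */
    loc = glGetUniformLocation(prog, "volume");  /* find the uniform */
    if (loc >= 0)
        glUniform1f(loc, vol);                   /* set it for the next draw */
}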
#N canvas 527 25 782 708 10; #X obj 450 384 pix_texture; #X obj 364 237 bang~; #X obj 100 420 phasor~ 1000; #X obj 204 327 bng 15 250 50 0 empty empty empty 0 -6 0 8 -262144 -1 -1; #N canvas 167 101 628 566 glsl 0; #X obj 76 486 glsl_program; #X obj 139 349 pack 0 0; #X obj 159 329 t b f; #X msg 139 369 link $1 $2; #X msg 99 449 print; #X obj 76 39 inlet; #X obj 76 269 glsl_fragment; #X obj 76 149 glsl_vertex; #X obj 76 506 outlet; #X obj 126 39 inlet; #X obj 176 39 inlet; #X obj 216 39 inlet; #X obj 146 169 change; #X obj 159 289 change; #X obj 189 419 print linking; #X obj 149 389 t b a a; #X obj 99 429 delay 0; #X obj 257 140 cnv 15 300 150 empty empty empty 20 12 0 14 -104052 -66577 0; #X text 300 185 Inlet 1: GEMlist; #X text 300 205 Inlet 2: vertex shader commands; #X text 300 225 Inlet 3: fragment shader commands; #X text 300 245 Inlet 4: glsl programm commands; #X text 290 165 Inlets:; #X connect 0 0 8 0; #X connect 1 0 3 0; #X connect 2 0 1 0; #X connect 2 1 1 1; #X connect 3 0 15 0; #X connect 4 0 0 0; #X connect 5 0 7 0; #X connect 6 0 0 0; #X connect 6 1 13 0; #X connect 7 0 6 0; #X connect 7 1 12 0; #X connect 9 0 7 0; #X connect 10 0 6 0; #X connect 11 0 0 0; #X connect 12 0 1 0; #X connect 13 0 2 0; #X connect 15 0 16 0; #X connect 15 1 0 0; #X connect 15 2 14 0; #X connect 16 0 4 0; #X restore 62 391 pd glsl; #X obj 204 307 loadbang; #X msg 154 363 open 01_basic.vert; #X msg 298 363 open 01_volume.frag; #X msg 109 363 print; #X obj 62 485 pix_texture; #X msg 474 359 mode $1; #X obj 524 339 tgl 15 0 empty empty empty 17 7 0 10 -262144 -1 -1 0 1; #X obj 120 440 noise~; #X obj 538 625 dac~; #X obj 60 135 gemframebuffer; #X floatatom 524 411 5 0 0 0 - - -; #X floatatom 113 160 5 0 0 0 - - -; #X obj 410 257 t b b b; #X floatatom 208 168 5 0 0 0 - - -; #X obj 450 517 pix_snap; #X msg 449 494 snap; #X msg 486 494 0 0; #X obj 322 601 pix_info _________; #X floatatom 319 625 5 0 0 0 - - -; #X floatatom 329 645 5 0 0 0 - - -; #X msg 79 323 volume $1; #X floatatom 78 299 5 0 0 0 - - -; #X floatatom 481 409 5 0 0 0 - - -; #X msg 94 86 dim 8 8; #X floatatom 129 513 5 0 0 0 - - -; #X obj 450 450 square 4; #X msg 110 108 dim 128 128; #X msg 78 276 1; #X obj 78 257 loadbang; #X msg 519 493 8 8; #X obj 125 283 cnv 15 30 30 empty empty empty 20 12 0 14 -81432 -66577 0; #X obj 652 123 cnv 15 30 30 empty empty empty 20 12 0 14 -81432 -66577 0; #X obj 361 88 cnv 15 30 30 empty empty empty 20 12 0 14 -81432 -66577 0; #X msg 367 115 ; pd dsp $1; #X obj 367 96 tgl 15 0 empty empty empty 17 7 0 10 -262144 -1 -1 1 1; #X obj 367 150 dsp; #X floatatom 367 186 5 0 0 0 - - -; #X obj 540 244 gemwin; #X msg 589 122 buffer 1; #X msg 589 141 1 , bang; #X msg 588 182 destroy; #X msg 564 74 create; #X msg 589 162 1; #X msg 589 202 color 1 1 0; #X obj 469 583 env~; #X floatatom 470 604 5 0 0 0 - - -; #X obj 63 508 square 1; #X obj 61 185 translateXYZ 0 0 -1.01; #X obj 450 472 t b a; #X obj 451 553 pix_pix2sig~ 8 8; #X obj 449 428 rotateXYZ 0 0 0; #X obj 614 72 cnv 15 30 30 empty empty empty 20 12 0 14 -81432 -66577 0; #X text 621 77 start here.; #X text 653 125 then this; #X text 133 286 play with vol; #X text 353 70 be sure sound is on; #X msg 595 235 dimen 8 8; #X text 216 220 this seems to be too fast; #X obj 409 235 metro 100; #X obj 408 215 tgl 15 0 empty empty empty 17 7 0 10 -262144 -1 -1 1 1; #X msg 595 270 dimen 400 400; #X obj 451 312 tgl 15 0 empty empty empty 17 7 0 10 -262144 -1 -1 1 1; #X msg 583 320 reset; #X obj 87 46 tgl 15 0 empty empty empty 17 7 0 10 -262144 -1 -1 1 1 ; #X obj 61 66 gemhead 
49; #X obj 449 335 gemhead 50; #X obj 62 458 pix_sig2pix~; #X floatatom 541 375 5 0 0 0 - - -; #X text 343 29 CAN SOMEONE GET THIS WORKING???; #X connect 0 0 55 0; #X connect 0 1 15 0; #X connect 2 0 71 1; #X connect 3 0 6 0; #X connect 3 0 7 0; #X connect 4 0 71 0; #X connect 5 0 3 0; #X connect 6 0 4 1; #X connect 7 0 4 2; #X connect 8 0 4 3; #X connect 9 0 51 0; #X connect 9 1 29 0; #X connect 10 0 0 0; #X connect 11 0 10 0; #X connect 12 0 71 2; #X connect 14 0 52 0; #X connect 14 1 16 0; #X connect 14 1 0 1; #X connect 17 1 70 0; #X connect 17 2 69 0; #X connect 18 0 52 3; #X connect 19 0 54 0; #X connect 20 0 19 0; #X connect 21 0 19 1; #X connect 22 1 23 0; #X connect 22 2 24 0; #X connect 25 0 4 3; #X connect 26 0 25 0; #X connect 27 0 55 1; #X connect 28 0 14 0; #X connect 30 0 53 0; #X connect 31 0 14 0; #X connect 32 0 26 0; #X connect 33 0 32 0; #X connect 34 0 19 2; #X connect 39 0 38 0; #X connect 40 0 41 0; #X connect 43 0 42 0; #X connect 44 0 42 0; #X connect 45 0 42 0; #X connect 46 0 42 0; #X connect 47 0 42 0; #X connect 48 0 42 0; #X connect 49 0 50 0; #X connect 52 0 4 0; #X connect 53 0 20 0; #X connect 53 1 19 0; #X connect 54 0 22 0; #X connect 54 1 49 0; #X connect 54 1 13 0; #X connect 54 2 13 0; #X connect 54 3 13 0; #X connect 55 0 30 0; #X connect 61 0 42 0; #X connect 63 0 17 0; #X connect 64 0 63 0; #X connect 65 0 42 0; #X connect 66 0 70 0; #X connect 67 0 42 0; #X connect 68 0 69 0; #X connect 69 0 14 0; #X connect 70 0 0 0; #X connect 71 0 9 0; #X connect 72 0 0 1;
On 12 Nov 2007, at 5:52 AM, marius schebella wrote:
and, I know, I attach the files as files, but they sometimes appear
as text, don't know why. marius.
I looked into this a while ago - it depends on the 'format=flowed' flag and the 'Content-Disposition' and 'Content-Type' flags on the attachments - maybe your client has some settings to select these flags?
Your mail has everything as 'Content-Disposition: inline', and it is displayed that way on my client - all plain text (it fails for the binary file, which is not displayed - but that file certainly shouldn't be described as 'text/plain'):
--------------070400050902010409090204
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
hi, at some point I had sound and image, but then something broke again,
etc ..
mfa.dr IOhannes
--------------070400050902010409090204
Content-Type: text/plain; x-mac-type="0"; x-mac-creator="0"; name="01_basic.vert"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="01_basic.vert"
uniform sampler2D texture;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // perform standard transform on vertex
    gl_Position = ftransform();
}
--------------070400050902010409090204
Content-Type: text/plain; x-mac-type="0"; x-mac-creator="0"; name="01_volume.frag"
Content-Transfer-Encoding: base64
Content-Disposition: inline; filename="01_volume.frag"
dW5pZm9ybSBzYW1wbGVyMkQgdGV4dHVyZTsKdW5pZm9ybSBmbG9hdCB2b2x1bWU7CgoNCnZvaWQgbWFpbigpDQp7Cgl2ZWM0IEIgPSB0ZXh0dXJlMkQodGV4dHVyZSwgZ2xfVGV4Q29vcmRbMF0uc3QpOwoJZ2xfRnJhZ0NvbG9yID0gQiAqIHZvbHVtZTsNCn0=
--------------070400050902010409090204
Content-Type: text/plain; name="test2.pd"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="test2.pd"
#N canvas 527 25 782 708 10;
The following mail, on the other hand, gets displayed as attachments placed in the flow of the text - note 'Content-Disposition: attachment' on the relevant parts. This works well for most clients - the main differences are that some clients display the attachments (or their names or links to them) in the intended place in the body of the email, while others only display the list of attachments separately:
--Apple-Mail-30--380763525
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=US-ASCII; format=flowed
test test test
--Apple-Mail-30--380763525
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; x-unix-mode=0644; name=-notes.txt
Content-Disposition: attachment; filename=-notes.txt
textfile textfile textfile
--Apple-Mail-30--380763525
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed
test test test test
--Apple-Mail-30--380763525
Content-Transfer-Encoding: 7bit
Content-Type: application/octet-stream; x-unix-mode=0644; name=colours.pd
Content-Disposition: attachment; filename=colours.pd
#N canvas 385 152 626 609 10;
The chain Roman posted is essentially the process, although a few details prevent it from actually working properly. The audio needs to remain in floating-point format, but [pix_texture] and the readback from the screen are integer-based. The solution is to use an offscreen framebuffer object (like [gemframebuffer]) to create a floating-point rendering environment for the audio. This project would be best done as a custom object specific to audio rather than trying to use GEM.
I don't think this will be very efficient for a single stream of audio, since the data size is so small and the time to read back is so long. Perhaps it could enable something like computing an FIR on 16 or more channels at once, or doing convolution to emulate an entire mixing console, though.
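A rough sketch of the floating-point offscreen setup chris describes, using the EXT framebuffer object and ARB float-texture extensions - the function name and layout are made up, and extension checks and error handling are omitted:

/* create a 32-bit float RGBA texture and attach it to an FBO, so that
   fragment shader output keeps full float precision and can be read
   back as floats instead of clamped 8-bit values (sketch only) */
#include <GL/glew.h>

GLuint make_float_target(int w, int h, GLuint *tex_out)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* after rendering into this FBO, the result can be read back as
       floats, e.g. glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, buf); */

    *tex_out = tex;
    return fbo;
}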
hi all
I made a silly patch that lets you control the volume of an audio signal using [colorRGB], just to illustrate that audio can be processed on the GPU using Gem and Pure Data. I'd be interested if someone has a more meaningful application of this approach. People talked about doing convolution on the GPU - what would it take to actually do it?
http://romanhaefeli.net/software/pd/dsp_on_gpu.pd
roman
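A sketch of what such a convolution (FIR) shader could look like, written here as a C string together with the matching coefficient upload. The texture layout (one block of samples packed into a row, one channel per colour component) and all of the names are assumptions, not how dsp_on_gpu.pd works:

/* sketch: an 8-tap FIR as a fragment shader, applied to a signal block
   packed into one row of a texture (names, sizes and layout made up) */
#include <GL/glew.h>

static const char *fir_frag_src =
    "uniform sampler2D signal;                              \n"
    "uniform float coeff[8];     /* FIR coefficients */     \n"
    "uniform float texelWidth;   /* 1.0 / texture width */  \n"
    "void main() {                                          \n"
    "    vec4 acc = vec4(0.0);                              \n"
    "    for (int i = 0; i < 8; i++) {                      \n"
    "        vec2 tap = gl_TexCoord[0].st                   \n"
    "                 - vec2(float(i) * texelWidth, 0.0);   \n"
    "        acc += coeff[i] * texture2D(signal, tap);      \n"
    "    }                                                  \n"
    "    gl_FragColor = acc;                                \n"
    "}                                                      \n";

/* upload the coefficients to an already compiled and linked program */
void set_fir_coeffs(GLuint prog, const float coeff[8], float texel_width)
{
    glUseProgram(prog);
    glUniform1fv(glGetUniformLocation(prog, "coeff"), 8, coeff);
    glUniform1f(glGetUniformLocation(prog, "texelWidth"), texel_width);
    glUniform1i(glGetUniformLocation(prog, "signal"), 0); /* texture unit 0 */
}

Because each texel carries four colour channels, the same shader pass filters four channels in parallel, which is where the "16 or more channels at once" idea comes from.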
Hallo!
I think the easiest way to do it is simply in the C code of the audio externals (there already exist nice libraries for general-purpose computing on the GPU - as pointed to in the last link).
However, of course it would also be interesting to do that at the patching level, but I don't know if that is possible ...
LG Georg
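For what it's worth, the skeleton of the kind of signal external Georg means would look roughly like this. The object name [gpudsp~] is invented, and the GPU round trip is only indicated by comments - doing it synchronously inside the perform routine would block Pd's audio thread, which is the latency problem discussed above:

/* minimal Pd signal external skeleton; the GPU round trip would go
   into the perform routine (sketch only) */
#include "m_pd.h"

static t_class *gpudsp_tilde_class;

typedef struct _gpudsp_tilde {
    t_object x_obj;
    t_float  x_f;                 /* dummy float for the signal inlet */
} t_gpudsp_tilde;

static t_int *gpudsp_tilde_perform(t_int *w)
{
    t_sample *in  = (t_sample *)(w[1]);
    t_sample *out = (t_sample *)(w[2]);
    int n = (int)(w[3]);

    /* here: upload 'in' to a texture, run the shader into an offscreen
       float framebuffer, read the result back into 'out' (see the GL
       sketches earlier in the thread) */
    while (n--) *out++ = *in++;   /* placeholder: plain pass-through */
    return (w + 4);
}

static void gpudsp_tilde_dsp(t_gpudsp_tilde *x, t_signal **sp)
{
    dsp_add(gpudsp_tilde_perform, 3, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
}

static void *gpudsp_tilde_new(void)
{
    t_gpudsp_tilde *x = (t_gpudsp_tilde *)pd_new(gpudsp_tilde_class);
    outlet_new(&x->x_obj, &s_signal);
    return (void *)x;
}

void gpudsp_tilde_setup(void)
{
    gpudsp_tilde_class = class_new(gensym("gpudsp~"),
        (t_newmethod)gpudsp_tilde_new, 0,
        sizeof(t_gpudsp_tilde), CLASS_DEFAULT, 0);
    CLASS_MAINSIGNALIN(gpudsp_tilde_class, t_gpudsp_tilde, x_f);
    class_addmethod(gpudsp_tilde_class, (t_method)gpudsp_tilde_dsp,
        gensym("dsp"), A_CANT, 0);
}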