Hello,
When I try to send data from a sub-process using [udpsend] from iemlib (or mrpeach), it does not work: the sub-process seems to block the send. The sub-process is created with [pd~]. Is this normal behavior? Any clue to resolving this problem?
In fact, I'm trying to send data from MSD in a sub-process back to the main patch, to separate GEM from MSD. When I only use [pd~] and [stdout] it works, but it is very slow; that's why I'm now trying [udpsend] without [stdout], hoping to gain speed. Thanx. ++
Jack
Sorry for the noise, it is working with [udpsend]! But it is still very slow; the best is to use GEM and MSD in the same patch. Is it normal that data transfers are so slow with a sub-process (using [pd~])? Is there another method to accelerate this transfer between GEM and MSD using [pd~] and [stdout]? Thanx. ++
Jack
Is the child or the parent doing the audio?
Hello Max,
There is no audio object in either patch, but audio is active in both. In the patch with the GEM process I have [pd~ -ninsig 1 -noutsig 1], and in the patch with MSD I have [stdout]. Thanx. ++
Jack
On Fri, 20 Aug 2010, Jack wrote:
Is there another method to accelerate this transfer between GEM and MSD using [pd~] and [stdout]?
Can you try communicating with OSC instead, and see whether it's faster? It does less encoding and decoding for floats than what [netsend]/[netreceive]/[stdout]/[pd~] need. How many floats do you need to send from one process to the other, per second?
Mathieu Bouchard, Montréal, Québec
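To see the encoding difference Mathieu is pointing at: OSC carries each float as a fixed 4-byte binary word, while Pd's FUDI protocol (used by [netsend]/[stdout]) prints it as ASCII text that the receiver has to parse back. A minimal Python sketch of the byte counts (Python only for illustration; in the patches this would be [packOSC] versus plain [netsend]):

    import struct

    x = 123.456
    osc = struct.pack('>f', x)           # OSC payload: fixed 4 bytes, big-endian float
    fudi = ('%g' % x).encode() + b' '    # FUDI payload: ASCII digits plus a separator

    print(len(osc))    # 4, regardless of the value
    print(len(fudi))   # 8 here, and the receiver must re-parse the text into a float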
Hello Mathieu,
I have already tried that with [packOSC]/[unpackOSC] and [udpsend]/[udpreceive]; it is slow too. I need to send 20000 lists of 3 floats (id, pos x, pos y) each frame (50 fps) from one process to the other. Thanx. ++
Jack
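For scale, a back-of-the-envelope check of that data rate (assuming 4-byte floats):

    # 20000 triplets per frame at 50 frames per second
    triplets_per_sec = 20000 * 50             # 1,000,000 triplets/s
    floats_per_sec = triplets_per_sec * 3     # 3,000,000 floats/s
    bytes_per_sec = floats_per_sec * 4        # ~12 MB/s of raw payload alone
    print(triplets_per_sec, floats_per_sec, bytes_per_sec)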
Maybe as an audio signal through JACK? That is the fastest way I'm aware of. That would be 44100 values per second per channel in the -1.0/+1.0 range, then you would have to rescale again.
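The idea would be to treat the three values of each triplet as samples on three parallel signal channels, one triplet per sample frame, and re-assemble them in the same order on the other side. A minimal Python sketch of that interleaving (hypothetical data, just to show the packing; in the patches the channels would be signal connections routed through JACK):

    # pack (id, x, y) triplets onto 3 parallel "audio" channels, one triplet per frame
    triplets = [(0.0, 0.5, -0.25), (1.0, 0.1, 0.9)]   # hypothetical MSD output
    ch_id = [t[0] for t in triplets]
    ch_x = [t[1] for t in triplets]
    ch_y = [t[2] for t in triplets]

    # receiver side: read the channels back frame by frame
    decoded = list(zip(ch_id, ch_x, ch_y))
    assert decoded == triplets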
On 08/20/2010 04:58 PM, Bernardo Barros wrote:
Maybe as an audio signal through JACK? That is the fastest way I'm aware of. That would be 44100 values per second per channel in the -1.0/+1.0 range, then you would have to rescale again.
Is that true? JACK internally uses floating-point samples, so I don't see a reason why JACK should not be able to transmit samples outside the [-1..+1] range.
Apart from that, you won't be able to use JACK from within [pd~], as the [dac~]s in [pd~] are mapped to the [outlet~]s of the object (that is, the inner Pd cannot connect to the audio API directly).
Of course you could just use 2 separate Pds anyhow, in which case JACK would work.
An even faster way to transport would be Gem's shared-memory objects (see [pix_share_read]). However, it's probably not so fast to convert the data to/from the pix format.
fgmasdr IOhannes
Hi IOhannes!
Hum, I used Csound/SuperCollider terminology here. Audio rate is sample precision, control rate is one value per block, right? He needs 20000 floating-point values per second; I can only think of audio signals for this.
Well, if he starts Pd with "pd -jack" and makes three more channels, he can connect these extra channels from puredata:0 as input to puredata:1 in JACK, right? (QJackCtl or Patchage would make it very easy.)
I did not know JACK could deal with values outside the -1/+1 range.
On Fri, 2010-08-20 at 18:43 +0200, IOhannes m zmölnig wrote:
Is that true? JACK internally uses floating-point samples, so I don't see a reason why JACK should not be able to transmit samples outside the [-1..+1] range.
Indeed, I can confirm it does not truncate audio signals to -1...+1. Neither does Pd. Not that I have a good application for this in mind, but I find this valuable to know.
Roman
Just as within Pd, it means that you don't have to worry about volume levels and clipping until the final step before the [dac~].
(Unless you hit "infinity" or NaN...)
Mathieu Bouchard, Montréal, Québec
On Fri, 2010-08-20 at 23:01 -0400, Mathieu Bouchard wrote:
Just as within Pd, it means that you don't have to worry about volume levels and clipping until the final step before the [dac~].
No, even [dac~] allows values higher than 1 when connected to JACK. So you actually only need to worry about the levels just before you route a [dac~] to a physical port with JACK.
Roman
That's what I was trying to say, but in the analogy with Pd, I said [dac~] instead of DAC or instead of soundcard... sorry.
Mathieu Bouchard, Montréal, Québec
There are [udpsend~] and [udpreceive~] for sending multichannel signals.
Martin
Bernardo wrote:
Maybe as an audio signal through JACK? That is the fastest way I'm aware of. That would be 44100 values per second per channel in the -1.0/+1.0 range, then you would have to rescale again.
On Fri, 2010-08-20 at 16:36 +0200, Jack wrote:
I need to send 20000 lists of 3 floats (id, pos x, pos y) each frame (50 fps) from one process to the other.
It seems to me that you are creating a lot of overhead for transmitting only 3 floats. First there is the OSC overhead per message; then each OSC message is sent over UDP, adding some datagram overhead. By sending 20'000 or even 60'000 floats per message, you could drastically reduce the OSC and UDP protocol overhead. I don't know if this is the real reason for it being so slow, but I'd try to reformat your messages. Also, this would probably mean using less computing power for creating all those messages and packets.
Roman
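To put rough numbers on that overhead, counting only the per-datagram cost (IPv4 + UDP headers are 28 bytes per packet; the OSC address and type-tag envelope comes on top and is ignored here), with an example batch size of 100 triplets per datagram:

    # one datagram per triplet vs. batching 100 triplets per datagram
    header = 28      # IPv4 + UDP header bytes per datagram
    payload = 12     # 3 floats * 4 bytes each

    per_triplet = 20000 * (header + payload)      # 800,000 bytes/frame, 70% of it headers
    batched_100 = 200 * header + 20000 * payload  # 245,600 bytes/frame, ~2% headers
    print(per_triplet, batched_100)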
Hello Roman,
In fact, I have tried one more time with only [pd~] and [stdout], and it seems to be faster than [pd~] with [udpsend]/[udpreceive]. But Pd freezes if I send a lot of data to [pd~] :/ One remark: I am not sending 20000 or 60000 floats per message, but 20000 messages of 3 floats every frame (nearly every 20 ms). The problem seems to be the transfer of the packets, not the creation of the messages (they are created by [msd2D] with only one message). Thanx. ++
Jack
On Fri, 2010-08-20 at 23:54 +0200, Jack wrote:
One remark: I am not sending 20000 or 60000 floats per message, but 20000 messages of 3 floats every frame (nearly every 20 ms).
Yeah, I know, but I actually meant to propose that you do exactly that. Sending 20'000 separate three-float messages is what causes the high overhead.
The problem seems to be the transfer of the packets, not the creation of the messages (they are created by [msd2D] with only one message).
The transfer might be a problem _because_ of the huge overhead.
Roman
OK, I understand now, I will give it a try. Thanx again. ++
Jack
On Fri, 20 Aug 2010, Roman Haefeli wrote:
By sending 20'000 or even 60'000 floats per message, you could drastically reduce the OSC and UDP protocol overhead.
Yes, and if you have to respect some limit on the packet size, then the ideal way to split is into equal parts. So if the number is exactly 20000 triplets, try 100 triplets per packet (12 bytes per triplet gives 1200 bytes): it will reduce the overhead of single-triplet packets by 99%. That means 200 packets per frame.
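A sketch of that splitting, assuming the frame arrives as one flat list of triplets and using 4-byte big-endian floats as in OSC:

    import struct

    def chunk_triplets(triplets, per_packet=100):
        """Pack float triplets into fixed-size binary packets."""
        packets = []
        for i in range(0, len(triplets), per_packet):
            chunk = triplets[i:i + per_packet]
            data = b''.join(struct.pack('>fff', *t) for t in chunk)
            packets.append(data)
        return packets

    frame = [(float(i), 0.0, 0.0) for i in range(20000)]  # hypothetical frame
    packets = chunk_triplets(frame)
    print(len(packets), len(packets[0]))   # 200 packets, 1200 bytes each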
But it would be more efficient if [pd~] had a transparent message-passing interface based on mmap() or equivalent.
Mathieu Bouchard, Montréal, Québec
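For what such an interface could look like, a purely illustrative Python sketch of one frame going through an mmap()ed file ([pd~] has no such interface; this skips the locking and hand-shaking a real implementation would need, and the file path is made up):

    import mmap
    import struct

    NFLOATS = 20000 * 3   # one frame: 20000 triplets of floats
    NBYTES = NFLOATS * 4

    # writer process: map a shared file and write one frame of floats
    with open('/tmp/msd_frame', 'w+b') as f:
        f.truncate(NBYTES)
        m = mmap.mmap(f.fileno(), NBYTES)
        frame = [0.0] * NFLOATS                       # hypothetical MSD output
        m[:] = struct.pack('%df' % NFLOATS, *frame)   # no per-message overhead at all
        m.close()

    # reader process: map the same file and unpack the frame
    with open('/tmp/msd_frame', 'rb') as f:
        m = mmap.mmap(f.fileno(), NBYTES, access=mmap.ACCESS_READ)
        frame = struct.unpack('%df' % NFLOATS, m[:])
        m.close()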