Hi list,
Pd-extended (Pd-0.43.1 "extended-20120430" compiled 00:31:34 Apr 30 2012) is crashing when I send data to the SoundScape Renderer on Ubuntu 12.04 using tcpclient.
I'm using the latest SSR from here: http://spatialaudio.net/ssr/download/
but the crashing occurs with the older version ssr-0.3.4 as well. Once, Pd produced the following error message on crashing, but generally it doesn't give any clues:
pdsend errorname: >>error writing "sock8": connection reset by peer<<
I've been able to simulate the crash with a simplified setup. If anyone can please have a look, I'd be very grateful. The archive is uploaded here:
http://reverberant.com/tmp/test.zip
To test it, you'd need to unzip the file, enter the directory with the extracted files, then, assuming the latest SSR is installed and jack is running at 44.1kHz, run:
ssr-binaural rei_voz4.asd &
in order to launch ssr with a scene.
Then, if your pd-extended is not installed as /usr/bin/pd-extended, you'll need to edit the path in startpd.sh to suit. Then run:
./startpd.sh
Once Pd is open, first click "localhost" (see yellow region #1) to make the connection with SSR, then wiggle the vertical slider (near yellow region #2) wildly until Pd crashes. At least, that's what happens to me. Sometimes it may take up to 30 sec or a minute to crash - or not at all. Usually it crashes in less than 15 sec. Pd's audio processing does not have to be on for the crash to occur.
Can anyone replicate the crash?
Suggestions welcome!
Cheers,
Iain
On 2013-06-29 20:19, Iain Mott wrote:
Hi list,
Pd-extended (Pd-0.43.1 "extended-20120430" compiled 00:31:34 Apr 30 2012) is crashing when I send data to the SoundScape Renderer on Ubuntu 12.04 using tcpclient.
hmm, since Pd and SSR are only communicating via a network socket, i only have 2 possible explanations:
- either the network code is broken
- or SSR sends data in a format that exposes/triggers a bug (e.g. a memory leak) in Pd
since there are several network implementations, which one are you using? mrpeach? iemnet??
do SSR and Pd agree on the actual protocol? e.g. OSC over TCP/IP (if it is that) used to be badly defined in the olde days, and you still see the deprecated non-SLIP implementation. afaik liblo only recently changed their TCP/IP code to use SLIP.
could you get a backtrace of the crash? [1]
do you notice anything weird? like when running htop besides Pd, do you see an excess of memory usage?
try running Pd with "-verbose -verbose -stderr"; sometimes there is a printout when Pd is crashing, which gets lost once the Pd-GUI closes (that is: quite immediately)
fgamsdr IOhannes
[1] http://wiki.debian.org/HowToGetABacktrace
Thanks very much IOhannes!
The mrpeach version was being loaded by default. When I use iemnet/tcpclient it doesn't crash. That's great.
There's a difference, however, in the way mrpeach/tcpclient and iemnet/tcpclient send received data to their outputs.
Messages from SSR received by mrpeach/tcpclient are sent to its output as a list (if that's the right word), for example as:
60 117 112 100 97 116 101 62 60 115 111 117 114 99 101 32 105 100 61 39 49 39 32 108 101 118 101 108 61 39 45 57 56 46 56 53 51 49 39 47 62 60 115 111 117 114 99 101 32 105 100 61 39 50 39 32 108 101 118 101 108 61 39 45 49 48 48 46 49 55 55 39 47 62 60 115 111 117 114 99 101 32 105 100 61 39 51 39 32 108 101 118 101 108 61 39 45 49 48 48 46 53 53 54 39 47 62 60 115 111 117 114 99 101 32 105 100 61 39 52 39 32 108 101 118 101 108 61 39 45 57 56 46 51 50 51 57 39 47 62 60 47 117 112 100 97 116 101 62 0
This is easily converted to a readable XML string with "string2any 0 0" to get:
<update><source id='1' level='-98.8531'/><source id='2' level='-100.177'/><source id='3' level='-100.556'/><source id='4' level='-98.3239'/></update>
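Those numbers are just the ASCII codes of the characters, so [string2any 0 0] is effectively doing something like this (Python here only as a sketch, not part of the patch):

data = [60, 117, 112, 100, 97, 116, 101, 62]   # the first eight numbers above
print(bytes(data).decode('ascii'))             # prints: <update>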
iemnet/tcpclient on the other hand sends individual numbers to its output as a stream rather than a list.
eg
60 117 112 100 97 ....... etc.
And there are various messages of various lengths. I guess if I use iemnet/tcpclient I'll need to find a different way of parsing these numbers.... Not my strong point with Pd!
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
Cheers and thanks again,
Iain
On Mon, 2013-07-01 at 09:23 +0200, IOhannes m zmoelnig wrote:
On 2013-06-29 20:19, Iain Mott wrote:
Hi list,
Pd-extended (Pd-0.43.1 "extended-20120430" compiled 00:31:34 Apr 30 2012) is crashing when I send data to the SoundScape Renderer on Ubuntu 12.04 using tcpclient.
hmm, since Pd and SSR are only communicating via a network socket, i only have 2 possible explanations:
- either the network code is broken
- or SSR sends data in a format that exposes/triggers a bug (e.g. a
memory leak) in Pd
since there are several network implementations, which one are you using? mrpeach? iemnet??
do SSR and Pd agree on the actual protocol? e.g. OSC over TCP/IP (if it is that) used to be badly defined in the olde days, and you still see the deprecated non-SLIP implementation. afaik liblo only recently changed their TCP/IP code to use SLIP.
could you get a backtrace of the crash? [1]
do you notice anything weird? like when running htop besides Pd, do you see an excess of memory usage?
try running Pd with "-verbose -verbose -stderr"; sometimes there is a printout when Pd is crashing, which gets lost once the Pd-GUI closes (that is: quite immediately)
fgamsdr IOhannes
[1] http://wiki.debian.org/HowToGetABacktrace
On 2013-07-01 16:40, Iain Mott wrote:
Thanks very much IOhannes!
The mrpeach version was being loaded by default. When I use iemnet/tcpclient it doesn't crash. That's great.
it is, though i'm sometimes under the impression that mrpeach is a bit more stable than iemnet (rather than the other way around)
There's a difference however in the way mrpeach/tcpclient and iemnet/tcpclient sends received data to its output.
Messages from SSR received by mrpeach/tcpclient are sent to its output as a list (if that's the right word), for example as:
[...]
This is easily converted to a readable XML string with "string2any 0 0" to get:
<update><source id='1' level='-98.8531'/><source id='2' level='-100.177'/><source id='3' level='-100.556'/><source id='4' level='-98.3239'/></update>
iemnet/tcpclient on the other hand sends individual numbers to its output as a stream rather than a list.
[...]
....... etc.
And there are various messages of various lengths. I guess if I use iemnet/tcpclient I'll need to find a different way of parsing these numbers.... Not my strong point with Pd!
well the point is that TCP/IP as an underlying protocol doesn't know anything about packets - it is stream-based, like a serial connection. if you are relying on the packets coming out of the [tcpclient] object as strings of the "correct" length, then your code is broken. you *must* have a way to determine the end of an atomic chunk of data without relying on the list length. [iemnet/tcpclient] makes this obvious from the beginning, as it will never lure you into a false sense of security with "correct length" lists (though there is a hidden flag that gives you the same behaviour as [mrpeach/tcpclient]). outputting the bytes one-by-one should actually make it easier to re-packetize the data.
e.g. your example string looks like '0' is a delimiting character (at least it appears at the end of the [mrpeach/tcpclient] example output). if this is indeed the case, you only have to [list append] the incoming bytes until you encounter "0", and then flush the buffer to the output - this way your packets will always be of the correct length.
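If a non-Pd sketch helps make the idea concrete, the same accumulate-until-delimiter logic looks roughly like this (Python, hypothetical names, and assuming 0 really is the terminator):

buf = bytearray()

def on_bytes(chunk):
    # hypothetical receiver, not a real Pd object:
    # feed in whatever bytes [tcpclient] happens to deliver,
    # return any complete messages found so far
    global buf
    done = []
    for b in chunk:
        if b == 0:                        # 0 marks the end of one XML message
            done.append(buf.decode('ascii'))
            buf = bytearray()
        else:
            buf.append(b)
    return done

on_bytes(b'<update><source ')             # -> []  (message not complete yet)
on_bytes(b"id='1'/></update>\x00")        # -> ["<update><source id='1'/></update>"]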
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
fgmads IOhannes
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value, you can add messages to the queue faster than they will be sent out, and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
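In case it helps to see the rate-limiting idea outside Pd, here is a crude stand-in (Python, made-up name) that simply discards values arriving too fast; check [speedlim]'s help patch for its exact behaviour:

import time

_last = 0.0

def speedlim(value, min_interval=0.1):    # 0.1 s = at most 10 messages per second
    global _last
    now = time.monotonic()
    if now - _last < min_interval:
        return None                       # arrived too soon, drop it
    _last = now
    return value                          # let it through to [tcpclient]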
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
In my experience, this may bog down Pd but it should never crash it. If it does, something else is the problem.
On Jul 1, 2013 1:24 PM, "Martin Peach" martin.peach@sympatico.ca wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back
on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
Hi Martin,
The actual patch I'm using translates MIDI pitch bend data recorded in Ardour3 (location data encoded as pitchbend for practical purposes) into XML and sends it through to the SSR. It's already limiting the rate to 10 messages every second for each moving source, and so far I'm only using 4 sources. This rate, done for testing, is already less than ideal. Each location message sent to SSR for a given source looks something like the following:
<request><source id="1"><position x="1.234" y="-0.234"/></source></request>
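For reference, building one of those strings is trivial; a throwaway sketch (made-up helper name, not the actual patch logic):

def position_request(source_id, x, y):
    # hypothetical helper: pitch bend already converted to x/y coordinates
    return ('<request><source id="%d">'
            '<position x="%.3f" y="%.3f"/>'
            '</source></request>' % (source_id, x, y))

print(position_request(1, 1.234, -0.234))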
Does this seem excessive?
Cheers,
Iain
On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
Forty times a second is relatively slow. Must be something else. I would use wireshark to see what packets are actually going over the wire, especially to see what the last one is. These speeds are probably too fast for [print]ing to the console; that can cause problems. Are you sending to the same machine? If not is WiFi involved? Can you use UDP instead of TCP (for lower overhead and no out-of-order packets)?
Martin
On 2013-07-01 13:58, Iain Mott wrote:
Hi Martin,
The actual patch I'm using is translating MIDI pitch bend data recorded in Ardour3 (location data encoded as pitchbend for practical purposes), translating it into XML and sending it through to the SSR. It's already limiting the rate to 10 messages every second for each moving source and so far I'm only using 4 sources. This rate, done for testing, is already less than ideal. Each location message sent SSR for a given source looks something like the following:
<request><source id=1"><position x="1.234" y="-0.234"/></source></request>
Does this seem excessive?
Cheers,
Iain
On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
Using iemnet/tcpclient and implementing IOhannes' parsing suggestion, my patch is now communicating with SSR without crashing. There is a "bogging down" problem though: testing with just 3 sources, I need to keep the limit at 10 messages/sec for each. It stops working at higher rates but doesn't crash. SSR is running on this local machine and there is no WiFi involved. Unfortunately I don't think UDP is an option with SSR.
SSR also constantly sends XML "level" data to tcpclient for each source in the scene. Perhaps this extra traffic isn't helping, e.g.:
<update><source id='1' level='-98.5405'/><source id='2' level='-99.8139'/><source id='3' level='-99.6628'/><source id='4' level='-101.127'/></update>
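For what it's worth, pulling the numbers back out of one of these is straightforward once the message arrives as a single string; a quick sketch (Python, not the patch itself):

import re

update = ("<update><source id='1' level='-98.5405'/>"
          "<source id='2' level='-99.8139'/></update>")
levels = {int(i): float(lv)
          for i, lv in re.findall(r"id='(\d+)' level='(-?[\d.]+)'", update)}
print(levels)    # {1: -98.5405, 2: -99.8139}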
I'll wait to hear back from SSR to see if they have any suggestions.
Cheers and thanks for your help everyone,
Iain
On Mon, 2013-07-01 at 14:29 -0400, Martin Peach wrote:
Forty times a second is relatively slow. Must be something else. I would use wireshark to see what packets are actually going over the wire, especially to see what the last one is. These speeds are probably too fast for [print]ing to the console; that can cause problems. Are you sending to the same machine? If not is WiFi involved? Can you use UDP instead of TCP (for lower overhead and no out-of-order packets)?
Martin
On 2013-07-01 13:58, Iain Mott wrote:
Hi Martin,
The actual patch I'm using is translating MIDI pitch bend data recorded in Ardour3 (location data encoded as pitchbend for practical purposes), translating it into XML and sending it through to the SSR. It's already limiting the rate to 10 messages every second for each moving source and so far I'm only using 4 sources. This rate, done for testing, is already less than ideal. Each location message sent SSR for a given source looks something like the following:
<request><source id=1"><position x="1.234" y="-0.234"/></source></request>
Does this seem excessive?
Cheers,
Iain
On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
Hi Iain.
To be honest, I didn't think about the problem that a message could need more than one packet. It's good to know that iemnet/tcpclient can handle that.
@IOhannes: thanks for the suggestion. And (binary) 0 is indeed the terminating character.
On Mon, Jul 1, 2013 at 10:14 PM, Iain Mott mott@reverberant.com wrote:
Using iemnet/tcpclient and implementing IOhannes parsing suggestion, my patch is now communicating with SSR without crashing. There is a "bogging down" problem though and testing with just 3 sources, I need to keep the limit at 10 messages/sec for each. It stops working at higher rates but doesn't crash. SSR is running on this local machine and there is no WiFi involved. Unfortunately I don't think UDP is an option with SSR.
No sorry, not for now. But feel free to hack into the SSR code!
SSR also sends XML "level" data to tcpclient constantly for each source in the scene. Perhaps this extra traffic isn't helping. eg.
<update><source id='1' level='-98.5405'/><source id='2' level='-99.8139'/><source id='3' level='-99.6628'/><source id='4' level='-101.127'/></update>
I'll wait to hear back from SSR to see if they have any suggestions.
I guess you are talking to me ...
There is one quick and hackish way to avoid the level messages: Go to src/boostnetwork/connection.cpp (around line 102) and remove the line
_subscriber.send_levels();
... and recompile. This should get rid of the annoying "level" messages. The SSR still sends all other messages, but if desired you can disable them in a similar manner.
I'm aware that this isn't a satisfactory long-term solution, but for now it may help.
We have big plans to modularize the network interface of the SSR in a way that different network protocols can be used interchangeably, e.g. WebSockets, FUDI, OSC, ... MIDI could probably also be included there.
In addition, we want to implement a publish-subscribe mechanism (for all protocols which have a back-channel) which allows clients to select the exact amount (and probably rate) of information to receive from the SSR.
However, currently we just don't have the resources to make these changes.
@list: if anyone wants to help feel free to contact us: ssr@spatialaudio.net!
BTW, some advertisement: Did everyone check out the brand new "preview" version of the SSR: http://spatialaudio.net/ssr/download/? It also features multi-threading and the brand new (and still quite experimental) NFC-HOA renderer!
cheers, Matthias
Cheers and thanks for your help everyone,
Iain
On Mon, 2013-07-01 at 14:29 -0400, Martin Peach wrote:
Forty times a second is relatively slow. Must be something else. I would use wireshark to see what packets are actually going over the wire, especially to see what the last one is. These speeds are probably too fast for [print]ing to the console; that can cause problems. Are you sending to the same machine? If not is WiFi involved? Can you use UDP instead of TCP (for lower overhead and no out-of-order packets)?
Martin
On 2013-07-01 13:58, Iain Mott wrote:
Hi Martin,
The actual patch I'm using is translating MIDI pitch bend data recorded in Ardour3 (location data encoded as pitchbend for practical purposes), translating it into XML and sending it through to the SSR. It's already limiting the rate to 10 messages every second for each moving source and so far I'm only using 4 sources. This rate, done for testing, is already less than ideal. Each location message sent SSR for a given source looks something like the following:
<request><source id=1"><position x="1.234" y="-0.234"/></source></request>
Does this seem excessive?
Cheers,
Iain
On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
Maybe put a [speedlim] after your slider, or pack several values into one message?
Martin
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
#1  clock_unset (x=0x8c5c80) at m_sched.c:62
#2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
#3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
#4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,
On 2013-07-02 17:07, Matthias Geier wrote:
keep the limit at 10 messages/sec for each. It stops working at higher rates but doesn't crash. SSR is running on this local machine and there is no WiFi involved. Unfortunately I don't think UDP is an option with SSR.
No sorry, not for now. But feel free to hack into the SSR code!
btw, i don't think that XML is a very good format for controlling an application in real time. how about adding OSC support to SSR?
fgmasdr IOhannes
Hi IOhannes.
On Tue, Jul 2, 2013 at 5:14 PM, IOhannes m zmoelnig zmoelnig@iem.at wrote:
[...]
btw, i don't think that XML is a very good format for controlling an application real-time.
I know. At the time it seemed nice for an experimental protocol because we would be able to quickly add certain information to existing messages. Also, we thought we could re-use some code from the parsing of scene files. Another idea was that it would be easy to debug with WireShark.
But parsing these XML-network-messages can be a pain-in-the-ass, as Iain experiences right now ...
how about adding OSC support to SSR?
How about reading my e-mail to the end?
just kidding ... I wrote something along these lines:
We have big plans to modularize the network interface of the SSR in a way that different network protocols can be used interchangeably, e.g. WebSockets, FUDI, OSC, ... MIDI could probably also be included there.
In addition, we want to implement a publish-subscribe mechanism (for all protocols which have a back-channel) which allows clients to select the exact amount (and probably rate) of information to receive from the SSR.
However, currently we just don't have the resources to make these changes.
So yes, we would like to add many things to the SSR, but we don't have time for most of it ...
cheers, Matthias
On Tue, 2013-07-02 at 17:07 +0200, Matthias Geier wrote:
Hi Iain.
To be honest, I didn't think about the problem that a message could need more than one packet. It's good to know that iemnet/tcpclient can handle that.
It's not that [iemnet/tcpclient] can handle it and [net/tcpclient] can't. In fact, with both you have to cook your own mechanism to delimit packets for a packet-oriented protocol. With [net/tcpclient], however, you have to serialize the data first in order to be able to do that. I see two problems with [net/tcpclient]'s implementation:
- you have to serialize the data anyway, so why doesn't the object already do it?
- It gives you the false impression of dealing with packets when you are in fact dealing with a stream. It's dangerous because it often looks as if it were working, but there is no guarantee it always will. You may receive a packet split into many chunks, or you may get one big chunk containing several packets. All those cases are valid from the POV of TCP, but they will break your protocol unless you deploy proper delimiting.
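To make that concrete (a toy illustration, nothing from the actual objects): the very same two messages can reach the receiver chopped up in different ways, and only the delimiter tells you where one ends:

stream = b"<request .../>\x00<request .../>\x00"

# case 1: everything arrives in one big chunk
# case 2: it arrives as b"<requ" plus the rest
for chunks in ([stream], [stream[:5], stream[5:]]):
    buf = b"".join(chunks)
    print([m.decode() for m in buf.split(b"\x00") if m])
# both cases print: ['<request .../>', '<request .../>']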
Roman
On 2013-07-02 16:13, Roman Haefeli wrote:
On Tue, 2013-07-02 at 17:07 +0200, Matthias Geier wrote:
Hi Iain.
To be honest, I didn't think about the problem that a message could need more than one packet. It's good to know that iemnet/tcpclient can handle that.
It's not that [iemnet/tcpclient] can handle it and [net/tcpclient] can't. In fact, with both you have to cook your own mechanism to delimit packets for a packet-oriented protocol. With [net/tcpclient], however, you have to serialize the data first in order to be able to do that. I see two problems with [net/tcpclient]'s implementation:
- you have to serialize the data anyway, so why doesn't the object already do it?
I think I implemented it that way because it seems to be more efficient within Pd to deal with a single list rather than a bunch of floats. But I don't know for sure if that is true.
- It gives you the false impression of dealing with packets when you are in fact dealing with a stream. It's dangerous because it often looks as if it were working, but there is no guarantee it always will. You may receive a packet split into many chunks, or you may get one big chunk containing several packets. All those cases are valid from the POV of TCP, but they will break your protocol unless you deploy proper delimiting.
Well the TCP protocol _is_ splitting a stream into packets. It's not the same as a serial link where you can send bytes one at a time whenever you like. If you try that you will find that the bytes are gathered into packets for you. It might be useful to consider this when thinking about the best way to send the data (e.g. one byte per packet is not efficient, and you might get the false impression that TCP doesn't work very well).
And as I always say, UDP is probably a better choice for what you are trying to do if it involves real-time control; with UDP you _do_ have control over the packet size.
Martin
On Tue, 2013-07-02 at 18:15 -0400, Martin Peach wrote:
On 2013-07-02 16:13, Roman Haefeli wrote:
On Tue, 2013-07-02 at 17:07 +0200, Matthias Geier wrote:
Hi Iain.
To be honest, I didn't think about the problem that a message could need more than one packet. It's good to know that iemnet/tcpclient can handle that.
It's not that [iemnet/tcpclient] can handle it and [net/tcpclient] can't. In fact, with both you have to cook your own mechanism to delimit packets for a packet-oriented protocol. With [net/tcpclient], however, you have to serialize the data first in order to be able to do that. I see two problems with [net/tcpclient]'s implementation:
- you have to serialize the data anyway, so why doesn't the object already do it?
I think I implemented it that way because it seems to be more efficient within Pd to deal with a single list rather than a bunch of floats. But I don't know for sure if that is true.
Yeah, this is what I assume as well, though I don't have any data to back up that assumption. On the other hand, serializing might be cheaper on the C side than in Pd. I just cannot think of a use case where you actually want to directly use those chunks as received. Either you need to delimit packets, then you need to serialize the data, or you actually need a stream, but then you also need to serialize the data.
- It gives you the false impression of dealing with packets when you are in fact dealing with a stream. It's dangerous because it often looks as if it were working, but there is no guarantee it always will. You may receive a packet split into many chunks, or you may get one big chunk containing several packets. All those cases are valid from the POV of TCP, but they will break your protocol unless you deploy proper delimiting.
Well the TCP protocol _is_ splitting a stream into packets. It's not the same as a serial link where you can send bytes one at a time whenever you like. If you try that you will find that the bytes are gathered into packets for you. It might be useful to consider this when thinking about the best way to send the data (e.g. one byte per packet is not efficient, and you might get the false impression that TCP doesn't work very well).
Since the TCP stack is free to packetize the data in the most efficient way, the application layer does not have to (and indeed must not) rely on the chunk size of the data. This should be handled transparently by the transport layer. When the application transmits thousands of 1-byte chunks, the TCP layer will most likely send them as bigger chunks over the network. From the application's point of view, TCP behaves _exactly_ like a serial link (just without a clock).
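A simple way to see this with plain sockets on localhost (nothing SSR- or Pd-specific, just an illustration): send a dozen one-byte writes and watch them all come back from a single recv() on the other side.

import socket, threading, time

srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)

def client(port):
    s = socket.create_connection(('127.0.0.1', port))
    for b in b'hello, world':
        s.sendall(bytes([b]))             # twelve separate one-byte sends
    s.close()

threading.Thread(target=client, args=(srv.getsockname()[1],)).start()
conn, _ = srv.accept()
time.sleep(0.2)                           # let everything arrive
print(conn.recv(4096))                    # typically all of it in one chunk: b'hello, world'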
And as I always say, UDP is probably a better choice for what you are trying to do, if it involves real-time control, with UDP you _do_ have control over the packet size.
Agreed.
Roman
There is one quick and hackish way to avoid the level messages: Go to src/boostnetwork/connection.cpp (around line 102) and remove the line
_subscriber.send_levels();
... and recompile. This should get rid of the annoying "level" messages.
Many thanks Matthias - it did help ease the congestion a bit.
We have big plans to modularize the network interface of the SSR in a way that different network protocols can be used interchangeably, e.g. WebSockets, FUDI, OSC, ...
[...]
In addition, we want to implement a publish-subscribe mechanism (for all protocols which have a back-channel) which allows clients to select the exact amount (and probably rate) of information to receive from the SSR.
[...]
@list: if anyone wants to help feel free to contact us: ssr@spatialaudio.net!
I hope you get some takers. The spatial audio processing in SSR is really brilliant. It's certainly a project well worth supporting!!!
Cheers,
Iain
On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
It could be that you are overloading Pd with too many messages. If you are wildly moving the slider and [tcpclient] is sending one TCP packet per value you can add messages to the queue faster than they will be sent out and Pd will eventually run out of resources.
There seem to exist different approaches to address this problem. If I'm not mistaken, [netsend] uses a fixed buffer and when filled it blocks Pd until the buffer gets emptied. [iemnet/tcp[client|send]] allocates just as much RAM as it needs and does not block Pd. If [net/tcpclient] really is designed to crash Pd when the buffer gets full, then I would think this is the least desirable behavior of all three.
@Iain: Whether it really crashes due to network saturation can easily be verified by testing the same patch with [iemnet/tcpclient]. If you throw more messages at it than it can actually transmit, you would notice an increasing lag on the other end.
Roman
On 2013-07-01 11:53, Iain Mott wrote:
I'll try the backtrace and other things you suggest and report back on mrpeach/tcpclient in another email.
it could well be, that it only does not crash with [iemnet/tcpclient] because you haven't parsed the output yet...
Don't think so - to crash Pd, I wasn't doing any parsing of incoming messages - just sending messages out.
Did a backtrace using mrpeach/tcpclient - on a "freeze" as it didn't actually crash. Got this response:
#0 0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70 #1 clock_unset (x=0x8c5c80) at m_sched.c:62 #2 0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81 #3 0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548)
at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380 #4 0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0 #5 0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6 #6 0x0000000000000000 in ?? ()
Will do some more tests later.
Thanks,