Hi all,
I've been trying to port my UDP-based network communication to TCP, and in a large ensemble netclient/netserver seemed like the best option. However, now that I've tested it in a couple of sessions, I am finding that every time I broadcast more than, say, a dozen lines of text (coll data), which should still be less than a couple of kB, I get nasty xruns (running through jack/linux). I am wondering what is causing this? Isn't netserver running in a separate thread?
Any ideas?
Any thoughts are most appreciated.
Best wishes,
Ico
Hi Ivica
From what I know, there is an internal buffer of ~4kB for the sending
sockets in both netclient and netserver (I can't recall whether this buffer is built into the externals or is part of the network subsystem of the OS). If that limit is hit, the Pd process is blocked until that buffer is emptied again (which obviously leads to audio drop-outs).
IMHO, this is a design flaw that all sending net-externals suffer from. There is no way to check the state of this buffer, which would be required in order to avoid a buffer overrun. It would even be sufficient just to get notified when the buffer is completely emptied, so that you could design your patch to send the next message only when the previous one is through.
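For illustration, such a check is cheap to do at the C level; a minimal sketch against the POSIX sockets API (not the actual code of any of the externals):

    /* Sketch, not the externals' actual code: check whether the socket's
       send buffer can take more data before attempting to write. */
    #include <string.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Returns 1 if the message was handed to the kernel, 0 if the send
       buffer is (still) full and the caller should retry later. */
    int send_if_ready(int fd, const char *msg)
    {
        fd_set wfds;
        struct timeval tv = {0, 0};   /* zero timeout: poll, never wait */

        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
            return 0;                 /* not writable: try again later */
        return send(fd, msg, strlen(msg), MSG_DONTWAIT) >= 0;
    }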
Currently, your best bet is not to send too big a chunk in zero logical time, but to distribute the load over time (a.k.a. [drip]ping the messages out every few ms), so that you don't hit the limit. However, this only works if you know beforehand what your network connection can handle. If there are irregularities in the network, you still might trigger a buffer overrun.
In my experience, I never had x-runs in Jack because of that, though, only audio drop-outs in Pd. This might be due to the fact that I'm running both Pd and jackd with -rt while giving the jackd process an even higher priority (drop-outs almost never trigger x-runs in jackd on that system).
Roman
reduzierer wrote:
> From what I know, there is an internal buffer of ~4kB for the sending sockets in both netclient and netserver. [...] There is no way to check the state of this buffer, which would be required in order to avoid a buffer overrun.
IMHO it's not a design flaw, but part of TCP's design philosophy, which keeps sending packets until they are received. In the original concept, bombs would be dropping all over the countryside, destroying cables and data centres willy-nilly, while messages could still get through to the missile silos and AA gunners.
UDP was designed to send and forget.
If you are "broadcasting" in TCP you are actually sending separate messages to each recipient, with the OS providing overhead for each one until it has been acknowledged by the recipient. Obviously it's easy to do a DOS attack this way even on your own machine simply by sending faster than the receiver can process the packets.
Broadcasting in UDP sends a single packet to a single address that the router sends to every machine on the subnet. The OS discards the buffer as soon as it is sent, so you can't overload the stack, although you can always peg the CPU trying.
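To make the contrast concrete, here is what fire-and-forget UDP broadcasting looks like at the C level (a minimal sketch; the subnet address 192.168.1.255 and port 9999 are made-up examples):

    /* Sketch: fire-and-forget UDP broadcast to the local subnet.
       The address 192.168.1.255 and port 9999 are made-up examples. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        struct sockaddr_in dst;
        const char *msg = "hello everyone;";

        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);
        dst.sin_addr.s_addr = inet_addr("192.168.1.255");

        /* one datagram, one copy into the kernel; no handshake, no
           retransmission: once it is sent, the OS forgets about it */
        sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
        close(fd);
        return 0;
    }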
Martin
On Wed, 2010-03-24 at 15:58 +0000, martin.peach@sympatico.ca wrote:
> IMHO it's not a design flaw, but part of TCP's design philosophy, which keeps sending packets until they are received. [...]
The fact that those externals/objects block the Pd process has nothing to do with TCP's design philosophy. I am no expert in this field, but I know of other implementations in other programming languages that handle such situations more gracefully, for instance the Python twistd server, which the new netpd-server is built upon. Imagine an Apache server being completely blocked because one of its clients refuses to receive the webpage quickly enough. But this is the current situation with the Pd net externals. Don't get me wrong: there is no point in rebuilding Apache in Pd, nor am I demanding that the situation be changed. Those net externals are generally very useful and cover a wide range of applications, but they also fail in other situations, and that has _nothing_ to do with TCP's design; they fail because the implementation of those objects is not designed to handle those situations.
> UDP was designed to send and forget.
Yeah, which probably makes the implementation of high-level objects/functions/externals much easier (I imagine, but I don't really know).
Roman
reduzierer wrote:
> The fact that those externals/objects block the Pd process has nothing to do with TCP's design philosophy. [...] I know of other implementations in other programming languages that handle such situations more gracefully, for instance the Python twistd server, which the new netpd-server is built upon.
Try the latest version. I think what was blocking Pd was that it was trying to print thousands of error messages, which is not one of Pd's strong points. The Pd process blocked because [tcpserver] kept trying to send packets and then printed an error message whenever it failed. The new version stops trying to send when that happens, until it gets unblocked. I tried it and it works: it stops sending when it can't create sender threads, and then starts again gracefully when it is manually unblocked. All the intervening data is lost of course, but a "blocked" message is emitted through the status outlet so the Pd patch can use a delay or something to restart the server.
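In rough C, the pattern would be something like this (a sketch of the behaviour just described, not [tcpserver]'s actual source; here a full socket buffer stands in for the real trigger, which is failing to create a sender thread):

    /* Sketch of the behaviour described above, not [tcpserver]'s actual
       source. A full socket buffer stands in for the real trigger
       (failure to create a sender thread). */
    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>

    static int blocked = 0;

    int guarded_send(int fd, const char *msg)
    {
        if (blocked)
            return 0;            /* discard: intervening data is lost */
        if (send(fd, msg, strlen(msg), MSG_DONTWAIT) < 0
            && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            blocked = 1;         /* here the "blocked" message would go
                                    out of the status outlet */
            return 0;
        }
        return 1;
    }

    void unblock(void)           /* corresponds to the [unblock( message */
    {
        blocked = 0;
    }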
> Imagine an Apache server being completely blocked because one of its clients refuses to receive the webpage quickly enough.
It's not the same thing: Apache doesn't send huge numbers of messages to clients that don't request them. If a client is dead, a single thread hangs until it times out. It won't try sending anything unless it is asked to.
The application you are using seems more like a video streaming server; those usually use UDP or something similar, with zero handshaking, so nothing hangs. And the receiver sees video that's usually choppy and glitchy because of that.
> But this is the current situation with the Pd net externals.
Which ones don't work? Only the TCP ones?
> Don't get me wrong: there is no point in rebuilding Apache in Pd, nor am I demanding that the situation be changed. Those net externals are generally very useful and cover a wide range of applications, but they also fail in other situations, and that has _nothing_ to do with TCP's design;
Situations like what?
> they fail because the implementation of those objects is not designed to handle those situations.
Without knowing what those situations are, I can't say.
Martin
On Wed, 2010-03-24 at 17:23 +0000, martin.peach@sympatico.ca wrote:
> Try the latest version. [...] it stops sending when it can't create sender threads, and then starts again gracefully when it is manually unblocked. [...]
Ok. I'll check that out, also IOhannes' modified versions.
> It's not the same thing: Apache doesn't send huge numbers of messages to clients that don't request them. If a client is dead, a single thread hangs until it times out.
Hm.. when a client requests a file, the file is delivered without blocking other processes, even if the requesting client suddenly disappears (then only that sending socket's thread is halted). That is what I meant by 'gracefully'. This is not the case with Pd's net objects. At this level it doesn't matter whether the data was requested or not.
> The application you are using seems more like a video streaming server; those usually use UDP or something similar, with zero handshaking, so nothing hangs. [...]
Yeah, in the case of netpd it is true that the application is about sending data streams to many clients. UDP wouldn't work, since the system relies on correct ordering and completeness. Requesting every little piece of data first doesn't make sense to me, since that would mean virtually re-implementing TCP.
> Which ones don't work? Only the TCP ones?
Sorry for not being precise. I actually only tested the TCP ones. And yes, all the ones that create sending sockets suffer from that problem.
> Situations like what?
Generally this applies to all situations where you're doing DSP (or any other deterministic realtime stuff) while performing operations that cannot happen instantly or don't fit into that deterministic scheme, like 'save this multi-megabyte file now' or 'send those lines of data over a socket now' or 'calculate those seven million numbers now'. If the 'now' cannot happen just now, but needs some more real time, this deterministic scheme needs to be broken up, so that logical time can match real time. My idea was that net objects should give some feedback about whether they can send data 'now' or not. This would provide a solution for such situations. As I said, this applies not only to net objects, but also to file operations with [textfile], [soundfiler] etc., and probably other stuff as well. So those situations arise whenever DSP is turned on and something wants to be executed in 0 logical time that needs more than <audio buffer size> of real time. My guess was that Ivica was describing such a situation.
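To make that concrete, the usual way out is to hand the slow operation to a helper thread, so the deterministic path only does a non-blocking hand-off and gets the "can I do this now?" answer as feedback. A rough C sketch (all names made up; a strict realtime implementation would want a lock-free queue instead of this mutex):

    /* Sketch, all names made up: hand slow work to a helper thread so
       the deterministic (audio) path never waits. A one-slot mailbox
       for brevity; a real implementation would use a proper queue. */
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t todo = PTHREAD_COND_INITIALIZER;
    static char *pending = NULL;      /* job waiting for the worker */

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            char *job;
            pthread_mutex_lock(&lock);
            while (!pending)
                pthread_cond_wait(&todo, &lock);
            job = pending;
            pending = NULL;
            pthread_mutex_unlock(&lock);
            /* ...do the slow thing here: send(), write a file, ... */
            free(job);
        }
        return NULL;
    }

    /* Called from the realtime path: returns immediately and reports
       whether the job was accepted; that is the "can I send data 'now'
       or not" feedback. */
    int try_submit(const char *msg)
    {
        int accepted = 0;
        pthread_mutex_lock(&lock);
        if (!pending) {
            pending = strdup(msg);
            pthread_cond_signal(&todo);
            accepted = 1;
        }
        pthread_mutex_unlock(&lock);
        return accepted;
    }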
> Without knowing what those situations are, I can't say.
I hope I've managed to make myself clear. Excuse my clumsiness in describing the situation.
Roman
On 2010-03-24 00:39, Ivica Ico Bukvic wrote:
> I get nasty xruns (running through jack/linux). I am wondering what is causing this? Isn't netserver running in a separate thread?
use mrpeach's [tcpserver]/[tcpclient].
fmasdr IOhannes
PS: for what it is worth: i have forked mrpeach/net yesterday, with the aim of providing simple (simpler than mrpeach's objects), high-performance networking objects (on my loopback device i was able to do about 600MBit/s read and write with Pd) without all the legacy encumbrances of the originals. (the plan is to simplify the api a little bit)
currently it is still crashy when it comes to disconnecting servers. find it at iem/iemnet
On Wed, 2010-03-24 at 16:13 +0100, IOhannes m zmoelnig wrote:
> use mrpeach's [tcpserver]/[tcpclient].
I guess they suffer from the very same problem that I wrote about in my last post. However, it would still be interesting to see whether they make any difference for Ivica's setup.
> PS: for what it is worth: i have forked mrpeach/net yesterday [...]
Yeah, I saw it in #dataflow. Are you also planning to address the buffer overrun problem?
Roman
>> use mrpeach's [tcpserver]/[tcpclient].
> I guess they suffer from the very same problem that I wrote about in my last post. However, it would still be interesting to see whether they make any difference for Ivica's setup.
I just committed a possible fix: tcpserver stops sending when it can't create any more threads, until it receives an [unblock( message.
I seriously think that you should be using UDP if you want to broadcast. Make your own handshake mechanism in UDP. Don't expect TCP to do things it wasn't made for.
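For what such a hand-rolled handshake could look like, here is a stop-and-wait sketch in C (purely illustrative: the packet format, timeout and retry count are arbitrary, and none of this corresponds to an existing Pd object):

    /* Sketch of a minimal hand-rolled handshake over UDP (stop-and-wait):
       number each datagram and resend until the peer echoes the number
       back. Packet format, timeout and retry count are arbitrary. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    int send_reliably(int fd, const struct sockaddr_in *peer, unsigned seq,
                      const char *payload)
    {
        char pkt[1024], ack[32];
        int n = snprintf(pkt, sizeof(pkt), "%u %s", seq, payload);
        int tries, r;

        for (tries = 0; tries < 5; tries++) {
            fd_set rfds;
            struct timeval tv = {0, 200000};  /* up to 200 ms per try */

            sendto(fd, pkt, n, 0, (const struct sockaddr *)peer,
                   sizeof(*peer));
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0
                && (r = recv(fd, ack, sizeof(ack) - 1, 0)) > 0) {
                ack[r] = '\0';
                if (strtoul(ack, NULL, 10) == seq)
                    return 1;                 /* acknowledged */
            }
        }
        return 0;    /* give up: the peer may be gone, data is dropped */
    }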
Martin
zmoelnig wrote:
> PS: for what it is worth: i have forked mrpeach/net yesterday [...]
> currently it is still crashy when it comes to disconnecting servers. find it at iem/iemnet
If you're going to do that, I think you should change their names, otherwise confusion will reign.
Martin
On 2010-03-24 17:08, martin.peach@sympatico.ca wrote:
>> PS: for what it is worth: i have forked mrpeach/net yesterday [...]
> If you're going to do that, I think you should change their names, otherwise confusion will reign.
the names are just perfect :-)
in the long run, i hope to make them compatible with your original objects. at the same time i don't mind at all if improvements are backported. for now a fork just seemed the best way to allow more rapid development without having to think about legacy issues.
for now things look rather promising: a [tcpserver] that merely reflects incoming data back to a client (currently a "netcat" instance) over a real wire took about 1.4min to reflect 500MB of data, whereas the original objects took about 53.5min (no manual tuning of buffer sizes in any test)
fmadrs IOhannes
On Mar 25, 2010, at 4:26 AM, IOhannes m zmoelnig wrote:
>>> PS: for what it is worth: i have forked mrpeach/net yesterday, with the aim [...]
>> If you're going to do that, I think you should change their names, otherwise confusion will reign.
> the names are just perfect :-)
Perhaps, but they are also taken. I think it's bad form to reuse the name if you plan on maintaining them as separate objects. That's certainly one topic we've discussed to death back in the day. If it's a dev branch, then no, but that should then be a branch in SVN.
.hc
Hi IOhannes
I'm having trouble compiling the iemnet external tcpsend:
$ make
cc -DPD -I../../../pd/src -Wall -W -g -fPIC -O6 -funroll-loops -fomit-frame-pointer -o "tcpsend.o" -c "tcpsend.c"
In file included from tcpsend.c:26:
iemnet.h: In function ‘debug_dummy’:
iemnet.h:117: warning: unused parameter ‘format’
tcpsend.c: In function ‘tcpsend_connect’:
tcpsend.c:75: warning: implicit declaration of function ‘fprintf’
tcpsend.c:75: warning: incompatible implicit declaration of built-in function ‘fprintf’
tcpsend.c:75: error: ‘stderr’ undeclared (first use in this function)
tcpsend.c:75: error: (Each undeclared identifier is reported only once
tcpsend.c:75: error: for each function it appears in.)
tcpsend.c: In function ‘tcpsend_send’:
tcpsend.c:116: warning: unused parameter ‘s’
make: *** [tcpsend.o] Error 1
Am I doing something wrong?
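(For what it's worth, the 'implicit declaration of fprintf' and 'stderr undeclared' messages look like the classic symptom of a missing header, i.e. tcpsend.c would simply need

    #include <stdio.h>   /* declares fprintf() and stderr */

near the top. But maybe I'm misreading it.)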
Roman
Roman Haefeli wrote:
> I'm having trouble compiling the iemnet external tcpsend: [...]
it should compile now. however, i have introduced a performance hog due to synchronization issues (should be fixed during the weekend)
gmsd IOhannes
On Fri, 2010-03-26 at 15:34 +0100, IOhannes m zmölnig wrote:
> it should compile now. however, i have introduced a performance hog due to synchronization issues (should be fixed during the weekend)
It compiled, thanks.
From what I understood of the talk in #dataflow, I thought that there is some feedback about the sender queue size whenever you send a message with [iemnet/tcpserver]. However, I only see the old message 'sent 1 15 8', which seems to reflect neither the actual queue size nor the actual number of bytes already transmitted. Did I misunderstand something?
Roman
On Sun, 2010-03-28 at 15:57 +0200, Roman Haefeli wrote:
> From what I understood of the talk in #dataflow, I thought that there is some feedback about the sender queue size whenever you send a message with [iemnet/tcpserver].
Which seems to be the case.
> However, I only see the old message 'sent 1 15 8', which seems to reflect neither the actual queue size nor the actual number of bytes already transmitted.
Sorry for the noise. Actually, the second number _does_ reflect the size of the current buffer. I was surprised to see this message appear in 0 logical time after sending something, but then it is only logical that the queue holds <number-of-bytes> right after <number-of-bytes> have been sent. A quick test, sending some messages to another box and then pulling the ethernet plug from that box, revealed that the second number of the 'sent' message does indeed rise when sending more messages.
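Incidentally, at the C level Linux will answer exactly that question directly: tcp(7) documents an ioctl that reports how much data is still queued for sending. A sketch (Linux-specific):

    /* Sketch (Linux-specific): ask the kernel how many bytes are still
       sitting unsent in a TCP socket's send queue; the low-level
       counterpart of the queue size in the 'sent' message. */
    #include <linux/sockios.h>
    #include <sys/ioctl.h>

    int bytes_still_queued(int fd)
    {
        int outq = 0;
        if (ioctl(fd, SIOCOUTQ, &outq) < 0)
            return -1;   /* not available */
        return outq;     /* 0 means the send buffer has fully drained */
    }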
I actually should be quiet, because I haven't done any further testing, but this really excites me, since it finally makes it possible to create patches that use the network _without_ interrupting audio.
That's a few small lines of code for a man, one giant leap for the Pd community.
Roman
Ok, I have now tried working with the tcpserver/client model and am unable to solve the following problem:
when a tcpclient (or server) outputs a long string, it splits it for some reason into two separate lines (perhaps that is how the buffer wraps around?). Adjusting the buf size makes no difference. I tried using bytes2any and any2bytes, but those objects are unable to join lines that have been separated while being sent from one object to another (even though they were separated at a point where no separator had been added to the stream). What am I missing?
BTW, I am using the 0.42.5-extended version of these objects. On a sidenote, in this version tcpserver apparently outputs out of order, with the third outlet firing before the fourth (instead of the other way around).
Best wishes,
Ico
Ivica Ico Bukvic wrote:
> when a tcpclient (or server) outputs a long string, it splits it for some reason into two separate lines (perhaps that is how the buffer wraps around?)
the reason is that tcp/ip works like this. it is a stream-based protocol, that is: message-agnostic. the bytes are guaranteed to arrive one after the other, but there is no delimiter between them. if you need a delimiter, you have to transmit one yourself (FUDI uses ";" as a message delimiter; OSC-over-TCP transmits the message length, so the receiver can repartition the stream accordingly).
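for illustration, the repartitioning for the FUDI-style ";" case could look like this in C (a sketch, not the actual code of any object):

    /* sketch: reassemble ";"-delimited messages from a TCP byte stream.
       recv() may deliver half a message or several at once; buffer the
       bytes, scan for the delimiter, keep the partial tail for later. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void read_messages(int fd)
    {
        static char buf[4096];
        static size_t used = 0;
        char *start = buf, *semi;
        ssize_t n = recv(fd, buf + used, sizeof(buf) - used - 1, 0);

        if (n <= 0)
            return;               /* connection closed, or error */
        used += (size_t)n;
        buf[used] = '\0';

        while ((semi = strchr(start, ';')) != NULL) {
            *semi = '\0';
            printf("complete message: %s\n", start);  /* hand upward */
            start = semi + 1;
        }
        used = strlen(start);     /* length of the partial tail */
        memmove(buf, start, used + 1);
    }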
fg,sdf IOhannes