Speaking of tcpserver.c, isn't the tcpserver_notify() function making an unsafe outlet_float() call? It is triggered by an external event (a disconnection) that can happen out of order, potentially causing crashes.
Shouldn't this also be wrapped in a clock_delay()?
Best wishes,
Ico
Ivica Ico Bukvic wrote:
Speaking of tcpserver.c, isn't the tcpserver_notify() function making an unsafe outlet_float() call? It is triggered by an external event (a disconnection) that can happen out of order, potentially causing crashes.
Shouldn't this also be wrapped in a clock_delay()?
No, it's called from a poll routine that Pd calls at a safe time. The disconnect happens asynchronously but Pd checks for activity on that socket with a select() call, which, if it detects a disconnect, results in tcpserver_notify() being called.
Martin
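For readers following along, here is a minimal sketch of the pattern Martin describes, with hypothetical names (my_poll(), my_notify()); the actual tcpserver.c code differs. A poll routine that Pd calls at a safe time checks the socket with a non-blocking select(), so the notify outlet fires from inside the scheduler rather than from an asynchronous event:

    /* Sketch only: my_poll() and my_notify() are illustrative names,
       not the actual tcpserver.c functions. */
    #include <sys/select.h>
    #include <sys/socket.h>

    static void my_notify(void *x)
    {
        /* Called synchronously from the poll routine, so outlet calls
           such as outlet_float() are safe here: we are inside Pd's
           scheduler, not in a foreign thread. */
    }

    static void my_poll(void *x, int sockfd)
    {
        fd_set readset;
        struct timeval timeout = {0, 0};  /* zero timeout: never block Pd */
        char c;

        FD_ZERO(&readset);
        FD_SET(sockfd, &readset);
        if (select(sockfd + 1, &readset, NULL, NULL, &timeout) > 0)
        {
            /* a readable socket on which recv() returns 0 means the
               peer has closed the connection */
            if (recv(sockfd, &c, 1, MSG_PEEK) == 0)
                my_notify(x);
        }
    }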
Martin,
I am looking into further improving tcpserver/tcpclient, as we've encountered a number of issues while using it with L2Ork. I am aware that we are currently using a somewhat older version (0.42.5) plus the patches I forwarded to you, so please take this into account when reading the following observations.
There are 3 issues I can think of off the top of my head:
1) when closing the patch with clients connected I get a stuck port on a regular basis, even though, as far as I could see in your source, you've specified the flag that should allow the port to be freed near instantaneously provided it is stuck in TIME_WAIT mode, namely:
933: if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, 0, 0) < 0) post("setsockopt failed\n");
On Ubuntu 9.10 one has to wait 60 seconds (the default system setting) for the port to be freed. I've tried customizing the port timeout system-wide, but that had no effect, which appears to be an Ubuntu issue. Still, I am trying to figure out why the external is not closing the port cleanly. The end result is that I effectively have to wait up to 60 seconds between different pieces in order to be able to spawn a new conductor instance (basically a focal intercommunication patch that deals with time-sensitive control data). The following link suggests that perhaps the socket is stuck in a different mode upon close, which would make the aforesaid code irrelevant and suggests that the destructor may not be doing things quite right: http://www.unixguide.net/network/socketfaq/4.5.shtml
2) when using the broadcast option, it *consistently* causes audio xruns even though your iteration appears to be heavily threaded. This has been such a large problem that I had to design a coll-based workaround that keeps count of active connections and their associated sockets and then uses a metro to dispatch messages to individual clients at 5ms intervals, prefixing them with the appropriate socket number until the coll list is exhausted. This is a terrible solution if one wishes to keep network jitter between clients to a minimum (actually it is a terrible solution no matter what), as in a 16-member ensemble the last member can receive a message as late as 16*5 = 80ms after the first (plus, obviously, inherent network latency). Anything less than 5ms between sends yields unstable results.
3) at times tcpserver misreports the number of connections and tends to crash, almost as if it fails to properly detect a disconnection.
Any ideas what might be the problem and whether any of these problems have been addressed in the latest version?
Best wishes,
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound & Intermedia Studio
Director, L2Ork Linux Laptop Orchestra
Assistant Co-Director, CCTAD
CHCI, CS, and Art (by courtesy)
Virginia Tech Dept. of Music - 0240
Blacksburg, VA 24061
(540) 231-6139
(540) 231-5034 (fax)
ico@vt.edu
http://www.music.vt.edu/faculty/bukvic/
Ivica Ico Bukvic wrote:
Martin,
I am looking into further improving tcpserver/tcpclient, as we've encountered a number of issues while using it with L2Ork. I am aware that we are currently using a somewhat older version (0.42.5) plus the patches I forwarded to you, so please take this into account when reading the following observations.
There are 3 issues I can think of off the top of my head:
- when closing the patch with clients connected I get a stuck port on a regular basis, even though, as far as I could see in your source, you've specified the flag that should allow the port to be freed near instantaneously provided it is stuck in TIME_WAIT mode, namely:
933: if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, 0, 0) < 0) post("setsockopt failed\n");
On Ubuntu 9.10 one has to wait 60 seconds (the default system setting) for the port to be freed. I've tried customizing the port timeout system-wide, but that had no effect, which appears to be an Ubuntu issue. Still, I am trying to figure out why the external is not closing the port cleanly. The end result is that I effectively have to wait up to 60 seconds between different pieces in order to be able to spawn a new conductor instance (basically a focal intercommunication patch that deals with time-sensitive control data). The following link suggests that perhaps the socket is stuck in a different mode upon close, which would make the aforesaid code irrelevant and suggests that the destructor may not be doing things quite right: http://www.unixguide.net/network/socketfaq/4.5.shtml
OK, thanks for finding that. I fixed it in svn; as written above, the call will not set SO_REUSEADDR because the option-value parameter is false (0). It was also #ifdef'd out except on IRIX. You should try an autobuild from tomorrow.
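For reference, a minimal sketch of a corrected call (assuming the usual BSD sockets API; the exact svn fix may differ): SO_REUSEADDR needs a pointer to a non-zero integer option value and its length, rather than (0, 0):

    /* the option value must be a non-zero int passed by address;
       the (char *) cast also keeps the Windows prototype happy */
    int optVal = 1;
    if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR,
            (char *)&optVal, sizeof(optVal)) < 0)
        post("setsockopt (SO_REUSEADDR) failed");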
- when using the broadcast option, it *consistently* causes audio xruns even though your iteration appears to be heavily threaded. This has been such a large problem that I had to design a coll-based workaround that keeps count of active connections and their associated sockets and then uses a metro to dispatch messages to individual clients at 5ms intervals, prefixing them with the appropriate socket number until the coll list is exhausted. This is a terrible solution if one wishes to keep network jitter between clients to a minimum (actually it is a terrible solution no matter what), as in a 16-member ensemble the last member can receive a message as late as 16*5 = 80ms after the first (plus, obviously, inherent network latency). Anything less than 5ms between sends yields unstable results.
Yes, that's because in tcp broadcast it sends individual packets to each client, which can eat up time. I suggest using udp for data and tcp for control, as udp broadcast will send one packet to everyone on the subnet.
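For illustration, a minimal sketch (not the external's actual code) of the one-packet udp broadcast Martin means, assuming an example port 9999 and a subnet broadcast address of 192.168.1.255:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* send one datagram that every listener on the subnet receives */
    int send_udp_broadcast(const char *msg, size_t len)
    {
        struct sockaddr_in addr;
        int on = 1, n;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) return -1;
        /* SO_BROADCAST must be enabled before sending to a broadcast address */
        if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, (char *)&on, sizeof(on)) < 0)
        {
            close(fd);
            return -1;
        }
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);                       /* example port */
        addr.sin_addr.s_addr = inet_addr("192.168.1.255"); /* example broadcast */
        n = (int)sendto(fd, msg, len, 0, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return n;
    }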
- at times tcpserver misreports the number of connections and tends to crash, almost as if it fails to properly detect a disconnection.
I think that was fixed a few weeks ago.
Any ideas what might be the problem and whether any of these problems have been addressed in the latest version?
Not sure which version you have, but try it now: http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/tcpserver.c?view=log
Martin
OK, thanks for finding that. I fixed it in svn; as written above, the call will not set SO_REUSEADDR because the option-value parameter is false (0). It was also #ifdef'd out except on IRIX. You should try an autobuild from tomorrow.
I'll definitely try it out, but I am not convinced this is the problem, as my system-wide port timeout adjustments had no effect, suggesting that the port was stuck in a mode other than TIME_WAIT. Could it be something wrong with the destructor?
Yes, that's because in tcp broadcast it sends individual packets to each client, which can eat up time. I suggest using udp for data and tcp for control, as udp broadcast will send one packet to everyone on the subnet.
I understand that, but we cannot use UDP as it at times does not go through when there is a lot of traffic (and we do use UDP for non-critical monitoring in addition to TCP). The last thing I want is someone missing a critical cue due to a dropped UDP packet (which in our case happens a lot--before we switched to TCP I had to press the button for the same cue 3-4 times before it was actually received by everyone).
Couldn't the broadcast command spawn a separate thread that then services all clients to avoid xruns?
Not sure which version you have, but try it now: http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/tcpserver.c?view=log
Will do.
Many thanks for your help!
Best wishes,
Ico
Ivica Ico Bukvic wrote:
Couldn't the broadcast command spawn a separate thread that then services all clients to avoid xruns?
It might be better just to have it send the same buffer instead of repeatedly parsing the same input for each send (unless it's the thread creation itself that causes glitches). I'll look into it.
Martin
Martin,
Just tried the latest version and the problem with the stuck port persists, suggesting that either Ubuntu is doing something funny or the destructor is to blame.
A simple test to perform is to use attached test patch:
1) Connect the client
2) Close the patch without explicitly disconnecting the client
3) Reopen the patch; the tcpserver will fail to be created until the timeout has taken place (on my machine, 60 seconds)
Console output in this case is:
tcpserver listening on port 9999
tcpclient 2010 Martin Peach-style
tcpclient: connecting socket 8 to port 9999
tcpserver: accepted connection from 127.0.0.1 on socket 9
tcp_server_free...
...tcp_server_free
tcpclient_free...
tcpclient: disconnected
...tcpclient_free
However, if one explicitly disconnects before closing the patch, that works just fine:
tcpserver listening on port 9999
tcpclient 2010 Martin Peach-style
tcpclient: connecting socket 8 to port 9999
tcpserver: accepted connection from 127.0.0.1 on socket 9
tcpclient: disconnected
tcpserver: connection closed on socket 9
tcpserver: "127.0.0.1" removed from list of clients
tcp_server_free...
...tcp_server_free
tcpclient_free...
tcpclient: not connected
...tcpclient_free
Notice how in the first case there is never a "disconnect" taking place. Hence the problem.
In a production environment it would be silly to require everyone to explicitly disconnect when closing the patch. IMO these things should happen automagically when closing the patch.
Best wishes,
Ico
Ivica Ico Bukvic wrote:
Martin,
Just tried the latest version and the problem with the stuck port persists, suggesting that either Ubuntu is doing something funny or the destructor is to blame.
if you quit Pd, the destructors are not properly called for externals. this is a rather serious issue, reported on the sf-tracker some years(?) ago, but a fix never made it into Pd proper.
fgmasr IOhannes
Ivica Ico Bukvic wrote:
Martin,
Just tried the latest version and the problem with the stuck port persists, suggesting that either Ubuntu is doing something funny or the destructor is to blame.
if you quit Pd, the destructors are not properly called for externals. this is a rather serious issue, reported on the sf-tracker some years(?) ago, but a fix never made it into Pd proper.
I am not quitting Pd but only closing the patch (not sure if that is the same thing in this respect). If so, is there a fix so we could at least commit it on our end?
Best wishes,
Ico
if you quit Pd, the destructors are not properly called for externals. this is a rather serious issue, reported on the sf-tracker some years(?) ago, but a fix never made it into Pd proper.
Also, in this case I don't think this is the problem, as the verbose output shows that the free function was called properly on both client and server; it is just that the server in its current state is apparently unable to free the port if it still has clients connected to it.
Ico
Ivica Ico Bukvic wrote:
Martin,
Just tried the latest version and the problem with the stuck port persists, suggesting that either Ubuntu is doing something funny or the destructor is to blame.
A simple test to perform is to use attached test patch:
- Connect the client
- Close the patch without explicitly disconnecting the client
- Reopen the patch; the tcpserver will fail to be created until the timeout has taken place (on my machine, 60 seconds)
Hmmm, I just tried this on WinXP and it works fine (Pd version 0.42.5-extended-20100411 but with the new tcpserver). I can even open multiple copies of the patch without errors or failure to create. I'll check later on a debian machine.
Martin
Hmmm, I just tried this on WinXP and it works fine (Pd version 0.42.5-extended-20100411 but with the new tcpserver). I can even open multiple copies of the patch without errors or failure to create. I'll check later on a debian machine.
Martin
I just noticed you patched the source earlier today to fix the stuck open socket--many thanks for doing this! It appears Linux does not like the use of "char" for optVal in the setsockopt call and prefers int. Since the rest of the code treats optVal as an int anyhow, this should provide a universal fix without having to also generate optLen or ifdefs. Can you test this on Windows as an alternative, as that would make the code cleaner? In other words, keep the old call and simply change optVal to int as in the current version.
Best wishes,
Ico
Ico wrote:
I just noticed you patched the source earlier today to fix the stuck open socket--many thanks for doing this! It appears Linux does not like the use of "char" for optVal in the setsockopt call and prefers int. Since the rest of the code treats optVal as an int anyhow, this should provide a universal fix without having to also generate optLen or ifdefs. Can you test this on Windows as an alternative, as that would make the code cleaner? In other words, keep the old call and simply change optVal to int as in the current version.
That's basically what I did; I just made that call the same as the other setsockopt calls in the code (which all work on Windows as well, AFAIK). For some reason that one was in a different style; it's all cut and pasted from different sources... Today I'm on Debian and it works. Tonight I will test Windows.
Martin
OK, two more bugs, the first one being a show-stopper that at times totally freezes Pd.
When a tcpclient is connected and the tcpserver then mysteriously disappears (e.g. it is cut from the patch, or the remote machine crashes), tcpclient properly reports the disconnect. However, if one connects tcpclient and then redundantly presses connect one more time (getting an "error: already connected" message), and the tcpserver disappears at this point, tcpclient does not properly report the disconnection and continues to believe it is still connected. Subsequent connects may then freeze Pd at random times.
When running in -rt mode with jack, clicking connect twice in a row sometimes freezes Pd as well.
Seems to me that whatever it is, it has to do with the connect/disconnect aspect of tcpclient.
=======================================================================
The second (minor?) bug is that when tcpserver mysteriously disappears (e.g. a remote machine crashes or whatever), a tcpclient that was connected to it does not properly report that it has lost the connection; moreover, if one then clicks disconnect, it still reports as if one had managed to disconnect rather than saying "not connected."
Both of these issues happen on Ubuntu/Linux.
Best wishes,
Ico
please see attached.
tcpclient was spawning a new thread every time you pressed connect, regardless of whether you were connected or not. Hence, crashes.
Still need to test second bug.
Ico
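For the record, a sketch of the kind of guard the attached patch presumably adds (field and function names are simplified stand-ins, not the literal diff): track the connection state and refuse to spawn a second connect thread:

    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative only: simplified stand-in for tcpclient's state */
    typedef struct _tcpclient
    {
        int       x_fd;           /* -1 when not connected */
        int       x_connecting;   /* 1 while a connect thread is alive */
        pthread_t x_connectthread;
    } t_tcpclient;

    static void *tcpclient_child_connect(void *arg)
    {
        t_tcpclient *x = (t_tcpclient *)arg;
        /* ...blocking connect() would happen here, setting x->x_fd... */
        x->x_connecting = 0;
        return NULL;
    }

    static void tcpclient_connect(t_tcpclient *x)
    {
        /* the fix: never spawn a second thread while connected
           or while a connect is already in progress */
        if (x->x_connecting || x->x_fd >= 0)
        {
            fprintf(stderr, "tcpclient: already connected\n");
            return;
        }
        x->x_connecting = 1;
        pthread_create(&x->x_connectthread, NULL, tcpclient_child_connect, x);
    }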
This apparently also fixes the second bug. Yay!
On Wed, 2010-05-05 at 18:15 -0400, Ivica Ico Bukvic wrote:
please see attached.
tcpclient was spawning a new thread every time you pressed connect, regardless of whether you were connected or not. Hence, crashes.
Still need to test second bug.
Ico
Attached is the tcpserver patch that threads the broadcast call and does the parsing only once, thus ostensibly offering improved performance and xrun-free operation.
Please note that I've not tested this with broadcasting a file (symbol), but it should work.
AFAIK everything in tcpserver/client now works on Linux, and does so stably.
Many thanks to all for their help in facilitating this patch!
Best wishes,
Ico
Martin,
If we could somehow wrap the broadcast call for the tcpserver in a separate thread and minimize parsing redundancies to make it xrun-proof, I think this object would be absolutely perfect for our needs. I am more than happy to assist in this process if you could please send me a brief flow of the functions "broadcast" calls, so that I can figure out where to look for potential fixes/improvements.
Many thanks!
Best wishes,
Ico
Ivica Ico Bukvic wrote:
Martin,
...
please send me a brief flow of the functions "broadcast" calls, so that I can figure out where to look for potential fixes/improvements.
Well, broadcast calls tcpserver_send_bytes() once for each connected client. tcpserver_send_bytes() fills a buffer from the input message and spawns a thread running tcpserver_send_buf_thread() to send the contents whenever the buffer is full or the message is completely converted from atoms to bytes.
I think as long as your messages all fit in a single buffer it's a lot easier. The snag is that tcpserver_send_bytes() can fill the buffer more than once per message if the message is > 64k. Apart from that, to skip the redundant atom-to-byte conversion and the multiple thread creation, I would make a single buffer from the input and pass it to a single thread that sends the same buffer to each connected client. Since the current tcpserver_send_bytes() can also be used to send very long files, it may be better to make a dedicated broadcast function that only handles small packets (less than 65536 bytes), as broadcasting megabytes to many clients will no doubt peg the machine.
Martin
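A hedged sketch of that single-buffer approach (illustrative names and struct fields, not Martin's actual code): convert the atoms to bytes once, then hand one job to one thread that sends the same buffer to every client:

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>

    #define MAX_CLIENTS 32

    /* one job per broadcast: the buffer is filled once from the atom
       list (at most 65536 bytes, per the dedicated-broadcast idea) */
    typedef struct _broadcast_job
    {
        int    sockets[MAX_CLIENTS];  /* copied while Pd owns the client list */
        int    nclients;
        size_t len;
        char   buf[65536];
    } t_broadcast_job;

    static void *broadcast_thread(void *arg)
    {
        t_broadcast_job *job = (t_broadcast_job *)arg;
        int i;

        /* one conversion, many sends: the same buffer goes to each client */
        for (i = 0; i < job->nclients; i++)
            send(job->sockets[i], job->buf, job->len, 0);
        free(job);
        return NULL;
    }

    /* caller malloc()s a job, fills buf/len/sockets once, then: */
    static void broadcast_start(t_broadcast_job *job)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, broadcast_thread, job);
        pthread_detach(tid);  /* fire and forget; no join needed */
    }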
Hi all
Sorry to barge in without being involved in the matter myself, but I remember that IOhannes was also working on addressing some issues with [tcpserver]. His results are in svn/externals/iem/iemnet. I haven't thoroughly tested whether those versions already solve the issues Ivica was posting about. However, I thought it worth mentioning them as well, so that forces might be joined. I have the impression that similar problems are being worked on twice, on two completely different ends.
Roman
On Wed, 2010-05-05 at 23:57 -0400, Martin Peach wrote:
Ivica Ico Bukvic wrote:
Martin,
...
please send me a brief flow of the functions "broadcast" calls, so that I can figure out where to look for potential fixes/improvements.
Well, broadcast calls tcpserver_send_bytes() once for each connected client. tcpserver_send_bytes() fills a buffer from the input message and spawns a thread running tcpserver_send_buf_thread() to send the contents whenever the buffer is full or the message is completely converted from atoms to bytes.
I think as long as your messages all fit in a single buffer it's a lot easier. The snag is that tcpserver_send_bytes() can fill the buffer more than once per message if the message is > 64k. Apart from that, to skip the redundant atom-to-byte conversion and the multiple thread creation, I would make a single buffer from the input and pass it to a single thread that sends the same buffer to each connected client. Since the current tcpserver_send_bytes() can also be used to send very long files, it may be better to make a dedicated broadcast function that only handles small packets (less than 65536 bytes), as broadcasting megabytes to many clients will no doubt peg the machine.
Martin
Roman Haefeli wrote:
Hi all
posting about. However, I thought it worth mentioning them as well, so that forces might be joined. I have the impression that similar problems are being
sure
worked on twice, on two completely different ends.
that was the reason for forking mrpeach in the 1st place: to be able to work on "completely different ends" without having clashes. at some point, we should see how far the different ends go, how they overlap, and whether we can/should merge again.
fgasdr IOhannes
IOhannes,
I tried your version of tcpserver/client and unfortunately it shows no improvement in our performance tests (16 wirelessly networked machines), with latency spikes sometimes as high as 1-2 seconds. In addition, your iteration of tcpserver/client suffers from the bugs that were fixed in Martin's version, namely misreporting the number of connected clients, crashes, xruns when broadcasting, and stale sockets following a crash.
Could it be that this has something to do with how Pd deals with networking, or are these externals completely independent of Pd's threading (given that they spawn their own threads for pretty much everything)?
Alternatively, this may simply be an issue with the cheap wireless cards built into our netbooks and/or driver issues.
Best wishes,
Ico
ola
On 2010-06-07 23:09, Ivica Ico Bukvic wrote:
IOhannes,
I tried your version of tcpserver/client and unfortunately it shows no improvement in our performance tests (16 wirelessly networked machines), with latency spikes sometimes as high as 1-2 seconds. In addition, your iteration of tcpserver/client suffers from the bugs that were fixed in Martin's version, namely misreporting the number of connected clients, crashes, xruns when broadcasting, and stale sockets following a crash.
thanks for testing.
good to know that i cannot magically fix all problems in the world.
Could it be that this has something to do with how Pd deals with networking, or are these externals completely independent of Pd's threading (given that they spawn their own threads for pretty much everything)?
they are entirely independent from Pd's networking and threading infrastructure.
Alternatively, this may simply be an issue with the cheap wireless cards built into our netbooks and/or driver issues.
aye. it's always easy to blame the hardware.
as i understand it, miller's, martin's and my code (at least the old one, before i forked) all just fill up the socket's buffer and trust that the operating system will distribute the packets in an "ideal" way. martin's and my approaches try to minimize the impact this has on the Pd process (with different degrees of success - see "xruns")
mfgasdr IOhannes