Ivica Ico Bukvic wrote:
Martin,
I am looking into further improving tcpserver/tcpclient, as we've encountered a number of issues while using them with L2Ork. I am aware that we are currently using a somewhat older version (0.42.5) plus the patches I forwarded to you, so please take this into account when reading the following observations.
There are 3 issues I can think of off the top of my head:
- when closing the patch with clients still connected, I regularly get a stuck port, even though, as far as I can see in your source, you've specified the flag that should allow the port to be freed near-instantaneously when it is stuck in TIME_WAIT, namely:
933: if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, 0, 0) < 0) post("setsockopt failed\n");
On Ubuntu 9.10 one has to wait 60 seconds (the default system setting) for the port to be freed. I've tried customizing the port timeout system-wide, but that had no effect, which appears to be an Ubuntu issue. Still, I am trying to figure out why the external is not closing the port cleanly. The end result is that I effectively have to wait up to 60 seconds between pieces before I can spawn a new conductor instance (basically a focal intercommunication patch that deals with time-sensitive control data). The following link suggests that the socket may be stuck in a different state upon close, which would make the aforementioned code irrelevant and suggests that the destructor may not be doing things quite right: http://www.unixguide.net/network/socketfaq/4.5.shtml
OK, thanks for finding that. I fixed it in svn; as written above it will never set SO_REUSEADDR, because the option value parameter is false (0). Also, it was #ifdeffed out except on IRIX. You should try an autobuild from tomorrow.
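For reference, the corrected call looks something like this (a minimal sketch; enable_reuseaddr is just an illustrative wrapper name, and it needs to run before bind()):

    #include "m_pd.h"        /* for post(), as in the original code */
    #include <sys/socket.h>

    /* Enable SO_REUSEADDR so a restarted server can rebind the port
       immediately, even while old connections sit in TIME_WAIT.
       The option value must point to a nonzero int; passing 0/NULL,
       as in the old code, leaves the option disabled. */
    static int enable_reuseaddr(int sockfd)
    {
        int on = 1;

        if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR,
            (const void *)&on, sizeof(on)) < 0)
        {
            post("setsockopt (SO_REUSEADDR) failed\n");
            return -1;
        }
        return 0;
    }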
- when using the broadcast option, it *consistently* causes audio xruns even though your implementation appears to be heavily threaded. This has been such a large problem that I had to design a coll-based iteration that keeps count of active connections and their associated sockets, and then uses a metro to dispatch messages to individual clients at 5 ms intervals, prefixing each message with the appropriate socket number until the coll list is exhausted. This is a terrible solution if one wishes to keep network jitter between clients to a minimum (actually it is a terrible solution no matter what), as in a 16-member ensemble the last member can receive a message as late as 16*5 = 80 ms after the first (plus, obviously, the inherent network latency). Anything less than 5 ms between sends yields unstable results.
Yes, that's because a tcp broadcast sends individual packets to each client, which can eat up time. I suggest using udp for the data and tcp for control, as a udp broadcast sends one packet to everyone on the subnet.
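As a plain C sketch of the mechanism (the port, broadcast address, and message here are placeholders you'd replace with your subnet's values):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send one datagram to every host on the subnet via the
       broadcast address. SO_BROADCAST must be enabled first. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        struct sockaddr_in dest;
        const char msg[] = "tempo 120;"; /* illustrative payload */

        if (fd < 0) { perror("socket"); return 1; }
        if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) < 0)
        {
            perror("setsockopt (SO_BROADCAST)");
            return 1;
        }
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9997);                       /* placeholder port */
        dest.sin_addr.s_addr = inet_addr("192.168.1.255"); /* subnet broadcast */

        if (sendto(fd, msg, sizeof(msg), 0,
                   (struct sockaddr *)&dest, sizeof(dest)) < 0)
            perror("sendto");
        close(fd);
        return 0;
    }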
- at times tcpserver misreports the number of connections and tends to crash, almost as if it fails to properly detect disconnections.
I think that was fixed a few weeks ago.
Any ideas as to what might be the problem, and whether any of these problems have been addressed in the latest version?
Not sure which version you have, but try it now: http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeac...
Martin