Thanks all for your replies. Please see my comments below.
> If you are "broadcasting" in TCP you are actually sending separate messages to each recipient, with the OS buffering each one until it has been acknowledged by the recipient. Obviously it's easy to do a DoS attack this way even on your own machine simply by sending faster than the receiver can process the packets.
The problem is that there is no way the messages we have been sending could cross the 4096-byte threshold. I am basically sending between 16 and 27 lines from the coll object using the dump option. The catch is that they all arrive at the netserver at the same time, which causes DIO errors. Each line is no more than perhaps a dozen or so characters long (in most cases fewer, and in very few perhaps a couple of characters more), so as I said, I cannot imagine this crossing the 4096-byte threshold (which, BTW, is defined in maxlib's netclient/netserver; I am not sure whether that is part of the TCP standard or a local definition).
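Just to put numbers on it (my own back-of-the-envelope arithmetic, assuming ~15 characters per line plus a 2-byte message terminator, which is a guess rather than a measurement):

    # Worst-case size of a single coll dump under the assumptions above
    lines = 27
    chars_per_line = 15
    terminator = 2                                # e.g. ";\n" per message
    total = lines * (chars_per_line + terminator)
    print(total, "bytes vs. a 4096-byte buffer")  # 459 bytes, ~11% of 4096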
Following Iohannes's suggestion, iterating through the coll using a metro with a 10ms cycle (so that each message goes out in succession with 10ms delays), instead of using dump, which sends them all at once, abates (or at least minimizes) the problem.
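For anyone curious what that workaround amounts to outside of Pd, here is a minimal Python sketch of the same idea (the host, port, and cue strings are made up; in the patch it is of course a metro banging the coll):

    import socket, time

    HOST, PORT = "192.168.1.10", 3000      # hypothetical netserver address
    cues = ["cue 1;", "cue 2;", "cue 3;"]  # stand-ins for the coll lines

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    for cue in cues:
        s.sendall((cue + "\n").encode("ascii"))  # one small message at a time
        time.sleep(0.01)                         # 10 ms gap, like the metro
    s.close()

The point being simply that spacing out the writes gives the receiver time to drain its buffer between messages.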
The issue is that all the clients also send data back to the netserver, so it is not only a matter of broadcasting but also of receiving, and I think this may have been part of the problem.
> Broadcasting in UDP sends a single packet to a single address that the router sends to every machine on the subnet. The OS discards the buffer as soon as it is sent, so you can't overload the stack, although you can always peg the CPU trying.
I understand this. The reason we went with TCP is because in L2Ork (Linux Laptop Orchestra) we currently have up to 15 performers, all networked. We send each performer's current status to the UDP broadcast address (x.x.x.255), since clients send it every 10th of a second and losing a packet or two is not an issue. However, broadcasting a critical cue or coll data that must not be lost, as it is critical to the performance, is something we simply could not afford to do over UDP. More precisely, we tried it in the past, and I found myself having to send a cue 3-4 times in a row to ensure everyone got it. This is why we need both approaches.
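For reference, our status broadcast boils down to something like the following Python sketch (the address, port, and message format here are placeholders, not our actual values):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # permit x.x.x.255
    # One packet goes out; the network delivers it to every machine on the
    # subnet. Nothing is retransmitted, so an occasional lost status is fine.
    sock.sendto(b"status player1 0.5;", ("192.168.1.255", 3001))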
So, what I have learned so far is that:
netclient/netserver is quite susceptible to network traffic, much more so than other TCP/IP implementations. In our myu library, one can stream ~25 fps of 1024x768 RGBA floating-point matrices via TCP/IP with no hold-up, albeit with considerable CPU overhead, so the 4096-byte bottleneck seems quite arbitrary IMHO.
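To put rough numbers on that comparison (my own arithmetic, assuming 4 bytes per float and no compression):

    # Throughput myu sustains over TCP/IP vs. the netclient threshold
    w, h, channels, bytes_per_float, fps = 1024, 768, 4, 4, 25
    per_frame = w * h * channels * bytes_per_float  # 12,582,912 bytes per matrix
    per_second = per_frame * fps                    # ~315 MB/s sustained
    print(per_second / 4096)                        # ~76,800x the 4096-byte buffer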
I discovered a bug in maxlib/netclient where its connect outlet (1) does not report disconnections as expected, rendering that outlet useless. I will forward a patch shortly.
So far, the best results have come from combining UDP for non-time-critical events with TCP netclient/netserver (which we chose over netsend/netreceive out of convenience, since we have so many clients to connect to) for time-critical cues (e.g. conductor events), *in conjunction* with manually iterating messages out of the coll in 10ms increments so as not to swamp netclient with one large incoming stream (a limitation that still sounds awfully dubious when you consider the aforementioned myu library).
Best wishes,
Ico