reduzierer wrote:
From what I know, there is an internal buffer of ~4kB for the sending
sockets in both netclient and netserver (I can't recall whether this buffer is built into the externals or is part of the OS's network subsystem). If that limit is hit, the Pd process blocks until that buffer is emptied again (which obviously leads to audio drop-outs).
IMHO this is a design flaw that all sending net-externals suffer from. There is no way to check the state of this buffer, which you would need in order to avoid running into the limit. It would be enough to be notified when the buffer is completely empty, so that you could design your patch to send the next message only once the previous one is through.
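For illustration, there is at least a coarse way to test this at the C level, assuming a plain POSIX TCP socket rather than whatever netclient/netserver keep internally: select() with a zero timeout reports whether the socket would currently accept more data without blocking. "Writable" only means some room is free, not necessarily enough for a whole message, and the helper name below is made up for the example.

/* Sketch: poll a TCP socket for writability without waiting.
 * Assumes a plain POSIX socket descriptor 'fd'. */
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

/* Returns 1 if send() would not block right now, 0 if the send
 * buffer is currently full, -1 on error. */
int send_would_not_block(int fd)
{
    fd_set wfds;
    struct timeval tv = {0, 0};   /* zero timeout: poll, don't wait */

    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    int r = select(fd + 1, NULL, &wfds, NULL, &tv);
    if (r < 0)
        return -1;
    return FD_ISSET(fd, &wfds) ? 1 : 0;
}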
IMHO it's not a design flaw but part of TCP's design philosophy: it keeps trying until packets are confirmed received. In the original concept, bombs would be dropping all over the countryside, destroying cables and data centres willy-nilly, while messages could still get through to the missile silos and AA gunners.
UDP was designed to send and forget.
If you are "broadcasting" in TCP, you are actually sending separate messages to each recipient, with the OS buffering and retransmitting each one until the recipient has acknowledged it. Obviously it's easy to do a DoS attack this way, even on your own machine, simply by sending faster than the receiver can process the packets.
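At the socket level there is a standard way to avoid stalling the whole process, sketched here for a plain POSIX socket; the current externals don't necessarily expose it. Put the socket in non-blocking mode, and send() returns EAGAIN/EWOULDBLOCK when the buffer is full instead of blocking:

/* Sketch: switch a socket to non-blocking mode so a full send buffer
 * does not stall the caller.  'fd' is a plain POSIX socket and the
 * helper name is hypothetical. */
#include <fcntl.h>

int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Usage idea:
 *   if (send(fd, buf, len, 0) < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
 *       ;  // send buffer full: drop or queue the message rather than blocking
 */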
Broadcasting in UDP sends a single packet to a single broadcast address, which the router delivers to every machine on the subnet. The OS discards its copy of the packet as soon as it is sent, so you can't overload the stack, although you can always peg the CPU trying.
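A minimal sketch of what a UDP subnet broadcast looks like in C; the address 192.168.1.255, port 9999 and the message text are made-up example values, and error checking is left out:

/* Sketch: send one UDP packet to the local subnet's broadcast address. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    /* broadcasting must be enabled explicitly on the socket */
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                      /* example port */
    dest.sin_addr.s_addr = inet_addr("192.168.1.255"); /* example subnet broadcast */

    const char *msg = "hello subnet;\n";
    /* one sendto(); the network delivers it to every host on the subnet */
    sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dest, sizeof(dest));
    close(fd);
    return 0;
}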
Martin