On 23.07.2021 23:11, Roman Haefeli wrote:
> On Fri, 2021-07-23 at 21:52 +0200, Christof Ressi wrote:
>> When we overhauled the networking code, I noticed that the TCP and UDP functions would both read up to N bytes (where N is currently 4096) in a single recv() call. With TCP the buffer can contain several FUDI (or other) messages, but with UDP it would only contain a single packet. So UDP was effectively much more rate limited than TCP.
>
> I think it _does_ make sense to treat TCP and UDP differently. With TCP, you probably want to consume only as much as you can eat at a time while keeping the rest buffered for later. OTOH, it's questionable whether "surplus" UDP packets should be stored for later. Rather - I think - they should simply be discarded.
What do you mean by "surplus" UDP packets? It's perfectly normal for the UDP receive buffer to contain more than one packet at a given time... Packets may arrive in bursts, and/or several hosts can send to the same socket. Why should we intentionally discard them?
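Just to illustrate the difference (a rough sketch in plain C, not the actual Pd source - the 4096-byte buffer and the non-blocking socket are assumptions for the example): with UDP, each recvfrom() call returns at most one datagram, so draining a full receive buffer means looping until the socket reports that it is empty.

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* sockfd is assumed to be a non-blocking UDP socket */
void drain_udp(int sockfd)
{
    char buf[4096];
    for (;;) {
        /* each recvfrom() returns at most one datagram */
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                break;  /* receive buffer is empty */
            perror("recvfrom");
            break;
        }
        /* dispatch the single datagram in buf[0..n-1] ... */
        printf("got a datagram of %zd bytes\n", n);
    }
}

With the old behavior there was effectively only one such recvfrom() call per poll, which is why UDP was limited to a single packet per tick.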
I think the fundamental problem is this:
We want to
a) drain the UDP receive buffer as fast as possible to avoid packet loss
b) avoid blocking the audio thread by processing too many messages
If we only have a single thread (like in Pd), there's a conflict
of interest.
A better design is to receive packets on a dedicated thread and
put them on an (ideally unbounded) queue. The audio thread can then
take packets and process them at its own pace. This is what the
SuperCollider server does, for example. It is also how I
personally write UDP server applications.
https://github.com/pure-data/pure-data/pull/1261 also goes in
this direction.
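Here is a minimal sketch of that pattern in plain C (not the code from the PR above; a real audio application would rather use a lock-free FIFO so the audio thread never blocks on a mutex, but a mutex keeps the example short):

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

typedef struct packet {
    struct packet *next;
    size_t len;
    char data[4096];
} packet_t;

static packet_t *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void enqueue(packet_t *p)
{
    pthread_mutex_lock(&queue_lock);
    p->next = NULL;
    if (queue_tail) queue_tail->next = p;
    else queue_head = p;
    queue_tail = p;
    pthread_mutex_unlock(&queue_lock);
}

static packet_t *dequeue(void)
{
    pthread_mutex_lock(&queue_lock);
    packet_t *p = queue_head;
    if (p) {
        queue_head = p->next;
        if (!queue_head) queue_tail = NULL;
    }
    pthread_mutex_unlock(&queue_lock);
    return p;
}

/* receive thread: drains the socket as fast as possible;
   started once with pthread_create(..., receive_thread, &sockfd) */
static void *receive_thread(void *arg)
{
    int sockfd = *(int *)arg;
    for (;;) {
        packet_t *p = malloc(sizeof(packet_t));
        if (!p) break;
        ssize_t n = recvfrom(sockfd, p->data, sizeof(p->data), 0, NULL, NULL);
        if (n < 0) { free(p); break; }
        p->len = (size_t)n;
        enqueue(p);
    }
    return NULL;
}

/* called once per DSP tick from the audio thread */
static void poll_packets(void)
{
    packet_t *p;
    while ((p = dequeue()) != NULL) {
        /* dispatch p->data[0..p->len-1] ... */
        free(p);
    }
}

Since the queue is unbounded, nothing is dropped; the audio thread simply dispatches whatever has arrived since the last tick.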
---
Back to TCP vs UDP: let's say you are sending 32-byte FUDI messages at a high rate. With the old behavior, TCP allowed receiving and dispatching 128 messages in a row (4096 bytes in total), but UDP allowed only a single message. That didn't make any sense to me. With the new behavior you get the same number of messages in both cases.
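For illustration, the TCP side (again only a sketch, not the actual Pd code): a single recv() of up to 4096 bytes can already contain dozens of semicolon-terminated FUDI messages, which is where the figure of 128 above comes from.

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* read one chunk from a TCP connection and count the FUDI messages in it */
void tcp_tick(int sockfd)
{
    char buf[4096];
    ssize_t n = recv(sockfd, buf, sizeof(buf), 0);
    if (n <= 0)
        return;  /* error or connection closed */

    int messages = 0;
    for (ssize_t i = 0; i < n; i++)
        if (buf[i] == ';')
            messages++;
    printf("received %zd bytes, ~%d FUDI messages\n", n, messages);
    /* a real implementation also keeps a trailing partial message
       buffered until the rest arrives */
}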
---
> It would be nice if more than one packet could be received per tick, of course, but then the buffer could simply be flushed, so that only "fresh" packets are considered in the next tick. I _think_ that's what network devices do as well: send it now or forget it.

Sorry, I don't understand this paragraph at all...
> With blocksize=64, ideally one packet per tick is received. On macOS, it seems each tick without a packet delays the processing of the subsequent packets by one DSP block. After a few seconds on a bad connection (WiFi, for instance), the delay settles at 200-500 ms and the signal is very clean afterwards (which is not surprising with such a large buffer). On Linux, the latency stays in the range set by the receive buffer and late packets are perceived as dropouts. It looks like unprocessed packets are flushed on Linux, but not on macOS.
Hmmm... usually incoming UDP packets are discarded if the UDP receive buffer is full. Are you saying that this is not the case on macOS? That would be very interesting. Maybe the buffer can automatically grow in size? Unfortunately, I couldn't find any information on this.
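One way to investigate would be to query (and try to enlarge) the socket's receive buffer with SO_RCVBUF - just a sketch; as far as I know the defaults and the upper limits (net.core.rmem_max on Linux, kern.ipc.maxsockbuf on macOS) differ between the two systems, which alone could explain different buffering behavior:

#include <stdio.h>
#include <sys/socket.h>

void show_and_set_rcvbuf(int sockfd)
{
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
        printf("current receive buffer: %d bytes\n", size);

    int wanted = 1 << 20;  /* ask for 1 MiB; the kernel clamps to its limit */
    setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));

    len = sizeof(size);
    if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
        printf("receive buffer now: %d bytes\n", size);
    /* note: on Linux the kernel reports twice the requested value
       to account for bookkeeping overhead */
}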
> Assuming the relevant poll functions you mentioned are the same for the same backend (JACK) on both platforms (macOS, Linux), why is the behavior still different?

I guess because of the different behavior/size of the UDP receive buffer.
> Thanks a lot, Christof, for your time and effort in explaining the details. You already helped a lot. I'm somewhat relieved that the issue has an explanation.
>
> Roman

You're welcome :-)