I made a test patch for measuring UDP packet rate [1] and it confirms what you already explained. Neither [netsend -u] nor [netreceive -u] is affected by the rate limiting when using JACK. [iemnet/udpclient] is rate limited when using JACK, tested on Linux and macOS. The limit is exactly the DSP tick rate.
However, I also gained some (for me) new insights. There is indeed a difference between macOS and Linux: the buffer for incoming UDP packets is much larger on macOS, several hundred kilobytes. I couldn't measure it exactly because the GUI becomes very sluggish when triggering the rate limit. On Linux, the receive buffer seems to be ~4kB, as you already stated. That explains why we experience a large latency only on macOS and not on Linux.
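For anyone who wants to check the OS defaults directly, here is a minimal C sketch (my own, not taken from Pd or iemnet) that prints what a fresh UDP socket gets. Note that on Linux, getsockopt(SO_RCVBUF) reports double the effective value, because the kernel doubles the setting to account for bookkeeping overhead:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int rcvbuf = 0;
        socklen_t len = sizeof(rcvbuf);

        if (fd < 0 ||
            getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) < 0) {
            perror("SO_RCVBUF");
            return 1;
        }
        /* on Linux this prints twice the effective value */
        printf("default SO_RCVBUF: %d bytes\n", rcvbuf);
        close(fd);
        return 0;
    }

The system-wide defaults should also be visible via sysctl (net.core.rmem_default on Linux, net.inet.udp.recvspace on macOS, if I remember the names correctly).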
The GUI sluggishness also only occurs when using JACK. Maybe GUI updates are handled by the same polling function?
With another test patch that measures the UDP data bandwidth of [netsend -u -b] -> [netreceive -u -b], I also see a difference between using JACK and not using JACK. Using JACK negatively impacts the maximum bandwidth (bytes per second) that can be transmitted. I measure 2 MB/s with 1kB packets and 5.3 MB/s with 4kB packets (those numbers reflect what is received, not what is sent; there is a high rate of packet loss when nearing the limit). Without JACK, the maximum throughput can be as high as 12 MB/s. The test patch is cluttered and not easily shareable, but I'll share it anyway if there is interest.
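Put differently (my own back-of-the-envelope arithmetic from the numbers above):

    2 MB/s   / 1kB per packet ≈ 2000 packets/s
    5.3 MB/s / 4kB per packet ≈ 1300 packets/s

With JACK, the per-packet rate drops as the packets get bigger, so the bottleneck seems to be partly per-packet and partly per-byte.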
On Fri, 2021-07-23 at 23:59 +0200, Christof Ressi wrote:
> On 23.07.2021 23:11, Roman Haefeli wrote:
>> It would be nice if more than one packet could be received per tick, of course, but then the buffer could simply be flushed, so that only "fresh" packets are considered in the next tick. I _think_ that's what network devices do as well: send it now or forget it.
> Sorry, I don't understand this paragraph at all...
Excuse my flippant wording.
Trying again: for the typical applications UDP is used for, it's probably desirable that the receive buffer is not too large. If the buffer is large and the incoming rate exceeds the processing capacity, you get large delays. Often (mostly?), fresh packets are more interesting than older ones. Of course, there is a trade-off between avoiding packet loss and keeping latency short. On macOS, the receive buffer seems excessively large. I think making it consistent with the receive buffer on Linux would be a benefit for most UDP-based applications.
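To illustrate what I mean by flushing (a sketch of the idea only, not how [netreceive] actually works): once per DSP tick, drain everything that is queued on the socket and keep just the newest datagram:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* returns the size of the newest queued datagram (0 if none),
       or -1 on a real error; older datagrams are silently dropped */
    ssize_t recv_freshest(int fd, char *buf, size_t bufsize)
    {
        ssize_t got = 0, n;
        for (;;) {
            n = recv(fd, buf, bufsize, MSG_DONTWAIT);
            if (n < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    return got;   /* queue drained */
                return -1;
            }
            got = n;              /* overwrite: previous packet is dropped */
        }
    }

Whether one keeps only the last packet or hands all drained packets to the patch is a design choice, of course; the point is just that the kernel queue doesn't keep growing.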
With blocksize=64, ideally one packet per tick is received. On macOS, it seems each tick without a packet delays the processing of the subsequent packets by one DSP block. After a few seconds on a bad connection (Wifi, for instance), the delay settles at 200-500ms and the signal is very clean afterwards (which is not surprising with such a large buffer). On Linux, the latency stays in the range set by the receive buffer and late packets are perceived as dropouts. It looks like unprocessed packets are flushed on Linux, but not on macOS.
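For reference (my own arithmetic, assuming a sample rate of 44.1kHz):

    one DSP tick = 64 / 44100 ≈ 1.45 ms
    200 ms / 1.45 ms ≈ 140 packets
    500 ms / 1.45 ms ≈ 345 packets

So the settled delay corresponds to a backlog of roughly 140-345 packets, which easily fits into a buffer of several hundred kilobytes.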
> Hmmm... usually incoming UDP packets are discarded if the UDP receive buffer is full. Are you saying that this is not the case on macOS?
Yeah, they are discarded, too. From what I experience, the receive buffer is just much larger on macOS, ~400kB? Check 'lag' in the udp-rate-test.pd patch. Packets start to be dropped when 'lag' reaches ~16,000. Since the payload is 12 bytes (plus what I assume to be 16 bytes of per-packet overhead): 28 x 16000 = 448000.
The maximum 'lag' on Linux is 236, which would indicate a buffer of around 6kB.
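If the macOS default really is in that range, shrinking it per socket should make the behaviour consistent with Linux. A minimal sketch (an assumption on my side, not something [netreceive] or iemnet currently does, as far as I know):

    #include <sys/socket.h>

    /* cap the UDP receive buffer of an existing socket; 6kB roughly
       matches what I measured on Linux (the kernel may round the
       value to its own granularity and limits) */
    int shrink_rcvbuf(int fd)
    {
        int size = 6 * 1024;
        return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
    }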
Roman
[1] https://git.iem.at/pd/iemnet/uploads/1137d95137f1ddcdcedb9df15bdbb591/udp-ra...