Hi all,
We experience "unexpected" latency with our software tpf-client [1] on macOS. When using JACK as audio-backend and using a blocksize [2] of 64, latency grows over time, growing whenever there are dropped packets.
The issue appears only when all three criteria are met:
- Pd is running on macOS
- Pd is using JACK as backend
- blocksize is 64
On Linux, the patch runs fine, even with JACK as backend and using blocksize=64. When using CoreAudio on macOS, there is also no growing latency. Using JACK on macOS also works when using blocksize=128.
I can only guess what is going on, but it seems that with JACK, Pd can only digest a limited number of incoming UDP packets. With blocksize=64, it would receive one packet per DSP block on average. Maybe Pd can only consume one packet per DSP tick with JACK as backend? I once made a test patch for measuring maximum UDP throughput that showed some hard limit. Because I didn't get the same values on two different runs, I suspected a problem in my test patch and postponed a deeper investigation. It didn't occur to me back then that the difference was between -jack and -nosound. The difference in the test patch is also significant on Linux, though the problem described above appears only on macOS.
This is with:
- Pd 0.51.4
- macOS 10.15.7
- JACK 0.92.3 (but happens also with newer JACK 1.9)
The tpf-client patch uses [iemnet/udpclient] for sending and receiving JackTrip packets. The test patch shows a similar difference for both [netsend -u -b] and [iemnet/udpclient].
Even if it can be fixed, I'm interested in having a deeper understanding of what is going on.
Curiously, Roman
[1] https://github.com/zhdk/tpf-client (Pd-patch)
[2] blocksize here is an application internal parameter unrelated to Pd's or JACK's blocksize. It defines how many samples per channel go into one JackTrip packet.
Hi,
I assume you're using [iemnet] or [mrpeach] objects? Those only read a single UDP packet in the poll function.
[netreceive], on the other hand, reads several packets up to a certain throttle limit.
Now to the actual problem:
The poll functions are called in sys_domicrosleep(), which is usually only called *once* per scheduler tick.
Actually, I have already noticed this problem with the portaudio backend a while ago. Miller has added a fix by replacing usleep() with sys_microsleep() in pa_send_dacs(). This means that when the scheduler has to wait for the ringbuffer, it will continue polling sockets and only go to sleep if there's nothing more to do.
jack_send_dacs(), on the other hand, immediately waits for a condition variable. However, a similar solution could be implemented here:
In a loop, check if the Jack buffer is available; if no, first try to call sys_domicrosleep() and only if that returns 0, call pthread_cond_wait().
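Roughly like this (just a sketch - names and signatures are simplified from memory, not the actual s_audio_jack.c code):

    #include <pthread.h>

    /* assumed Pd call: polls sockets once without sleeping and returns
       nonzero if it found something to do (signature from memory) */
    int sys_domicrosleep(int microsec);

    /* hypothetical names for this sketch */
    extern pthread_mutex_t jack_mutex;
    extern pthread_cond_t jack_cond;
    int jack_buffer_ready(void);

    static void wait_for_jack_buffer(void)
    {
        pthread_mutex_lock(&jack_mutex);
        while (!jack_buffer_ready())
        {
            pthread_mutex_unlock(&jack_mutex);
            int didwork = sys_domicrosleep(0);   /* service sockets, don't sleep */
            pthread_mutex_lock(&jack_mutex);
            if (!didwork && !jack_buffer_ready())
                pthread_cond_wait(&jack_cond, &jack_mutex);  /* nothing left to do: really sleep */
        }
        pthread_mutex_unlock(&jack_mutex);
    }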
I guess similar solutions would have to be applied to the audio backends as well...
Christof
On Fri, 2021-07-23 at 15:09 +0200, Christof Ressi wrote:
I assume you're using [iemnet] or [mrpeach] objects?
Yes, [iemnet/udpclient]. But it seems the same applies to [netsend] (when receiving).
Those only read a single UDP packet in the poll function.
OK, good to know. I'm glad I was not way off with my assumption. Would it be technically possible for the reader thread to read as many packets as available during a single tick? I mean could this be addressed in iemnet without touching Pd code?
[netreceive], on the other hand, reads several packets up to a certain throttle limit.
I see. I haven't tested with [netreceive] yet, since I need a bidirectional connection. [netsend] behaves similarly to [iemnet/udpclient].
Now to the actual problem:
[...]
Thanks a lot for the detailed explanation!
I guess similar solutions would have to be applied to the audio backends as well...
This I don't understand. You mean the additional check if the poll function has something available needs to be implemented for _other_ audio backends as well, as in: for each separately?
Roman
Those only read a single UDP packet in the poll function.
OK, good to know. I'm glad I was not way off with my assumption. Would it be technically possible for the reader thread to read as many packets as available during a single tick?
Generally, it's not a good idea to read as many packets as *available*, because it could completely stall the audio thread for an undetermined amount of time, causing audio dropouts.
I think the current compromise works quite well: Call the poll function at least once per DSP tick + while the scheduler can't advance (because it has to wait for the ring buffer). This makes sure that we read as many packets as possible without stalling the audio thread.
I mean could this be addressed in iemnet without touching Pd code?
It can be partially addressed in [iemnet].
When we overhauled the networking code, I noticed that the TCP and UDP functions would both read up to N bytes (where N is currently 4096) in a single recv() call. With TCP the buffer can contain several FUDI (or other) messages, but with UDP it would only contain a single packet. So UDP was effectively much more rate limited than TCP.
My solution was this: we keep receiving UDP packets in a loop as long as there are pending UDP packets (see socket_bytes_available) *and* we haven't read more than N bytes in total.
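In (pseudo-)C the loop looks roughly like this (a simplified sketch, not the actual Pd/iemnet code; dispatch_packet() is a stand-in and the FIONREAD check is just one way to ask for pending data):

    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define RECV_THROTTLE 4096   /* same order of magnitude as Pd's limit */

    /* hypothetical: hand one complete datagram to the object's outlet code */
    void dispatch_packet(const char *buf, size_t len);

    static void udp_poll(int sockfd)
    {
        char buf[RECV_THROTTLE];
        int total = 0, pending = 0;
        /* keep reading while the OS reports pending data, but stop after
           ~4096 bytes total so one poll call can't block indefinitely */
        while (total < RECV_THROTTLE
            && ioctl(sockfd, FIONREAD, &pending) == 0 && pending > 0)
        {
            ssize_t n = recv(sockfd, buf, sizeof(buf), 0);  /* one datagram per call */
            if (n <= 0)
                break;
            dispatch_packet(buf, (size_t)n);
            total += (int)n;
        }
    }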
So the problem can be tackled from both sides:
- the Jack audio backend should be fixed to poll the sockets while idle
- [iemnet] could try to receive more than one UDP packet in the poll function
I guess similar solutions would have to be applied to the audio backends as well...
This I don't understand. You mean the additional check if the poll function has something available needs to be implemented for _other_ audio backends as well, as in: for each separately?
Sorry, yes, I dropped the "other" :-)
On Fri, 2021-07-23 at 21:52 +0200, Christof Ressi wrote:
When we overhauled the networking code, I noticed that the TCP and UDP functions would both read up to N bytes (where N is currently 4096) in a single recv() call. With TCP the buffer can contain several FUDI (or other) messages, but with UDP it would only contain a single packet. So UDP was effectively much more rate limited than TCP.
I think it _does_ make sense to treat TCP and UDP differently. With TCP, you probably want to consume only as much as you can eat at the time while keeping the rest buffered for later. OTOH, it's questionable whether "surplus" UDP packets should be stored for later. Rather, I think, they should simply be discarded. It would be nice if more than one packet could be received per tick, of course, but then the buffer could simply be flushed, so that only "fresh" packets are considered in the next tick. I _think_ that's what network devices do as well: send it now or forget it.
With blocksize=64, ideally one packet per tick is received. On macOS, it seems each tick without a packet delays the processing of the subsequent packets by one DSP block. After a few seconds on a bad connection (Wifi, for instance), the delay settles at 200-500 ms and there is a very clean signal afterwards (which is not surprising with such a large buffer). On Linux, the latency stays in the range set by the receive buffer and late packets are perceived as dropouts. It looks like unprocessed packets are flushed on Linux, but not on macOS. Assuming the relevant poll functions you mentioned are the same for the same backend (JACK) on both platforms (macOS, Linux), why is the behavior still different?
My solution was this:
You mean, as implemented in [netreceive -u]?
we keep receiving UDP packets in a loop as long as there are pending UDP packets (see socket_bytes_available) *and* we haven't read more than N bytes in total.
Sounds sensible to me.
Thanks a lot, Christof, for your time and effort in explaining the details. You already helped a lot. I'm somewhat relieved that the issue has an explanation.
Roman
On 23.07.2021 23:11, Roman Haefeli wrote:
On Fri, 2021-07-23 at 21:52 +0200, Christof Ressi wrote:
When we overhauled the networking code, I noticed that the TCP and UDP functions would both read up to N bytes (where N is currently 4096) in a single recv() call. With TCP the buffer can contain several FUDI (or other) messages, but with UDP it would only contain a single packet. So UDP was effectively much more rate limited than TCP.
I think it _does_ make sense to treat TCP and UDP differently. With TCP, you probably want to consume only as much as you can eat at the time while keeping the rest buffered for later. OTOH, it's questionable whether "surplus" UDP packets should be stored for later. Rather, I think, they should simply be discarded.
What do you mean by "surplus" UDP packets? It's totally natural that the UDP receive buffer contains more than one packet at a given time... Packets might arrive in bursts and/or several hosts can send to the same socket. Why should we intentionally discard them?
I think the fundamental problem is this:
We want to
a) drain the UDP receive buffer as fast as possible to avoid packet loss
b) avoid blocking the audio thread by processing too many messages
If we only have a single thread (like in Pd), there's a conflict of interest.
A better design is to receive packets on a dedicated thread and put them on an (ideally unbounded) queue. The audio thread can then take packets and process them at its own pace. This is what the SuperCollider server does, for example. It is also how I personally write UDP server applications. https://github.com/pure-data/pure-data/pull/1261 also goes in this direction.
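A minimal sketch of that design (not actual Pd or SuperCollider code; a mutex-protected list keeps the example short, a real implementation would rather use a lock-free FIFO so the audio thread never blocks):

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    typedef struct packet {
        struct packet *next;
        size_t len;
        char data[65536];
    } t_packet;

    static t_packet *queue_head, *queue_tail;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* network thread: drain the socket as fast as possible */
    static void *receive_thread(void *arg)
    {
        int sockfd = *(int *)arg;
        for (;;)
        {
            t_packet *p = malloc(sizeof(t_packet));
            if (!p)
                break;
            ssize_t n = recv(sockfd, p->data, sizeof(p->data), 0);
            if (n < 0) { free(p); break; }
            p->len = (size_t)n;
            p->next = NULL;
            pthread_mutex_lock(&queue_lock);
            if (queue_tail) queue_tail->next = p; else queue_head = p;
            queue_tail = p;
            pthread_mutex_unlock(&queue_lock);
        }
        return NULL;
    }

    /* audio/main thread: take whatever has arrived, at its own pace */
    static t_packet *pop_packet(void)
    {
        pthread_mutex_lock(&queue_lock);
        t_packet *p = queue_head;
        if (p) { queue_head = p->next; if (!queue_head) queue_tail = NULL; }
        pthread_mutex_unlock(&queue_lock);
        return p;   /* caller processes and free()s it */
    }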
Back to TCP vs UDP: let's say you are sending 32-byte FUDI messages at a high rate. With the old behavior, TCP allowed to receive and dispatch 128 messages in a row (4096 bytes in total), but UDP only allowed a single message. For me, this didn't make any sense. With the new behavior you get the same number of messages.
It would be nice if more than one packet could be received per tick, of course, but then the buffer could simply be flushed, so that only "fresh" packets are considered in the next tick. I _think_ that's what network devices do as well: send it now or forget it.
Sorry, I don't understand this paragraph at all...
With blocksize=64, ideally one packet per tick is received. On macOS, it seems each tick without a packet delays the processing of the subsequent packets by one DSP block. After a few seconds on a bad connection (Wifi, for instance), the delay settles at 200-500 ms and there is a very clean signal afterwards (which is not surprising with such a large buffer). On Linux, the latency stays in the range set by the receive buffer and late packets are perceived as dropouts. It looks like unprocessed packets are flushed on Linux, but not on macOS.
Hmmm... usually incoming UDP packets are discarded if the UDP receive buffer is full. Are you saying that this is not the case on macOS? That would be very interesting. Maybe the buffer can automatically grow in size? Unfortunately, I couldn't find any information on this.
Assuming the relevant poll functions you mentioned are the same for the same backend (JACK) on both platforms (macOS, Linux), why is the behavior still different?
I guess because of the different behavior/size of the UDP receive buffer.
Thanks a lot, Christof, for your time and effort in explaining the details. You already helped a lot. I'm somewhat relieved that the issue has an explanation.
You're welcome :-)
As I understand it, when Pd is idle (finishes a 64-sample block and can't yet crunch the following one), then it goes back and re-checks for network or GUI input, and keeps doing that until either there isn't anything to read or else the next block becomes runnable. So this is a simple throttling mechanism. (And it's not necessary to put this on another thread).
However, in the context of libpd there's no concept of "idle" and so in that setup there's only one network read per block.
And yes, this can lead to delays, since the Macintosh helpfully stores unread packets until the reading process gets around to reading them. I think that also only happens in the context of libpd.
I need to add some sort of poll-it-again functionality to libpd but haven't figured out what shape it should take yet.
Miller
Hi Miller,
As I understand it, when Pd is idle (finishes a 64-sample block and can't yet crunch the following one), then it goes back and re-checks for network or GUI input, and keeps doing that until either there isn't anything to read or else the next block becomes runnable.
AFAICT, that's true for pa_send_dacs(), but not for jack_send_dacs(). The former calls sys_microsleep() in a loop, so it will only actually sleep if there are no more sockets to read; the latter immediately blocks on a condition variable if the buffer is not available.
I need to add some sort of poll-it-again functionality to libpd but haven't figured out what shape it should take yet.
Since libpd manages the audio callback, the client can simply call sys_pollgui() as often as they want/need. I don't think that you actually have to add anything. Maybe just add a more fitting alias, like sys_poll() or sys_pollsockets()? sys_pollgui() is really a misnomer...
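For example, something along these lines (assuming the usual z_libpd.h API; sys_pollgui() itself comes from Pd's internal headers, so take the details as a sketch):

    #include "z_libpd.h"

    /* Pd-internal, not part of z_libpd.h: polls sockets (and flushes GUI
       messages) once and returns */
    void sys_pollgui(void);

    /* called by the host's audio driver; channels/samplerate are assumed to
       have been set up earlier with libpd_init_audio() */
    void host_audio_callback(const float *in, float *out, int nframes)
    {
        libpd_process_float(nframes / 64, in, out);   /* Pd runs in blocks of 64 */

        /* service pending network input now instead of waiting for the next
           block; a host with spare time could call this repeatedly, from the
           same thread that runs libpd_process_float() */
        sys_pollgui();
    }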
Christof
Aha.. I've been meaning to look at why jack doesn't respond to audio latency setting... I think it needs fooling with.
cheers M
Hmmm... in the jack backend there is no ring buffer. Jack just dumps a large buffer from the audio callback and notifies the Pd audio thread, which consumes the buffer in chunks of 64 samples.
I think you could just copy the Portaudio backend implementation with its lock-free ringbuffer. This would also solve the socket polling issue :-)
Maybe the "polling scheduler" part could even be moved out of "s_audio_pa.c" and shared with the jack backend.
On the other hand, I've been wondering if the "Delay" parameter is actually necessary at all. As the Jack backend demonstrates, you can just as well use a larger hardware buffer size. The only advantages I see are:
- latency can be controlled at a finer granularity; hardware buffer sizes are usually power-of-2s, so there's no step between 11.6 ms (512 samples) and 23.3 ms (1024 samples), for example.
- the latency can be set arbitrarily high while the hardware buffer size is limited (often 1024 samples)
Unfortunately, very few people actually seem to understand the difference between the "block size" (= hardware buffer size) and the "Delay" in Pd's audio settings and how they interact... But maybe that's just because of the lack of documentation. After all, I only understood it after reading the code :-)
Christof
The only advantages I see are:
Yesterday was a bit too late, so I forgot the most important advantage of the "Delay" parameter: reduced overall latency!
Generally, in the polling scheduler, the total input latency is the hardware buffer size + the "internal" buffer size. In contrast to the callback scheduler, a larger hardware buffer size itself doesn't buy you anything - it is really the internal buffer that gives you the extra leeway.
The polling scheduler in the Jack back end uses a simple double buffering scheme where the size of the buffer has to be the same as the hardware buffer size. As a consequence, you always end up with unnecessary extra latency.
On the other hand, the Portaudio back end uses a true ring buffer whose size is independent from the hardware buffer size. Usually, you would set the hardware buffer size to the lowest possible stable value (ideally 64 samples) and only control the latency via the size of the ring buffer (= "Delay").
If my analysis is correct (if not, please let me know!), I think the Jack backend should really adapt the lock-free FIFO from the Portaudio backend.
Christof
Yes. Also, I believe Jack's setup assumes that all its clients share the same latency (set via Jack's buffer size and number of buffers), so if you want to mix high- and lower-latency operations you might need to add some latency to a Pd instance.
'Git blame' is blaming almost the entire s_audio_jack file on me, although I'm sure someone else initially wrote it, probably Iohannes. But the current state of the callback scheme is certainly all my fault.
I think it's best to tweak audio stuff as early as possible in the Pd dev cycle since it tends to need more shaking out than other components, so I'll put that at the head of the list for now.
cheers M
Well, my first attempt to fix things added 5 msec to the latency, hmm.
I made a test patch for measuring UDP packet rate [1] and it confirms what you already explained. Neither [netsend -u] nor [netreceive -u] is affected by the rate limiting when using JACK. [iemnet/udpclient] is rate limited when using JACK, tested on Linux and macOS. The limit is exactly the DSP tick rate.
However, I also gained new (for me) insights. There is indeed a difference between macOS and Linux: the buffer for incoming UDP packets is much larger on macOS, several hundred kilobytes (I couldn't measure it exactly because the GUI becomes very sluggish when triggering the rate limit). On Linux, the receive buffer seems to be ~4kB, as you already stated. That explains why we experience a large latency only on macOS and not on Linux.
The GUI sluggishness also only occurs when using JACK. Maybe GUI updates are handled by the same polling function?
With another test patch that measures the UDP data bandwidth of [netsend -u -b] -> [netsend -u -b], I also see a difference between using JACK and not using JACK. Using JACK negatively impacts the maximum bandwidth (bytes per second) that is transmitted. I measure 2 MB/s with 1kB packets and 5.3 MB/s with 4kB packets (those numbers reflect what is received, not what is sent; there is a high rate of packet loss when nearing the limit). Without JACK, the maximum throughput can be as high as 12 MB/s. The test patch is cluttered and not easily shareable, but I'll share it anyway if there is interest.
On Fri, 2021-07-23 at 23:59 +0200, Christof Ressi wrote:
On 23.07.2021 23:11, Roman Haefeli wrote:
It would be nice if more than one packet could be received per tick, of course, but then the buffer could simply be flushed, so that only "fresh" packets are considered in the next tick. I _think_ that's what network devices do as well: send it now or forget it.
Sorry, I don't understand this paragraph at all...
Excuse my flippant wording.
Trying again: For the typical applications UDP is used for, it's probably desirable that the receive buffer is not too large. If the buffer is large and the incoming rate exceeds the processing capacity, you get large delays. Often (mostly?), fresh packets are more interesting than older ones. Of course, there is a trade-off between avoiding packet loss and keeping latency short. On macOS, the receive buffer seems excessively large. I think making it consistent with the receive buffer on Linux would be a benefit for most UDP-based applications.
With blocksize=64, ideally one packet per tick is received. On macOS, it seems each tick without a packet delays the processing of the subsequent packets by one DSP block. After a few seconds on a bad connection (Wifi, for instance), the delay settles at 200-500 ms and there is a very clean signal afterwards (which is not surprising with such a large buffer). On Linux, the latency stays in the range set by the receive buffer and late packets are perceived as dropouts. It looks like unprocessed packets are flushed on Linux, but not on macOS.
Hmmm... usually incoming UDP packets are discarded if the UDP receive buffer is full. Are you saying that this is not the case on macOS?
Yeah, they are discarded, too. From what I experience, the receive buffer is much larger on macOS, ~400kB? Check 'lag' in the udp-rate-test.pd patch. Packets start to be dropped when 'lag' reaches ~16,000. Since the payload is 12 bytes (+ 16 bytes UDP header): 28 x 16000 = 448000
The maximum 'lag' on Linux is 236, which would indicate a buffer of around 6kB.
Roman
[1] https://git.iem.at/pd/iemnet/uploads/1137d95137f1ddcdcedb9df15bdbb591/udp-ra...
On Linux, the receive buffer seems to be ~4kB, as you already stated.
Where did I state that? :-) I don't have access to a Linux machine right now, but I *think* on many systems the default is much higher, something around 64 kB. You can check the system wide default receive buffer size with "sysctl net.core.rmem_default". It's possible to override it per socket with setsockopt() + SO_RCVBUF.
BTW, on Windows the receive buffer size is 8 kB - which is quite low.
The GUI sluggishness also only occurs when using JACK. Maybe GUI updates are handled by the same polling function?
Yes, check the source code of sys_pollgui(). GUI updates are only sent when there are no sockets to *read* from. Until recently, you could completely freeze the Pd GUI by sending a continuous fast stream of network data to Pd, because Pd would only receive a single packet per DSP tick (see https://github.com/pure-data/pure-data/issues/55). Now this doesn't happen anymore with the Portaudio backend, because we repeatedly poll sockets while we're idle. Miller has also added some logic to sys_pollgui() which makes sure that GUI messages are sent at least every 0.5 seconds. This is probably what you experience with the Jack backend.
Trying again: For the typical applications UDP is used for, it's probably desirable that the receive buffer is not too large. If the buffer is large and the incoming rate exceeds the processing capacity, you get large delays. Often (mostly?), fresh packets are more interesting than older ones. Of course, there is a trade-off between avoiding packet loss and keeping latency short.
Decreasing the buffer size and using the resulting packet loss as some kind of rate limiting sounds like a bad idea to me.
Generally, you want to avoid packet loss at the UDP receive buffer by any means. Note that UDP packets can arrive in bursts, e.g. because of buffering in network links. If the receive buffer were too small, you would lose packets even if you could process them (over time) just fine.
Here are two strategies to avoid packet loss (which can also be combined):
a) increase the receive buffer to match the max. expected bandwidth * latency. For example, if you expect up to 1 MB/s of incoming traffic and the receive thread can block up to 0.01 seconds due to packet processing, the receive buffer should be 10 KB.
b) make sure to drain the UDP receive buffer as fast as possible. Many applications would use a dedicated thread that just receives incoming datagrams and pushes them to the application's main thread for further processing. One example is SuperCollider.
Now let's assume you really need to apply some rate limiting, e.g. because you don't want to overload the audio thread in your audio application. Instead of simply counting the number of received packets, it often makes more sense to look at the packet content and decide based on the application type!
For example, you could have dedicated buffers for each remote endpoint. To ensure more fairness, you wouldn't restrict packet processing to just N packets per second, but rather process up to M packets *per endpoint* per second.
Another example: If you receive OSC bundles with timestamps, it would be totally fine to receive a bulk of 100 bundles because you just have to put them on a priority queue. You would only need to discard bundles if there are too many of them for a given timestamp!
Sometimes, different types of messages might have different priorities, so you could put them on dedicated queues. For time critical messages, you can minimize latency by using a short fixed size buffer (and drop messages on overflow). For messages that are not urgent the buffer can be larger or even unbounded.
Also, certain types of data are more redundant than others. With continuous data streams, like from an accelerometer sensor, you can drop every Nth packet without losing too much information. This is not true for individual command messages, which are either received or lost.
In other cases, like audio streams, you know exactly how many packets per second to expect. You would rather put a limit on the number of audio streams instead of randomly dropping individual packets from all streams.
This should just demonstrate that rate limiting can be implemented in many different ways and that it is wrong to assume that "fresh" packets are automatically more relevant than older ones.
Hmmm... usually incoming UDP packets are discarded if the UDP receive buffer is full. Are you saying that this is not the case on macOS?
Yeah, they are discarded, too.
Thanks for verifying! I would have been quite surprised otherwise.
Christof
On Wed, 2021-07-28 at 03:51 +0200, Christof Ressi wrote:
On Linux, the receive buffer seems to be ~4kB, as you already stated.
Where did I state that? :-)
From: https://lists.puredata.info/pipermail/pd-list/2021-07/129893.html
"Back to TCP vs UDP: let's say you are sending 32-byte FUDI messages at a high rate. With the old behavior, TCP allowed to receive and dispatch 128 messages in a row (4096 bytes in total), but UDP only allowed a single message. For me, this didn't make any sense. With the new behavior you get the same number of messages."
I thought that 4096 bytes applied to the receive buffer of both, TCP and UDP, but that's probably not what you meant.
I don't have access to a Linux machine right now, but I *think* on many systems the default is much higher, something around 64 kB. You can check the system wide default receive buffer size with "sysctl net.core.rmem_default". It's possible to override it per socket with setsockopt() + SO_RCVBUF.
$ sysctl net.core.rmem_default
net.core.rmem_default = 212992
That is not consistent with the maximum 'lag' of 236 packets the udp-rate-test patch measures. The patch sends FUDI messages with a size of 12 bytes.
Decreasing the buffer size and using the resulting packet loss as some kind of rate limiting sounds like a bad idea to me.
[...]
Thanks for the detailed explanation. I agree on all points. The thing is that if the incoming rate exceeds the processing capacity, there is no other way for a patch than to look at already quite old packets. There is no way for a patch to say: "From all available packets in the buffer, give me the most recent one".
OTOH, maybe once the incoming rate is not "artificially" limited to one-packet-per-tick anymore, this might be a non-issue.
On 28.07.2021 13:42, Roman Haefeli wrote:
I thought that 4096 bytes applied to the receive buffer of both, TCP and UDP, but that's probably not what you meant.
Yes, I was rather referring to the buffer used for the recv() call, which currently has a size of 4096 bytes. This is the limit of how much data we can receive per socket in a single sys_pollgui() / sys_microsleep() call; otherwise these functions might block indefinitely during heavy incoming network traffic. The exact value of the throttle limit is somewhat arbitrary. In practice it doesn't matter too much if the poll functions are called repeatedly, like in the Portaudio backend. In the Jack backend, however, the poll function is called only once per DSP tick, so the throttle limit becomes noticeable. Anyway, it is completely independent of the UDP receive buffer.
OTOH, maybe once the incoming rate is not "artificially" limited to one-packet-per-tick anymore, this might be a non-issue.
Yes, I think so too. Of course, it is still possible to flood Pd with more messages than it can handle. One thing to watch out for is the combination of a large hardware buffer size and a small UDP receive buffer, because Pd might be busy computing audio for several milliseconds - during which only a few packets can be received. If many packets arrive in this time window, the UDP receive buffer can overflow, leading to packet loss.
Another potential source of temporary packet loss are long blocking operations, like reading a large file from disk, during which no packets can be received, either.
Maybe it could also be helpful to allow users to tune the UDP receive buffer size for [netsend -u] or [netsend -u -b]. If you had a Pd patch that expects very high incoming UDP traffic, you could then increase the buffer size accordingly.
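At the socket level, such a tuning knob would boil down to something like this (the standard setsockopt() call; how [netsend] would actually expose it - creation argument, method message, etc. - is an open question). Note that the OS may clamp the requested value (on Linux, to net.core.rmem_max):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static void set_udp_rcvbuf(int sockfd, int nbytes)
    {
        if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &nbytes, sizeof(nbytes)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        /* read back what the OS actually granted */
        int actual = 0;
        socklen_t len = sizeof(actual);
        if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
            printf("UDP receive buffer is now %d bytes\n", actual);
    }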
Christof