> I get at least four threads. Just for some context: here on Windows I get 10 threads when I
> open Pd and start DSP, but only 2 of these are active and the
> remaining 8 are idle.
Yeah, that's just the way that Windows rolls for any program that has a message loop. It's also different when you run under a debugger than when you run standalone. And no, I don't get paid enough to do deep Windows debugging, why do you ask?
> The Pd core itself does not spawn any threads, only the audio
> backend and certain objects/externals do (notably [readsf~] and
> [writesf~]).
I was aware of that as the standard story, which is why I was surprised to see four threads, and that the count was (relatively) invariant even when you included more 'primitive' back-ends like OSS. It seems like this is a bit of lore that should be better known, even if it isn't documented anywhere.
> But why do you care about the number of threads in the first
> place?
Because I am working on code that tries to handle some of the *other* JACK data streams, and ambiguity about what each thread is doing makes for ambiguity in debugging.
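To make that concrete, here is a generic sketch of the shape of the problem (not my actual code; the client and port names are invented): the process callback runs on JACK's own realtime thread, so anything it produces has to cross a thread boundary, typically through a lock-free ring buffer, before the rest of the program can touch it.

    /* Sketch: shuttle JACK MIDI events from the realtime process
     * callback to some other thread via a jack_ringbuffer.
     * "thread-demo" and "midi_in" are placeholder names. */
    #include <stdint.h>
    #include <unistd.h>
    #include <jack/jack.h>
    #include <jack/midiport.h>
    #include <jack/ringbuffer.h>

    static jack_port_t *midi_in;
    static jack_ringbuffer_t *rb;   /* drained by a non-JACK thread */

    static int process(jack_nframes_t nframes, void *arg)
    {
        /* Runs on JACK's realtime thread, not the main thread. */
        void *buf = jack_port_get_buffer(midi_in, nframes);
        uint32_t n = jack_midi_get_event_count(buf);
        for (uint32_t i = 0; i < n; i++) {
            jack_midi_event_t ev;
            /* A real version would also prefix each event with its
             * size so the reader can delimit events. */
            if (jack_midi_event_get(&ev, buf, i) == 0 &&
                jack_ringbuffer_write_space(rb) >= ev.size)
                jack_ringbuffer_write(rb, (const char *)ev.buffer, ev.size);
        }
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("thread-demo", JackNullOption, NULL);
        rb = jack_ringbuffer_create(4096);
        midi_in = jack_port_register(client, "midi_in",
            JACK_DEFAULT_MIDI_TYPE, JackPortIsInput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);
        for (;;) sleep(1);  /* the reader side would jack_ringbuffer_read() from another thread */
        return 0;
    }

Once there are more threads in play than you expect, it is no longer obvious which side of a buffer like that is the one misbehaving.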
>> As an aside: is the code in z_ringbuffer.{c,h} considered
>> trustworthy? I note that the other code in PD appears to use the
>> sys_ringbuffer* API, which seems to be built on the PA ringbuffer.
> Is the PA ringbuffer considered trustworthy?
Well, it is *in use*, which means that *somebody* considers it trustworthy (in a multi-threaded context). z_ringbuffer, by contrast, appears to be deployed in only one context, and that one appears to stay within a single thread. But see my question about threading above. I had debugging artifacts which looked like z_ringbuffer misbehaving under a thread race, so I read the code, and I am pretty sure that z_ringbuffer's use of atomics is actually incorrect. I haven't yet taken the time to prove this, as the z_ringbuffer call sites are very limited.
> Note that the ringbuffer code in "s_audio_ringbuf.c" - for whatever
> reason - is missing all the memory barriers from the original PA
> implementation. This happens to work as the implementation is in
> another source file and (non-inline) function calls act as compiler
> barriers and Intel has a strong memory model, but if compiled with
> LTO this code may very well fail on other platforms, particularly
> on ARM.
Yeah, I saw that. It's actually *worse*, because the header file's multi-include guard uses the same preprocessor symbol as the JACK ringbuffer header. I fixed that in my local git repo. How do I know this? After a light code read, I switched to using the JACK ringbuffer implementation, which I *do* trust.
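For what it's worth, here is a minimal sketch of the pattern at issue, written with C11 atomics (my own illustration with invented names, not the actual s_audio_ringbuf.c or PA code): in a single-producer/single-consumer ring buffer, the data write has to become visible before the index update that publishes it, which is what the barriers in the original PA code, or release/acquire in C11, guarantee.

    /* Sketch only: SPSC ring buffer index handling with C11 atomics.
     * Type and function names are invented for illustration. */
    #include <stdatomic.h>

    #define RB_SIZE 1024

    typedef struct {
        char buf[RB_SIZE];
        _Atomic int writeindex;   /* advanced by producer, read by consumer */
        _Atomic int readindex;    /* advanced by consumer, read by producer */
    } spsc_rb;

    int rb_write_one(spsc_rb *rb, char byte)
    {
        int w = atomic_load_explicit(&rb->writeindex, memory_order_relaxed);
        int r = atomic_load_explicit(&rb->readindex, memory_order_acquire);
        if ((w + 1) % RB_SIZE == r)
            return 0;                               /* full */
        rb->buf[w] = byte;                          /* step 1: write the data */
        atomic_store_explicit(&rb->writeindex,      /* step 2: publish index; */
            (w + 1) % RB_SIZE,                      /* release keeps step 1   */
            memory_order_release);                  /* ordered before step 2  */
        return 1;
    }

    int rb_read_one(spsc_rb *rb, char *out)
    {
        int r = atomic_load_explicit(&rb->readindex, memory_order_relaxed);
        int w = atomic_load_explicit(&rb->writeindex, memory_order_acquire);
        if (r == w)
            return 0;                               /* empty */
        *out = rb->buf[r];                          /* acquire above pairs with
                                                       the producer's release */
        atomic_store_explicit(&rb->readindex, (r + 1) % RB_SIZE,
            memory_order_release);
        return 1;
    }

Drop the release/acquire and, as the quoted text says, this still happens to work as long as the reads and writes sit behind opaque function calls on a strongly ordered CPU; let LTO inline across the file boundary on a weakly ordered machine like ARM, and step 2 can become visible before step 1.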
>> I ask because I had some problems with z_ringbuffer.c and after a
>> code read, there are some bits which look sketchy enough to me that
>> I decided to stop using it.
> Which problems did you have? And which bits look sketchy? There are
> some things that could be improved. The original code has been
> written before C11, i.e. before C/C++ got an official memory model.
> As a consequence, the platform specific atomic instructions / memory
> barriers are stronger than required. In general, SYNC_FETCH should
> really be called SYNC_LOAD and SYNC_COMPARE_AND_SWAP should be
> called SYNC_STORE. With C11, SYNC_LOAD could be just an atomic_load
> (with memory_order_acquire) and SYNC_STORE could be an atomic_store
> (with memory_order_release).
> Apart from that, the code looks fine to me.
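As I read it, that suggestion amounts to something like this (a sketch; the current pre-C11 definitions are whatever builtins z_ringbuffer uses today, and the index fields would need to be _Atomic-qualified):

    #include <stdatomic.h>

    /* Suggested C11 replacements for the load/store macros: */
    #define SYNC_LOAD(ptr) \
        atomic_load_explicit((ptr), memory_order_acquire)
    #define SYNC_STORE(ptr, val) \
        atomic_store_explicit((ptr), (val), memory_order_release)

That is, a plain acquire load on the reader side and a release store on the writer side, rather than full read-modify-write or compare-and-swap operations.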
As I said above, I am pretty sure that z_ringbuffer's use of atomics is actually incorrect. I haven't yet taken the time to prove this, as the z_ringbuffer call sites are very limited and it clearly works in its current context; I don't think it will work in a more demanding one. The last time I worked with atomics, I ended up writing an automated proof checker to make sure that all of the cases worked correctly, and I work with PD when I want to make music more than when I want to do advanced CS. And then there's the question of the marginal utility of PD having its own ringbuffer implementation, but I will leave that to those of you who have dedicated far more time than I have to the maintenance of this project.
- c&co