On Monday, 18 April 2005 at 12:21, Tim Blechmann wrote:
I think since pd uses an internal blocksize of 64 and you need at least two of them, you have to reduce the pd blocksize to get latencies below a buffer size of 128 samples. (see and please correct http://puredata.info/Members/ritsch/latency and http://puredata.info/Members/ritsch/latency/pd_structure )
actually, i don't see why you need buffer sizes of 2 blocksizes ...
I didn't say "you need", I just said (maybe it has changed) that pd is implemented with
[...]
int sys_advance_samples;    /* scheduler advance in samples */
[...]
/* exported variables */
int sys_schedadvance;       /* scheduler advance in microseconds */
float sys_dacsr;
t_sample *sys_soundout;
t_sample *sys_soundin;
[...]
which means there is an additional sys buffer, which could be bigger than the sound card buffer. I always thought pd needs at least 2 blocksizes for this buffer, but I would have to look at the code to see if this is still true.
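A minimal sketch, not the actual pd source, of how a scheduler advance given in microseconds (sys_schedadvance) could map onto samples (sys_advance_samples) and onto whole dsp blocks; the 5000 microsecond example value and the rounding up to whole blocks are assumptions:

/* illustrative sketch only, not pd's actual code */
#include <stdio.h>

#define DEFDACBLKSIZE 64            /* pd's internal block size */

int main(void)
{
    int   sys_schedadvance = 5000;  /* scheduler advance in microseconds (example) */
    float sys_dacsr = 44100.;       /* dac sample rate */

    /* advance in samples, rounded up to whole dsp blocks */
    int sys_advance_samples = (int)(sys_schedadvance * 1.e-6 * sys_dacsr);
    int blocks = (sys_advance_samples + DEFDACBLKSIZE - 1) / DEFDACBLKSIZE;

    printf("advance: %d samples, i.e. %d block(s) of %d\n",
        sys_advance_samples, blocks, DEFDACBLKSIZE);
    return 0;
}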
..., also it's not so bad to reduce the blocksize for lowest-latency applications, since you just get some overhead from function calls, but not much if you only go down to blocksize 16, and then the idle time is distributed better.
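To put a rough number on that overhead, a small back-of-the-envelope calculation (the 44.1 kHz sample rate is an assumption) of how many dsp ticks per second each block size implies:

/* rough illustration of the per-block overhead argument above: smaller
 * block sizes simply mean more dsp ticks (and perform-routine calls)
 * per second; 44.1 kHz is an assumed example rate */
#include <stdio.h>

int main(void)
{
    float sr = 44100.;
    int blocksizes[] = {64, 16};
    for (int i = 0; i < 2; i++)
    {
        int bs = blocksizes[i];
        printf("blocksize %2d: %6.0f dsp ticks per second, one tick every %.2f ms\n",
            bs, sr / bs, 1000. * bs / sr);
    }
    return 0;
}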
Thanks for the nice explanation, I'll put it somewhere in my latency text collection.
best regards, winfried
[...]
basically there are two ways to design a scheduler, the synchronous and the callback-driven schedulers:
for the synchronous scheduler (which is currently the main scheduler), dsp is computed in the main scheduler thread. the callback from the audio hardware just copies data to and from the input and output buffers:
- copy data to the audio hardware / copy data from the audio hardware
- compute dsp
- same as step one, at the time of the next callback
this basically means that audio data is copied to the pd thread, the pd thread computes dsp, and at the time of the next callback the processed audio data is copied to the outputs.
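a hypothetical sketch of that synchronous arrangement (the names and the pthread plumbing here are illustrative, not pd's actual code): the callback only copies buffers and wakes the main thread, and the block computed there only reaches the hardware at the next callback:

/* hypothetical sketch, not pd's actual code: a synchronous scheduler.
 * the audio callback only copies buffers and wakes the main thread,
 * which computes dsp; the result reaches the hardware one callback later. */
#include <pthread.h>
#include <string.h>

#define BLOCKSIZE 64

static float sys_soundin[BLOCKSIZE];   /* written by the callback */
static float sys_soundout[BLOCKSIZE];  /* read back by the callback */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  tick = PTHREAD_COND_INITIALIZER;
static int input_ready = 0;

/* stand-in for ticking pd's dsp chain */
static void dsp_tick(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i];                /* pass-through, just for the sketch */
}

/* called by the audio hardware driver once per buffer */
void audio_callback(const float *hw_in, float *hw_out, int n)
{
    pthread_mutex_lock(&lock);
    memcpy(hw_out, sys_soundout, n * sizeof(float));  /* last block's result */
    memcpy(sys_soundin, hw_in, n * sizeof(float));    /* fresh input */
    input_ready = 1;
    pthread_cond_signal(&tick);        /* wake the scheduler thread */
    pthread_mutex_unlock(&lock);
}

/* main scheduler thread: waits for input, then computes dsp */
void *scheduler_loop(void *unused)
{
    (void)unused;
    for (;;)
    {
        pthread_mutex_lock(&lock);
        while (!input_ready)
            pthread_cond_wait(&tick, &lock);
        input_ready = 0;
        dsp_tick(sys_soundin, sys_soundout, BLOCKSIZE);
        pthread_mutex_unlock(&lock);
        /* sys_soundout is only copied out at the NEXT callback,
         * which is where the extra block of latency comes from */
    }
    return 0;
}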
for the callback-driven scheduler (which i implemented for native asio and jack in devel), the dsp is computed not in pd's main thread, but in the callback thread. as far as latency is concerned it performs better (see the sketch after this list):
- the callback copies data from the audio hardware to the input buffer
- the callback thread computes dsp
- the callback thread copies the data back to the audio hardware, without waiting for the next callback
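a corresponding hypothetical sketch (again illustrative names, not the actual devel code): everything happens inside the callback, so the block computed from the current input goes straight back to the hardware:

/* hypothetical sketch, not the actual devel code: a callback-driven
 * scheduler computes dsp directly in the audio callback, so the block
 * computed from the current input is written back in the same callback. */
#include <string.h>

#define BLOCKSIZE 64

static float sys_soundin[BLOCKSIZE];
static float sys_soundout[BLOCKSIZE];

/* stand-in for ticking pd's dsp chain */
static void dsp_tick(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i];                /* pass-through, just for the sketch */
}

/* asio / jack process callback */
void audio_callback(const float *hw_in, float *hw_out, int n)
{
    memcpy(sys_soundin, hw_in, n * sizeof(float));    /* step 1 */
    dsp_tick(sys_soundin, sys_soundout, n);           /* step 2 */
    memcpy(hw_out, sys_soundout, n * sizeof(float));  /* step 3: no waiting
                                                         for the next callback */
}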
both schedulers have good and bad features...
- the callback driven scheduler reduces the latency by one block size
- the callback-driven scheduler works in a second thread, which means sys_lock() has to be acquired... on the other hand, locking a mutex in a realtime thread is not a very good idea ... this may cause problems if the cpu load is very high and there is a lot of messaging / gui activity (see the sketch after this list). other options to stay threadsafe would be to compute control data at a downsampled audio rate (see supercollider) or to rewrite _all_ dsp objects to be threadsafe by using lock-free algorithms
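a hedged illustration of that locking issue (sys_lock() does exist in pd, but the callback and the plain pthread mutex standing in for it here are only a sketch): the realtime callback has to take the same lock the message / gui side holds, so a busy main thread can stall the audio thread:

/* illustration only: in a callback-driven scheduler the realtime audio
 * callback must take the same global lock that the message / gui side
 * holds (sys_lock() in pd; a plain pthread mutex stands in here).
 * if the main thread holds it too long, the callback misses its deadline
 * and audio drops out. */
#include <pthread.h>
#include <string.h>

#define BLOCKSIZE 64

static float sys_soundin[BLOCKSIZE], sys_soundout[BLOCKSIZE];
static pthread_mutex_t pd_global_lock = PTHREAD_MUTEX_INITIALIZER;

static void dsp_tick(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i];
}

void audio_callback(const float *hw_in, float *hw_out, int n)
{
    memcpy(sys_soundin, hw_in, n * sizeof(float));

    /* blocking here is the problem: heavy messaging / gui activity in the
     * main thread keeps this lock held while the realtime thread waits */
    pthread_mutex_lock(&pd_global_lock);
    dsp_tick(sys_soundin, sys_soundout, n);
    pthread_mutex_unlock(&pd_global_lock);

    memcpy(hw_out, sys_soundout, n * sizeof(float));
}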
still, i don't see why either of these schedulers should require buffer sizes that are twice as big as pd's internal block size, since it's just a question of where in the dsp scheduling we wait ...
cheers ... tim