(moving to pd-dev)
i just tested the alsa interface on my hdsp ... basically it works fine for large buffer sizes ... but there are a few points:
- i can't get below 2902 us of latency at 44.1 kHz / 2666 us at 48 kHz ... so basically a period size of 128 ... the hdsp is capable of 64 (numbers spelled out below)
- using these lowest latencies, i experience glitches:
  44.1 kHz: every once in a while (about one every 5 seconds)
  48 kHz: strong glitches for about 500 ms followed by about 3 ms of more or less clean sound
  88.2 kHz: continuous glitches
  96 kHz: similar to 48 kHz, but scaled by a factor of 2
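(just to spell out the arithmetic behind those figures: 128 samples / 44100 Hz ≈ 2902 us and 128 samples / 48000 Hz ≈ 2666 us, i.e. both numbers correspond to one period of 128 samples.)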
on the other hand:
- jack runs with 64 samples on my machine without any problems (at 44.1 and 48 kHz only)
- the lowest buffer size for 96 kHz on my machine is 128 (64 will kill jackd when connecting)
all these tests were done on 2.6.12-rc2-mm1, devel_0_38, using realtime mode for pd / jack ... for jack, i wasn't using the callback-based scheduler. basically the callback-based scheduler will reduce the latency by one buffer size, because the input data is processed directly without any additional buffering ...
cheers ... tim
i just tested the alsa interface on my hdsp ... basically it works fine for large buffer sizes ... but there are a few points:
- i can't get below 2902 us of latency with 44.1 kHz / 2666 us with 48 kHz ... so basically a period size of 128 ... the hdsp is capable of 64
- using these lowest latencies, i experience glitches:
  44.1 kHz: every once in a while (about one every 5 seconds)
  48 kHz: strong glitches for about 500 ms followed by about 3 ms of more or less clean sound
  88.2 kHz: continuous glitches
  96 kHz: similar to 48 kHz, but scaled by a factor of 2
I think since pd uses an internal blocksize of 64 and you need at least two of them, you have to reduce the pd blocksize to get latencies below a buffer size of 128. (see and please correct http://puredata.info/Members/ritsch/latency http://puredata.info/Members/ritsch/latency/pd_structure )
mfg winfried
I think since pd uses an internal blocksize of 64 and you need at least two of them, you have to reduce the pd blocksize to get latencies below a buffer size of 128. (see and please correct http://puredata.info/Members/ritsch/latency http://puredata.info/Members/ritsch/latency/pd_structure )
actually, i don't see why you need buffer sizes of 2 blocksizes ...
basically there are two ways to design a scheduler, synchronous and callback-driven:
for the synchronous scheduler (which is currently the main scheduler), dsp is computed in the main scheduler thread. the callback from the audio hardware just copies data from and to the in and out buffers:
1. copy data to the audio hardware / copy data from the audio hardware
2. compute dsp
3. same as step one, at the time of the next callback
this basically means that audio data is copied to the pd thread, the pd thread computes dsp, and at the time of the next callback the processed audio data is copied back to the audio hardware.
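to make that a bit more concrete, here is a rough sketch of such a synchronous scheduler in c. it is only an illustration, not the actual pd code: sys_soundin / sys_soundout / dsp_tick are pd names that come up later in this thread, everything else (the function names, the synchronisation) is made up:

#include <string.h>

typedef float t_sample;                       /* as in pd */
extern t_sample *sys_soundin, *sys_soundout;  /* pd's global i/o buffers */
extern void dsp_tick(void);                   /* computes one dsp block */
extern void signal_dsp_thread(void);          /* made-up placeholder */
extern void wait_for_audio(void);             /* made-up placeholder */

/* runs in the audio (hardware) thread: only moves data */
void sync_audio_callback(const t_sample *hw_in, t_sample *hw_out, int nframes)
{
    memcpy(hw_out, sys_soundout, nframes * sizeof(t_sample)); /* result of the last tick */
    memcpy(sys_soundin, hw_in, nframes * sizeof(t_sample));   /* input for the next tick */
    signal_dsp_thread();                                       /* wake pd's scheduler */
}

/* runs in pd's main thread */
void sync_scheduler_loop(void)
{
    for (;;)
    {
        wait_for_audio();   /* block until the callback has run */
        dsp_tick();         /* sys_soundin -> sys_soundout */
    }
}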
for the callback-driven scheduler (which i implemented for native asio and jack in devel), the dsp is computed not in pd's main thread, but in the callback thread. as far as latency is concerned, it performs better:
1. the callback copies data from the audio hardware to the input buffer
2. the callback thread computes dsp
3. the callback thread copies the data back to the audio hardware, without waiting for the next callback
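and a matching sketch of the callback-driven variant (same disclaimers and declarations as the synchronous sketch above; sys_lock() / sys_unlock() are the pd locking calls discussed just below):

extern void sys_lock(void), sys_unlock(void);   /* pd's global lock */

/* the whole dsp tick runs inside the audio callback, so the processed
 * block goes straight back to the hardware instead of waiting for the
 * next callback */
void cb_audio_callback(const t_sample *hw_in, t_sample *hw_out, int nframes)
{
    memcpy(sys_soundin, hw_in, nframes * sizeof(t_sample));
    sys_lock();     /* keep the message / gui thread out while dsp runs */
    dsp_tick();     /* sys_soundin -> sys_soundout */
    sys_unlock();
    memcpy(hw_out, sys_soundout, nframes * sizeof(t_sample));
}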
both schedulers have good and bad features ...
- the callback-driven scheduler reduces the latency by one block size
- the callback-driven scheduler works in a second thread, which means sys_lock() has to be called ... on the other hand, locking a mutex in a realtime thread is not a very good idea ... this may cause problems if the cpu load is very high and there is a lot of messaging / gui activity. other options to stay threadsafe would be to compute control data at a downsampled audio rate (see supercollider) or to rewrite _all_ dsp objects to be threadsafe by using lock-free algorithms
still, i don't see why either of these schedulers should require buffer sizes that are twice as big as pd's internal block size, since it's just a question of where in the dsp scheduling we wait ...
cheers ... tim
On Monday, 18 April 2005, 12:21, Tim Blechmann wrote:
I think since pd uses an internal blocksize of 64 and you need at least two of them, you have to reduce the pd blocksize to get latencies below a buffer size of 128. (see and please correct http://puredata.info/Members/ritsch/latency http://puredata.info/Members/ritsch/latency/pd_structure )
actually, i don't see why you need buffer sizes of 2 blocksizes ...
I didn't say "you need", I just said (maybe it has changed) that pd is implemented with
[...]
int sys_advance_samples;   /* scheduler advance in samples */
[...]
/* exported variables */
int sys_schedadvance;      /* scheduler advance in microseconds */
float sys_dacsr;
t_sample *sys_soundout;
t_sample *sys_soundin;
[...]
which means there is an additional sys buffer, which can be bigger than the soundcard buffer. I always thought pd needs at least 2 blocksizes for this buffer, but I would have to look at the code to see if this is still true.
..., also it's not so bad to reduce the blocksize for lowest-latency applications, since you just get some overhead from function calls, but not much even if you go down to a blocksize of 16, and then the idle time is distributed better.
Thanks for the nice explanation, I'll put it somewhere in my latency text collection.
mfg winfried
[...]
int sys_advance_samples;   /* scheduler advance in samples */
[...]
/* exported variables */
int sys_schedadvance;      /* scheduler advance in microseconds */
float sys_dacsr;
t_sample *sys_soundout;
t_sample *sys_soundin;
[...]
which means there is an additional sys buffer, which can be bigger than the soundcard buffer. I always thought pd needs at least 2 blocksizes for this buffer, but I would have to look at the code to see if this is still true.
i just checked the code:
sys_advance_samples = (sys_schedadvance * sys_dacsr) / (1000000.);
if (sys_advance_samples < 3 * sys_dacblocksize)
    sys_advance_samples = 3 * sys_dacblocksize;
this seems to be what you were referring to ... still, i don't see the reason for this. neither native asio nor jack use this ... with jack i can go down to 64 samples ... i don't think this has any technical reason, it only prevents the user from specifying the lowest latencies ...
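(with the default sys_dacblocksize of 64 this clamps the advance to at least 3 * 64 = 192 samples, i.e. 192 / 44100 ≈ 4.4 ms at 44.1 kHz, regardless of how small a sys_schedadvance the user requests.)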
actually, i tested your latency-test patch on my machine ... the results are:
jack (cb scheduler): 264 samples
jack (synchronous): 456 samples
native alsa: 200 samples (period size of 128, 96 kHz)
still, native alsa has some timing issues on my machine, resulting in glitches, so i can't use it with block sizes lower than 256 at 96 kHz, which results in a latency of 328 samples ... (can someone confirm that?)
but i'm pretty curious about the alsa implementation (i mean, a callback driven alsa scheduler could reduce the latency even more) ...
cheers ... tim
Hello,
[...]
int sys_advance_samples;   /* scheduler advance in samples */
[...]
/* exported variables */
int sys_schedadvance;      /* scheduler advance in microseconds */
float sys_dacsr;
t_sample *sys_soundout;
t_sample *sys_soundin;
[...]
which means there is an additional sys buffer, which can be bigger than the soundcard buffer. I always thought pd needs at least 2 blocksizes for this buffer, but I would have to look at the code to see if this is still true.
i just checked the code:
sys_advance_samples = (sys_schedadvance * sys_dacsr) / (1000000.);
if (sys_advance_samples < 3 * sys_dacblocksize)
    sys_advance_samples = 3 * sys_dacblocksize;
this seems to be what you were referring to ... still, i don't see the reason for this. neither native asio nor jack use this ... with jack i can go down to 64 samples ... i don't think this has any technical reason, it only prevents the user from specifying the lowest latencies ...
The reason, I think, is historical, going back to the ISPW: 2 i860 cpus on an extra card running fts, where you could distribute the working load to dedicated cpus and needed the extra latency because of signal distribution over shared memory, ...
Nevertheless I think that since all messages are calculated before the dsp stuff each tick, the chance of hitting an FFT border plus a lot of messages triggered e.g. by bang~ at the same time can easily exceed the buffer size, so it's better to distribute the message calculation in finer grains; therefore it's good to have at least 2 (3) dacblocks as a sysbuffer, even if you use a 64-sample buffer on the hardware. If you really need to (it would be nice if you wrote down a use case) and want to go under, say, 3*64, ca. 3.5 ms, it's better to make DACBLKSIZE smaller, which works fine with most objects and with all well-written ones. Also make sure pd is never blocked for longer than the time of one block of sys samples, which I did by increasing the scheduler rate from 100 Hz to 10000 Hz in ancient times ;-)
I think using a higher samplerate is the same as reducing DACBLKSIZE as far as reducing the latency is concerned.
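(for example, a 64-sample block at 96 kHz lasts 64 / 96000 ≈ 0.67 ms, just like a 32-sample block at 48 kHz: 32 / 48000 ≈ 0.67 ms.)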
actually, i tested your latency-test patch on my machine ... the results are:
jack (cb scheduler): 264 samples
jack (synchronous): 456 samples
native alsa: 200 samples (period size of 128, 96 kHz)
that should be about 2 ms, which is very small.
still, native alsa has some timing issues on my machine, resulting in glitches, so i can't use it with block sizes lower than 256 at 96 kHz, which results in a latency of 328 samples ... (can someone confirm that?)
I don't know if it depends on the alsa mmap implementation, but the code is not really tested that much, since once I got a latency of about 5 ms I was already happy and didn't need more.
but i'm pretty curious about the alsa implementation (i mean, a callback driven alsa scheduler could reduce the latency even more) ...
I don't know exactly what jack does, but since alsa uses direct memory access and can copy samples to the buffer in smaller units than the hardware buffer size, and also watches the write/read pointer of the rme hardware card (use precise_ptr on module load), you can theoretically go smaller than 64 samples as a hardware buffer ;-) so you should be able to go down to, I think, 8 samples (PCI bridge buffer with 26 channels), depending on the pci bridge in the computer.
mfg winfried
hi wini, hi devs
after some profiling, i figured out that the alsamm driver is burning a lot of cpu during the alsamm_send_dacs ... output of "opreport -l /usr/local/bin/pd"
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples  %        symbol name
29630    38.6436  alsamm_send_dacs
5847      7.6257  tabosc4_tilde_perform
5578      7.2749  block_prolog
4451      5.8050  copyvec_simd
4362      5.6889  testaddvec_simd
3119      4.0678  oss_send_dacs
2019      2.6332  peakvec_simd
1577      2.0567  sighip_perform
1560      2.0346  dsp_tick
1410      1.8389  testcopyvec_simd
978       1.2755  sigthrow_perfsimd
973       1.2690  env_tilde_accum_simd
834       1.0877  zerovec_simd
780       1.0173  sys_getrealtime
698       0.9103  sys_domicrosleep
659       0.8595  plus_perf_simd
<snip>
there are two loops that slow down the thing:

 5313  4.8734 :        for (i = 0, fp2 = fp1 + chn*sys_dacblocksize; i < oframes; i++, fp2++)
              :        {
 2296  2.1060 :            float s1 = *fp2 * F32MAX;
              :            /* better but slower, better never clip ;-)
              :               buf[i] = CLIP32(s1); */
 3278  3.0068 :            buf[i] = ((int) s1 & 0xFFFFFF00);
 1052  0.9650 :            *fp2 = 0.0;
              :        }
              :    }
and
  253  0.2321 :    for (chn = 0; chn < ichannels; chn++) {
   60  0.0550 :        t_alsa_sample32 *buf = (t_alsa_sample32 *) dev->a_addr[chn];
17254 15.8265 :        for (i = 0, fp2 = fp1 + chn*sys_dacblocksize; i < iframes; i++, fp2++)
              :        {
              :            /* mask the lowest bits, since subchannels info
              :               can make zero samples nonzero */
10438  9.5744 :            *fp2 = (float) ((t_alsa_sample32) (buf[i] & 0xFFFFFF00))
              :                   * (1.0 / (float) INT32_MAX);
              :        }
              :    }
the problem is that the samples have to be transferred from the sse registers to the general purpose registers to do the bitmask operations:
              :  80ba444:  movaps    %xmm2,%xmm1
  845  0.7751 :  80ba447:  movss     (%edx),%xmm0
 1451  1.3309 :  80ba44b:  mulss     %xmm1,%xmm0
  311  0.2853 :  80ba44f:  cvttss2si %xmm0,%eax
 1262  1.1576 :  80ba453:  xor       %al,%al
 1705  1.5639 :  80ba455:  mov       %eax,(%esi,%ecx,4)
 1052  0.9650 :  80ba458:  movl      $0x0,(%edx)
 4581  4.2020 :  80ba45e:  add       $0x1,%ecx
    2  0.0018 :  80ba461:  mov       0xffffffe8(%ebp),%ebx
  664  0.6091 :  80ba464:  add       $0x4,%edx
              :  80ba467:  cmp       %ebx,%ecx
    4  0.0037 :  80ba469:  jl        80ba447 <alsamm_send_dacs+0x12c>
and
              :  80ba68e:  movaps   %xmm2,%xmm1
 4652  4.2671 :  80ba691:  mov      (%esi,%ecx,4),%eax
12579 11.5382 :  80ba694:  add      $0x1,%ecx
              :  80ba697:  xor      %al,%al
   70  0.0642 :  80ba699:  cvtsi2ss %eax,%xmm0
 3665  3.3618 :  80ba69d:  mulss    %xmm1,%xmm0
 2051  1.8813 :  80ba6a1:  movss    %xmm0,(%edx)
 3737  3.4278 :  80ba6a5:  add      $0x4,%edx
  888  0.8145 :  80ba6a8:  mov      0xffffffe0(%ebp),%ebx
    3  0.0028 :  80ba6ab:  cmp      %ebx,%ecx
              :  80ba6ad:  jl       80ba691 <alsamm_send_dacs+0x376>
i think the better way would be to hand-code these two loops with sse instructions, at least for x86 ... not sure if this is also a problem on the ppc platform ...
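for the output loop, something like this could work (just a sketch, not tested against the pd source; it assumes sse2, that oframes is a multiple of 4, and it uses unaligned loads/stores since i don't know the alignment of the buffers; the zeroing of *fp2 from the original loop is left out):

#include <emmintrin.h>

/* convert and mask four samples per iteration without leaving the xmm
 * registers; 'scale' would be F32MAX, the mask drops the subchannel bits */
static void float_to_int32_masked(const float *in, int *out, int n, float scale)
{
    const __m128  vscale = _mm_set1_ps(scale);
    const __m128i vmask  = _mm_set1_epi32((int) 0xFFFFFF00);
    int i;
    for (i = 0; i < n; i += 4)
    {
        __m128  f = _mm_mul_ps(_mm_loadu_ps(in + i), vscale);
        __m128i s = _mm_cvttps_epi32(f);    /* truncates like the (int) cast */
        _mm_storeu_si128((__m128i *) (out + i), _mm_and_si128(s, vmask));
    }
}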
cheers... tim
Hello,
Thanks, I think we should change this in the code. I didn't do any optimization on this, it was just a copy and paste from previous code.
The main point is that the data transfer in send_dacs goes to the memory-mapped region of the soundcard buffer, and therefore no other copy is needed, unlike (I think) what jack does; so it already includes the copy (and add) loops that jack burns cpu on separately.
Anyway, if we had a smarter way of recognizing which channels really have corresponding dac/adc devices, we could zero out the other channels once and wouldn't need the copy loop for them, which would improve things a lot, since with mmap we are forced to use all channels or none.
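a rough sketch of that idea (the names are made up, only the per-channel mmap addresses correspond to dev->a_addr[] in the actual code): zero the unused hardware channels once when the stream is (re)started, so the per-block copy loop only has to touch the channels pd actually uses:

#include <string.h>
#include <stdint.h>

/* the pd channels live in chan_addr[0..used-1]; the remaining hardware
 * channels up to 'total' are cleared once per stream start */
static void zero_unused_channels(int32_t **chan_addr, int used, int total,
    size_t frames)
{
    int chn;
    for (chn = used; chn < total; chn++)
        memset(chan_addr[chn], 0, frames * sizeof(int32_t));
}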
mfg winfried
hi wini ...
well, after my last mail, i was looking at both the code and the instructions in more detail ...
it seems that the bottleneck is the memory-mapped transfer from pd to the audio device ... i'm more or less clueless about how to improve this ... possibly by using movaps or at least movups instructions to reduce the number of memory transfers ... basically the float / int conversion shouldn't be a big problem any more (thanks to sse)
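for the input direction, the same kind of thing should be possible, e.g. (again only a sketch, with the same caveats as the output version earlier in the thread; 'scale' would be 1.f / INT32_MAX):

#include <emmintrin.h>

/* mask the low byte and convert four int32 samples to float per iteration.
 * the _mm_loadu_* / _mm_storeu_* calls compile to unaligned moves; if the
 * mmap region is known to be 16-byte aligned, the aligned variants
 * (movaps-style) could be used instead */
static void int32_to_float_masked(const int *in, float *out, int n, float scale)
{
    const __m128i vmask  = _mm_set1_epi32((int) 0xFFFFFF00);
    const __m128  vscale = _mm_set1_ps(scale);
    int i;
    for (i = 0; i < n; i += 4)
    {
        __m128i s = _mm_and_si128(_mm_loadu_si128((const __m128i *) (in + i)), vmask);
        _mm_storeu_ps(out + i, _mm_mul_ps(_mm_cvtepi32_ps(s), vscale));
    }
}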
cheers ... tim