Thomas,
Thanks a lot! Wow, yes that seems to be the case.
Now having some mysteries solved, I can go get some sleep in comfort. Seriously, you might have just saved my life :)
While you are there, can you please tell me just a bit more?
So, I've made a cache to copy the original signal to before processing.
Because the block size may change anytime, I am mallocing and freeing a cache the size of the block on each DSP cycle.
I hear that malloc is a "relatively" expensive task. Is it bad practice to run this each cycle, or is a kilobyte or two not a big deal?
-- David Shimamoto
Hi David,
On 14.06.2008, at 03:08, PSPunch wrote:
=====================
== PROCESS BLOCK.2 ==
=====================
while (n--)
{
    // *out++ = *in++;
}
Remarks: The action is commented out, but the signal goes through. Why?
That's because in and out can point to the same memory... signal vectors are reused in Pd for cache-friendliness.
=====================
== PROCESS BLOCK.4 ==
=====================
n--;
*out++ = 0;
while (n--)
{
    *out++ = *in++;
}
Remarks: Expecting the first sample of the block to be zero and the others delayed by 1 sample. Instead, I get a constant output of zero.
As above... you have to be aware that when you are writing to the output, you change the input. Either cache the input or use a different algorithm (in this case, start from the end).
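For illustration, a minimal sketch of the start-from-the-end variant, assuming n, in, and out as in the perform routines above (with in and out possibly aliasing):

    // One-sample delay that stays correct when in and out alias:
    // walk backwards, so each input sample is read before the loop
    // overwrites it.
    int i;
    for (i = n - 1; i > 0; i--)
        out[i] = in[i - 1];
    out[0] = 0;   // first sample of the block is zero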
gr~~~
On 14.06.2008, at 18:37, PSPunch wrote:
So, I've made a cache to copy the original signal to before processing. Because the block size may change anytime, I am mallocing and freeing a cache the size of the block on each DSP cycle. I hear that malloc is a "relatively" expensive task. Is it bad practice to run this each cycle, or is a kilobyte or two not a big deal?
That's definitely bad practice.
Instead, you can do the allocation in the "dsp" callback, which is where you add your dsp processing to the signal chain. This callback is called whenever the block size or sample rate changes, or when the signal graph is rebuilt.
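A minimal sketch of that pattern, assuming a plain C Pd external (the myobj_* names and the buf/bufsize struct members are placeholders; resizebytes()/getbytes()/freebytes() are Pd's allocation helpers from m_pd.h):

    #include "m_pd.h"

    typedef struct _myobj
    {
        t_object x_obj;
        t_sample *buf;    /* cache for the input block */
        int bufsize;      /* current cache size in samples */
    } t_myobj;

    /* perform routine, defined elsewhere */
    t_int *myobj_perform(t_int *w);

    /* Pd calls this whenever the block size or sample rate changes
       or the signal graph is rebuilt -- so allocate here, once,
       instead of in every DSP cycle. */
    static void myobj_dsp(t_myobj *x, t_signal **sp)
    {
        int n = sp[0]->s_n;   /* current block size */
        if (n != x->bufsize)
        {
            /* resizebytes() is Pd's realloc wrapper; buf starts
               out as NULL with bufsize 0 in the new method */
            x->buf = (t_sample *)resizebytes(x->buf,
                x->bufsize * sizeof(t_sample),
                n * sizeof(t_sample));
            x->bufsize = n;
        }
        dsp_add(myobj_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, n);
    }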
gr~~~
On Sat, Jun 14, 2008 at 5:15 PM, Thomas Grill <gr@grrrr.org> wrote:
On 14.06.2008, at 18:37, PSPunch wrote:
That's definitely bad practice. Instead, you can do the allocation in the "dsp" callback, which is where you add your dsp processing to the signal chain. This callback is called whenever the block size or sample rate changes, or when the signal graph is rebuilt.
What is the dsp callback? This is new to me, too.
I had a different approach when it comes to building static arrays that depend on block size: I used three variables in the struct, x->n, x->array_pointer, and x->is_new_or_resized.
Then, example use in the perform routine:

    if (x->n != n)
        x->is_new_or_resized = 1;
    if (x->is_new_or_resized)
    {
        x->is_new_or_resized = 0;
        /* reallocate the cache at the new block size */
        if (x->array_pointer != NULL)
            free(x->array_pointer);
        x->array_pointer = malloc(n * sizeof(t_sample));
        x->n = n;
    }
So, you can see the problem. It would be useful to know another way.
Chuck
On Sun, 15 Jun 2008, PSPunch wrote:
I hear that malloc is a "relatively" expensive task.
It's mostly just OSX's malloc that is obscenely expensive beyond a certain size. But that threshold is more like 16k or so. On Linux, it's 128k instead, but if both thresholds were the same, you'd see that Linux takes this change well, whereas OSX does not.
The threshold corresponds to when malloc switches from doing its own memory management, to just delegating its job to the kernel.
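On Linux that crossover point is glibc's mmap threshold (128k by default); as a sketch, it can even be tuned with mallopt(), a glibc-specific call:

    #include <malloc.h>   /* glibc-specific */
    #include <stdlib.h>

    int main(void)
    {
        /* Below M_MMAP_THRESHOLD, glibc's malloc manages memory
           itself (fast); above it, each allocation is delegated to
           the kernel via mmap(), costing a system call plus page
           faults. Raising the threshold keeps larger blocks on the
           heap. */
        mallopt(M_MMAP_THRESHOLD, 256 * 1024);   /* raise from 128k */
        void *p = malloc(150 * 1024);   /* now served from the heap */
        free(p);
        return 0;
    }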
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801, Montréal, Québec
Hi Mathieu,
Thank you for the info.
It's mostly just OSX's malloc that is obscenely expensive beyond a certain size. But that threshold is more like 16k or so. On Linux, it's 128k instead, but if both thresholds were the same, you'd see that Linux takes this change well, whereas OSX does not.
Is this something you would learn only from studying the Linux source, or is it a fact discussed fairly often?
I would appreciate it if you could point me to any online resources where this is mentioned. (Regarding Linux... OSX is out of my scope for the moment.)
-- David Shimamoto
On Sun, 22 Jun 2008, PSPunch wrote:
It's mostly just OSX's malloc that is obscenely expensive beyond a certain size. But that threshold is more like 16k or so. On Linux, it's 128k instead, but if both thresholds were the same, you'd see that Linux takes this change well, whereas OSX does not.
Is this something you would learn only from studying the Linux source, or is it a fact discussed fairly often?
No, this is part of glibc, which is where malloc() is defined. I did not study glibc; I probed it with a benchmark. I used the same programme to probe the malloc() on OSX.
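Something along these lines would do it (a rough sketch, not the actual programme used; the loop counts are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Probe malloc()/free() cost as the block size crosses the
       allocator's threshold. */
    int main(void)
    {
        size_t size;
        for (size = 1024; size <= 1024 * 1024; size *= 2)
        {
            clock_t t0 = clock();
            int i;
            for (i = 0; i < 100000; i++)
            {
                void *p = malloc(size);
                free(p);
            }
            printf("%8lu bytes: %.3f s for 100000 malloc/free\n",
                (unsigned long)size,
                (double)(clock() - t0) / CLOCKS_PER_SEC);
        }
        return 0;
    }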
I would appreciate it if you could point me to any online resources where this is mentioned. (Regarding Linux... OSX is out of my scope for the moment.)
I did not use any online resources, so I don't know of any.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801, Montréal, Québec