On Sun, 22 Jun 2008, PSPunch wrote:
>> It's mostly just OSX's malloc that is obscenely expensive beyond a certain size. But that threshold is more like 16k or so. On Linux, it's 128k instead, but if both thresholds were the same, you'd see that Linux takes this change well, whereas OSX does not.

> Is this something you would learn only from studying the Linux source, or is it a fact discussed fairly often?
No, this is not in the Linux source per se; it's part of glibc, which is where malloc() is defined. I did not study glibc, I probed it with a benchmark. I used the same programme to probe malloc() on OSX.
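(For illustration only, here is a minimal sketch of that kind of probe, not the actual programme used here; all the names and parameters are made up. It times malloc()/free() pairs at doubling sizes, so the jump in per-call cost near the allocator's threshold, 128 KiB by default in glibc, shows up in the printed table.)

    /* malloc threshold probe: a hypothetical sketch, not the original benchmark.
     * Times repeated malloc/free pairs at increasing block sizes; the per-call
     * cost jumps once the allocator starts going to the OS for each block. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    static double now_us(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    static double probe(size_t size, int rounds)
    {
        double t0 = now_us();
        for (int i = 0; i < rounds; i++) {
            char *p = malloc(size);
            if (!p) { perror("malloc"); exit(1); }
            p[0] = 1;
            p[size - 1] = 1;  /* touch the block so its pages really get mapped */
            free(p);
        }
        return now_us() - t0;
    }

    int main(void)
    {
        const int rounds = 100000;
        for (size_t size = 1024; size <= 1024 * 1024; size *= 2) {
            double us = probe(size, rounds);
            printf("%8lu bytes: %.3f us per malloc/free\n",
                   (unsigned long)size, us / rounds);
        }
        return 0;
    }

Touching the first and last byte matters: large blocks that are handed back to the OS on free() have to be re-faulted on the next malloc(), and that is most of the cost being measured.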
> I would appreciate it if you could point me to any online resources where this is mentioned (regarding Linux... OSX is out of my scope for the moment).
I did not use any online resources, so I don't know of any.