hi developers ...
currently pd in -rt mode tries to lock its memory into physical memory ... this is done with the MCL_FUTURE flag, which means that all memory allocated in the future will also be placed in physical memory ... at the moment this is working fine, but while hunting random segfaults in the threaded soundfiler, i found this note in the mlockall man-page:
If MCL_FUTURE has been specified and the number of locked pages exceeds the upper limit of allowed locked pages, then the system call which caused the new mapping will fail with ENOMEM. If these new pages have been mapped by the growing stack, then the kernel will deny stack expansion and send a SIGSEGV.
to me this seems to be the reason for the soundfiler problems, but it can also reduce the stability of a working patch, so i'd suggest finding a way to solve this ...
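for context, the call in question is roughly the following ... just a sketch of what -rt mode presumably does, not pd's actual code:

    #include <stdio.h>
    #include <sys/mman.h>   /* mlockall(), MCL_CURRENT, MCL_FUTURE */

    /* lock everything that is currently mapped, plus everything mapped
       in the future, into physical memory.  once the per-process
       locked-pages limit is reached, new mappings fail with ENOMEM,
       and stack growth can end in a SIGSEGV, as the man page warns. */
    static void lock_memory(void)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
            perror("mlockall");
    }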
possible ideas:
- run mlockall(MCL_CURRENT) every once in a while ... maybe after a new object has been created, or every second, 10 seconds, 1e+(add a number here) seconds ...
- lock the specific memory region while allocating in getbytes or getalignedbytes
- on linux it might be possible to switch off overcommitment of memory in the kernel via /proc/sys/vm/overcommit_memory (i don't know if this really does what i think it does)
imo, MCL_FUTURE shouldn't be used, to avoid this source of error ... although i'm not sure how best to deal with it, i'd prefer mlock calls during memory allocation (a rough sketch follows below) ...
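a minimal sketch of that idea, a getbytes-style wrapper that pins each block right after allocating it ... the names rt_getbytes / rt_freebytes are made up and this is not pd's real allocator:

    #include <stdlib.h>
    #include <sys/mman.h>   /* mlock(), munlock() */

    /* hypothetical getbytes-style wrapper: allocate, then pin the block */
    void *rt_getbytes(size_t nbytes)
    {
        void *p = calloc(1, nbytes ? nbytes : 1);
        if (p)
            mlock(p, nbytes);   /* best effort: if this fails (limit reached),
                                   the block is still usable, just not pinned;
                                   a real version might post a warning */
        return p;
    }

    void rt_freebytes(void *p, size_t nbytes)
    {
        if (p)
        {
            munlock(p, nbytes); /* harmless if the mlock() above failed */
            free(p);
        }
    }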
cheers ... tim
> If MCL_FUTURE has been specified and the number of locked pages exceeds the upper limit of allowed locked pages, then the system call which caused the new mapping will fail with ENOMEM. If these new pages have been mapped by the growing stack, then the kernel will deny stack expansion and send a SIGSEGV.
> to me this seems to be the reason for the soundfiler problems, but it can also reduce the stability of a working patch, so i'd suggest finding a way to solve this ...
what sets the upper limit actually?
> what sets the upper limit actually?
from what i understand there is no actual limitation in memory ...
i found this posting on the linux kernel mailing list by Chris Friesen:
and that the apps may segfault if they try to write to newly allocated memory and there is no more left. It will still be possible to segfault on newly allocated stack as well.
so it depends on _when_ it's written to the memory ...
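one way to act on that, assuming the buffer is filled from a non-realtime helper thread anyway: touch every page right after allocating, so the pages get committed there and not later in the audio thread ... a minimal sketch (prefault is an invented helper, not existing pd code):

    #include <stddef.h>
    #include <unistd.h>     /* sysconf(_SC_PAGESIZE) */

    /* touch one byte per page of a freshly allocated buffer so the
       kernel commits real pages right away; any page faults happen
       here, in the allocating thread, instead of later while the
       buffer is used in the realtime path. */
    static void prefault(char *buf, size_t nbytes)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        size_t i;
        if (pagesize <= 0)
            pagesize = 4096;            /* fallback */
        for (i = 0; i < nbytes; i += (size_t)pagesize)
            buf[i] = 0;
        if (nbytes)
            buf[nbytes - 1] = 0;        /* make sure the last page is touched too */
    }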
cheers ... tim
On Wed, Oct 20, 2004 at 12:29:36PM +0200, Tim Blechmann wrote:
> what sets the upper limit actually?
> from what i understand there is no actual limitation in memory ...
> i found this posting on the linux kernel mailing list by Chris Friesen:
> and that the apps may segfault if they try to write to newly allocated memory and there is no more left. It will still be possible to segfault on newly allocated stack as well.
> so it depends on _when_ it's written to the memory ...
> cheers ... tim
it seems this indeed can only be solved by allocating everything in advance.. so what you suggest might be a good idea.
maybe a special-purpose malloc would do the trick?
On Wed, Oct 20, 2004 at 11:26:56AM +0200, Tom Schouten wrote:
> it seems this indeed can only be solved by allocating everything in advance.. so what you suggest might be a good idea.
> maybe a special-purpose malloc would do the trick?
i mean, one that does not free the pages to the kernel. like pdp works :)
> it seems this indeed can only be solved by allocating everything in advance.. so what you suggest might be a good idea.
> maybe a special-purpose malloc would do the trick?
> i mean, one that does not free the pages to the kernel. like pdp works :)
that might be a solution ... or not allocating everything in advance, but using a helper thread that keeps a few mb allocated ... for the threaded soundfiler a few mb wouldn't be enough, though, since it works on a second array ...
the easy approach would be to mlock the parts of memory right after allocating ... the difficult approach would be to write an rt-safe malloc ...
from what i understand of the memory allocation in pdp, you keep track of the already allocated memory ... i'm not sure if this would solve our problem ... of course there are many allocations of the same size, but for other allocations, especially big audio arrays, it is very unlikely that the memory gets reused ... and that's exactly where the threaded soundfiler had its problems ...
btw, is the function setrlimit() only available on linux or also on osx and win32?
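setrlimit() / getrlimit() are POSIX, so they should be available on linux and osx ... plain win32 has no direct equivalent. the limit that matters for mlockall() is RLIMIT_MEMLOCK, at least for non-root processes ... a quick sketch of reading it and raising the soft limit (not pd code):

    #include <stdio.h>
    #include <sys/resource.h>   /* getrlimit(), setrlimit(), RLIMIT_MEMLOCK */

    /* print the current locked-memory limit and try to raise the soft
       limit up to the hard limit; raising the hard limit itself
       normally requires root. */
    static void show_memlock_limit(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
        {
            printf("RLIMIT_MEMLOCK: soft %llu, hard %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);
            rl.rlim_cur = rl.rlim_max;
            if (setrlimit(RLIMIT_MEMLOCK, &rl) < 0)
                perror("setrlimit");
        }
        else
            perror("getrlimit");
    }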
cheers ... tim
On Wed, Oct 20, 2004 at 03:39:48PM +0200, Tim Blechmann wrote:
> that might be a solution ... or not allocating everything in advance, but using a helper thread that keeps a few mb allocated ... for the threaded soundfiler a few mb wouldn't be enough, though, since it works on a second array ...
> the easy approach would be to mlock the parts of memory right after allocating ... the difficult approach would be to write an rt-safe malloc ...
always go for the difficult solution Paul Graham would say. :)
> from what i understand of the memory allocation in pdp, you keep track of the already allocated memory ... i'm not sure if this would solve our problem ...
no. in pdp it is simple, because all the packets (or most of them) have the same size. there is a reuse queue for each packet type. so this does not work if you use a lot of types.
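for what it's worth, a stripped-down sketch of that idea ... one free list per fixed block size, and freed blocks go back on the list instead of back to the kernel. the names reuse_alloc / reuse_free are invented, this is not pdp's actual code, and it isn't thread-safe as written:

    #include <stdlib.h>

    /* one size class with its own reuse queue: blocks are pushed back
       on the list when freed and picked up again on the next allocation
       of that size, so they stay mapped (and, with mlockall, locked). */
    typedef struct block
    {
        struct block *next;
    } t_block;

    #define BLOCKSIZE 4096          /* a single size class, for simplicity */
    static t_block *freelist = 0;   /* reuse queue for this size class */

    void *reuse_alloc(void)
    {
        if (freelist)
        {
            t_block *b = freelist;
            freelist = b->next;
            return b;
        }
        return malloc(BLOCKSIZE);   /* only hit malloc when the queue is empty */
    }

    void reuse_free(void *p)
    {
        t_block *b = (t_block *)p;
        b->next = freelist;         /* back on the queue, never free()d */
        freelist = b;
    }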
> of course there are many allocations of the same size, but for other allocations, especially big audio arrays, it is very unlikely that the memory gets reused ... and that's exactly where the threaded soundfiler had its problems ...
indeed. it's a problem that needs to be tackled at the root: malloc
real time memory allocation is pretty damn difficult to do.. hence real time GC is something most people avoid.
all the other things are workarounds for special cases. some might work now, but will probably bite back later.
> always go for the difficult solution Paul Graham would say. :)
debugging is always twice as difficult as writing ... so if you go for the difficult solution, you might not be able to debug it any more ;-)
> it's a problem that needs to be tackled at the root: malloc
> real time memory allocation is pretty damn difficult to do.. hence real time GC is something most people avoid.
> all the other things are workarounds for special cases. some might work now, but will probably bite back later.
hm ... i'll try to find a few books on memory management and garbage collection at the library tomorrow ...
cheers ... t