Something that is really slowing down my work recently:
In some patches I need hundreds of copies of the same abstraction. Saving just one copy of the abstraction only takes a fraction of a second, but when I have many hundreds of copies, Pd really stumbles trying to get all of those abstractions renewed to the newly saved version. It seems to be some kind of bottleneck, where the time taken to re-initialize all these abstractions together is much longer than it would take to initialize each of them separately.
In some cases it gets so bad that the only option I have is to close my master patch, open a single copy of the abstraction, edit and save it, and then re-open the master patch. Kind of a messy workaround, though.
Is this a known flaw? Are there any good workarounds?
Hi, this is because Pd maintains its internal structures as linked lists, which gives very bad behavior once there are many items. It has been discussed on the list a couple of times, but there's no solution at hand. You could try to arrange your load of abstractions into subpatches, with only a few items in each of them. Take the square root of your anticipated total number of abstractions; this gives you both the ideal number of subpatches and the ideal number of abstractions in each of them.
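For example (the numbers and the little program below are just an illustration of the rule of thumb, nothing Pd-specific): with 400 abstractions you would end up with about 20 subpatches holding roughly 20 abstractions each.

/* Throwaway sketch of the square-root rule above; "total" is only an
 * example figure, not anything read from Pd. */
#include <math.h>
#include <stdio.h>

int main(void) {
    int total = 400;                            /* anticipated number of abstractions */
    int per_sub = (int)ceil(sqrt(total));       /* abstractions per subpatch */
    int subs = (total + per_sub - 1) / per_sub; /* subpatches needed to hold them all */
    printf("%d abstractions -> %d subpatches of up to %d each\n",
           total, subs, per_sub);
    return 0;
}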
gr~~~
On Thu, 10 Jan 2008, Thomas Grill wrote:
this is because Pd maintains its internal structures as linked lists, which gives very bad behavior once there are many items.
I don't think that the use of linked lists so far has caused any significant problem. The problem lies more in things like recompiling the DSP many times in a row without any use for it, and such.
Linked-list problems can appear if you try to use a few hundred inlets in the same object, or a few thousand objects (or maybe fewer). Creating n [inlet]s dynamically takes O(n^3) time... it's as much a question of algorithms and of reducing the number of updates as it is a matter of linked lists. In that case you can reduce it to O(n^2 log n) by using a better sort algorithm, but you can reduce it to O(n^2) if you just sort less often, to O(n log n) if you do both, and to O(1) if Pd weren't relying on the x-position of [inlet] objects in the first place.
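To make the argument concrete, here is a toy C sketch (not Pd's actual code; the int array and the bubble sort just stand in for the inlet list being re-sorted by x-position): re-sorting after every insertion with an O(n^2) sort costs O(n^3) in total, while inserting everything first and sorting once costs O(n^2), or O(n log n) with a better sort.

#include <stdio.h>
#include <stdlib.h>

/* O(n^2) sort, standing in for a simple re-sort of the inlet list */
static void bubble_sort(int *a, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

int main(void) {
    enum { N = 1000 };
    int *xs = malloc(N * sizeof *xs);

    /* pattern 1: re-sort after every insertion -> n * O(n^2) = O(n^3) */
    for (int n = 0; n < N; n++) {
        xs[n] = rand();
        bubble_sort(xs, n + 1);
    }

    /* pattern 2: insert everything, then sort once -> O(n^2),
       or O(n log n) if bubble_sort were replaced by a better sort */
    for (int n = 0; n < N; n++)
        xs[n] = rand();
    bubble_sort(xs, N);

    printf("smallest x after sorting: %d\n", xs[0]);
    free(xs);
    return 0;
}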
Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada