Things are a lot more encapsulated in SuperCollider, and it can afford to be more efficient because 1) the DSP flow doesn't need to be in strict correspondence with what appears in a GUI graph, 2) the user is not responsible for the DSP flow unless they take it on deliberately with head/tail instantiation or node numbering, and 3) there are only a few ways of passing information between ugens: directly as arguments to another ugen, through variables, or over buses. Data in an SC Synth is more protected from the outside than anything in Pd is (abstractions in Pd still live in the global namespace despite the dollar-sign locality tricks), and the bus system controls I/O to and from Synths much more strictly. Pd doesn't enforce any of that because of its pledge to keep things global; this makes it extremely flexible, but at the price of a potentially more convoluted DSP graph.
What would be interesting to know is whether there are best practices for patching that would help out the ugen graph routine.
On Sun, Sep 20, 2015 at 12:30 AM, Jonathan Wilkes jancsika@yahoo.com wrote:
Matt-- I don't believe that bug has been fixed.
Roman-- I haven't looked closely at the relevant code, but it looks like Pd recalculates the graph-- a single graph for the running instance of Pd-- every time you add/remove a tilde object. (Not sure about control objects, but it's easy to test.)
The reason I'm comfortable speculating about this is the existence of wireless tilde objects like [throw~] and [catch~] which use global receiver names. When you change an object inside a tiny patch with 100 other patches open in the same Pd instance, how would Pd know that you aren't altering a [throw~] which has a [catch~] in one of the 100 other patches? Same for [send~]/[receive~], [delwrite~]/[delread~]/[vd~], [table]/[tab*~], etc.
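To make that concrete: the wireless pairing goes through Pd's single global symbol table, so the lookup can't be confined to one patch. Here's a sketch of the pattern-- the names and details are from memory, not verbatim from Pd's source:

    #include "m_pd.h"

    /* sketch of the [catch~] side: bind under a global symbol,
       visible to every patch in this Pd instance */
    typedef struct _sigcatch
    {
        t_object x_obj;
    } t_sigcatch;

    static t_class *sigcatch_class;

    static void *sigcatch_new(t_symbol *s)
    {
        t_sigcatch *x = (t_sigcatch *)pd_new(sigcatch_class);
        pd_bind(&x->x_obj.ob_pd, s);    /* one flat, global namespace */
        return (x);
    }

    /* sketch of the [throw~] side: at DSP-sort time, find the
       partner by name anywhere in the instance */
    static t_sigcatch *sigthrow_findpartner(t_symbol *s)
    {
        return ((t_sigcatch *)pd_findbyclass(s, sigcatch_class));
    }

So when the graph is sorted, a [throw~] in one patch may legitimately resolve to a [catch~] in any other open patch, which is why Pd can't safely re-sort just the patch you edited.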
Here's the dsp_tick routine in d_ugen.c:
    void dsp_tick(void)
    {
        if (dsp_chain)
        {
            t_int *ip;
            for (ip = dsp_chain; ip; )
                ip = (*(t_perfroutine)(*ip))(ip);
            dsp_phase++;
        }
    }
That is-- walk the single global dsp_chain, where each entry is a pointer to a perform routine followed by that routine's arguments; each routine returns a pointer to the next entry, and the loop runs until there are no more routines to execute.
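For anyone unfamiliar with that convention, here's a sketch of what one link in the chain looks like from an external's point of view (the object is hypothetical, but the dsp_add/perform idiom is the standard one):

    #include "m_pd.h"

    typedef struct _copy    /* hypothetical pass-through object */
    {
        t_object x_obj;
    } t_copy;

    /* w points at this routine's own entry in dsp_chain: w[0] is the
       routine itself, w[1..3] are its arguments; return w + 4, i.e. a
       pointer to the next entry in the chain */
    static t_int *copy_perform(t_int *w)
    {
        t_sample *in = (t_sample *)(w[1]);
        t_sample *out = (t_sample *)(w[2]);
        int n = (int)(w[3]);
        while (n--)
            *out++ = *in++;
        return (w + 4);
    }

    /* called during the graph sort: append the routine plus its
       3 arguments to the flat global chain */
    static void copy_dsp(t_copy *x, t_signal **sp)
    {
        dsp_add(copy_perform, 3, sp[0]->s_vec, sp[1]->s_vec,
            (t_int)sp[0]->s_n);
    }

Every tilde object in every open patch contributes entries like these to the same flat array, which is consistent with dsp_tick above.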
But this makes me wonder-- how does SuperCollider "do its thing"? It seems to have an interface for adding/removing parts of its DSP graph, and it can do so in a much more efficient manner.
-Jonathan
On Saturday, September 19, 2015 10:56 PM, Matt Barber brbrofsvl@gmail.com wrote:
One more thing to think about is how the DSP graph is handled using dynamic patching. For a long time there was a "bug" where the last audio object added didn't trigger a recalculation and would be left out of the DSP graph until the next edit. Is this still the case? The workaround, if I remember correctly, was to add one last dummy object at the end of dynamic patching.
Matt
On Thu, Sep 17, 2015 at 5:55 PM, Roman Haefeli reduzent@gmail.com wrote:
Hi all
First, I'm not even sure 'DSP graph' is the correct term. Pd's documentation[1] states that all DSP objects are internally arranged into a linear order, which I believe is often called the 'DSP graph'. There are apparently some actions that cause this DSP graph to be rebuilt. Rebuilding takes time and is often the cause of audio drop-outs. I would like a better understanding of the mechanics going on behind the scenes, in the hope of being able to optimize my Pd programming.
One thing I'd like to know: is there one graph for all patches in a given instance of Pd? Adding a tilde-object to a patch seems to cause the DSP graph to be recalculated. Now, if _everything_ is in the same graph, this would mean the whole graph needs to be recalculated whenever I add objects (or abstractions containing tilde-objects, for that matter), no matter where I put them. When adding a new object, it would make no difference whether I have one big patch with 1000 tilde-objects loaded or 100 smaller patches with 10 tilde-objects each, would it? Is the time it takes to recalculate the graph dependent only on the number of tilde-objects running in the current instance of Pd? If so, is that a linear correlation? Do 10 times more tilde-objects mean it takes 10 times as long to recalculate the graph? Or is it even exponential? There is no way to partition the graph and update only one partition, is there?
On a related note, I made the following observation and I'm wondering if/how it is related to the DSP graph: I create a minimalist patch with a small [table foo 100] and I measure the time it takes to 'resize' it to 99 with [realtime]. On my box, this takes 0.01 ms. I expected it to be fast, since memory access is very quick. Now, I additionally load a much more complex patch with many tilde-objects. I 'resize' the table again and it still takes only 0.01 ms. Then I put a [tabread~ foo] somewhere in the patch. Now, 'resize'-ing the table foo to 100 takes 20 ms. Even if I remove the [tabread~ foo] again, resizing the table still takes at least 20 ms. There is no way to make it fast again except restarting Pd. I also figured out that when only a non-tilde [tabread foo] is referencing the table I'm resizing, the resizing stays fast. Only when tilde-objects are referencing the table does resizing that very table become slow. The actual time seems to depend on the complexity of the loaded patch(es), and it also corresponds with the time it takes to send 'dsp 1' to Pd (when DSP is switched off).
Why is resizing tables so much slower, when tilde-objects are referencing it? I noticed that even resizing very small tables can be a cause for audio drop-outs. I wonder whether 'live-resizing' should be avoided altogether.
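My own (untested) guess at the mechanism, from skimming g_array.c: an array apparently carries a used-in-DSP flag that any tilde-object referencing it switches on, and resizing an array whose flag is set triggers a rebuild of the entire DSP graph. If the flag is never cleared, that would also explain why it stays slow after I remove the [tabread~ foo]. Paraphrased from memory, so the details may be off:

    /* paraphrased sketch, NOT a verbatim quote of g_array.c */
    typedef struct _garray
    {
        t_gobj x_gobj;
        /* ... */
        int x_usedindsp;    /* set by tabread~ & co., never cleared */
    } t_garray;

    void garray_usedindsp(t_garray *x)  /* called by tilde readers */
    {
        x->x_usedindsp = 1;
    }

    static void garray_resize(t_garray *x, t_floatarg f)
    {
        /* ... reallocate the array's memory ... */
        if (x->x_usedindsp)
            canvas_update_dsp();    /* re-sorts the WHOLE dsp chain */
    }

If that reading is right, resizing a DSP-referenced table costs a full graph rebuild, which would match the 'dsp 1' timing I measured.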
Yeah, that's a bunch of questions... Even knowing the answers to only some of them might clear things up quite a bit.
Roman
[1] "Pd sorts all the tilde objects into a linear order for running." ( http://msp.ucsd.edu/Pd_documentation/x2.htm#s4.2 )