-----Original Message----- From: Günter Geiger [mailto:geiger@xdv.org] Sent: Saturday, 15 June 2002 12:49 AM
On Fri, 14 Jun 2002, Daniel Heckenberg wrote:
To solve this I added a stoprender function alongside the render and postrender functions that are kept in the DAG list. Now a per-input stoprender function can simply clear that input's cache, and if/when the object is deleted the unbroken DAGs can be properly broken. There may well be a better/more elegant way to do this, but this seems to fit with the present code.
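Roughly, the shape of it, as a minimal sketch with made-up names rather than the actual GEM structures: each entry in the DAG list carries a stoprender callback next to its render/postrender callbacks and its per-input cache, and tearing the chain down just walks that list.

#include <vector>

// Hypothetical stand-ins, not the real GEM types.
struct PixCache {
    void *data = nullptr;            // cached image for one input
    void clear() { data = nullptr; }
};

struct DagNode {
    void *object = nullptr;                                    // the GEM object this entry belongs to
    void (*render)(void *obj, PixCache *cache) = nullptr;
    void (*postrender)(void *obj, PixCache *cache) = nullptr;
    void (*stoprender)(void *obj, PixCache *cache) = nullptr;  // new: per-input teardown
    PixCache *cache = nullptr;                                 // the cache for this particular input
};

// Tearing down the chain walks the same list the renderer keeps, so each
// node can clear its own input cache before the object is deleted and the
// DAG is properly broken.
void stopRenderChain(std::vector<DagNode> &dag)
{
    for (DagNode &node : dag) {
        if (node.stoprender && node.cache)
            node.stoprender(node.object, node.cache);
    }
    dag.clear();
}

The point being that teardown uses the list GEM already keeps, so it doesn't depend on the patch's connections still being intact.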
The other obvious alternative is to pass the message down the PD connections (as happens when the DAG is constructed).
I am not really into the source yet, but my feeling is that the message-passing solution might be more the way it should be. The "render chain" or DAG (what does this stand for?) commands should probably really only do the rendering... I'll have to look at it further; maybe Mark or Johannes can tell you more about this.
Yup. It would seem to be easier and more elegant to do it that way: e.g. use the same message that creates the chain but with a NULL cache pointer... but as GEM doesn't know about PD connection making or breaking, I think that the connections could change between when the render chain is made and when the chain is destroyed. I haven't looked into this in detail but it seems more robust and straightforward to use the existing chain itself to do the stoprender.
dag = directed acyclic graph (from memory).
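For comparison, roughly what the message-passing alternative would look like; hypothetical names only, not the real GEM/pd code: the same message that builds the chain is re-sent, but with a NULL cache pointer, and each object forwards it down its connection.

// Hypothetical illustration only; the names and types are made up.
struct RenderMessage {
    void *cache;   // non-NULL while building the chain, NULL on teardown
};

struct ChainObject {
    void *myCache = nullptr;            // this input's cache, set while building
    ChainObject *downstream = nullptr;  // stands in for the pd connection

    void receive(const RenderMessage &msg)
    {
        if (msg.cache == nullptr)
            myCache = nullptr;          // teardown: forget the cached input
        else
            myCache = msg.cache;        // build: remember this input's cache

        // Forward the same message down the connection, as happens when the
        // DAG is constructed. This is the weak spot: if the patch's
        // connections changed after the chain was built, the teardown
        // message follows the new wiring, not the chain that actually exists.
        if (downstream)
            downstream->receive(msg);
    }
};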
Looking through the per-input code above, the obvious question is "how to make this easily extend to n-input objects?". The motivation for n-input objects seems pretty clear: e.g. doing n-layer compositing would be much neater and faster that way.
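For instance, an n-input pix object might look something like this; hypothetical code, not the real pix_ API, just to show the shape: one cache slot per inlet, and a single render pass that composites whatever inputs are currently valid.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical n-input compositor, not an actual GEM class.
struct PixImage {
    std::vector<uint8_t> rgba;                  // packed RGBA pixels
    bool valid() const { return !rgba.empty(); }
};

struct PixCompositeN {
    std::vector<PixImage> inputs;               // one cache slot per inlet

    explicit PixCompositeN(std::size_t n) : inputs(n) {}

    // Per-input stoprender: drop just that inlet's cached image.
    void stoprender(std::size_t inlet) { inputs[inlet] = PixImage(); }

    // Composite every valid input into `out` with a simple additive blend.
    void render(PixImage &out)
    {
        for (const PixImage &in : inputs) {
            if (!in.valid() || in.rgba.size() != out.rgba.size())
                continue;                       // skip empty or mismatched inputs
            for (std::size_t i = 0; i < out.rgba.size(); ++i) {
                unsigned sum = out.rgba[i] + in.rgba[i];
                out.rgba[i] = static_cast<uint8_t>(sum > 255u ? 255u : sum);
            }
        }
    }
};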
Ah, forget about the point I was making in my last mail about that; there are probably hundreds of conceivable effects that might use three or more images as input.
Making image processing work in a similar way to how signal processing is done in pd would be really cool. Having a basic set of objects to build homegrown video effects...
Yes - it would be great.
There are a couple of related projects (of which you're probably aware) for video in jMax: GridFlow and VideoDSP. If/when I get a chance I'll look at some of the image processing objects for jMax to see if there's any chance to make cross-development easy between those objects and the GEM pix_s. It would be good to share objects across the different systems and share the task of optimising things.
Daniel