Sorry for the delay. I have barely worked on GridFlow since February.
Here's the reply.
On 2011-02-27, at 19:35:00, Matteo Sisti Sette wrote:
> On 02/27/2011 06:43 PM, Mathieu Bouchard wrote:
>> Are you trying it with an image size much larger than what you really need to analyse?
> No I wasn't, but I haven't really tried blob detection. I just tried some very basic image processing, such as mixing two images and changing the colors (you know, the basic stuff you can find in the example patches), and the CPU was already heavily loaded with relatively small images.
There's a problem with number types... the default number type has a lot more range than is usually needed, and the other number types aren't so easy to use. If this were dealt with, the average GridFlow experience would be a lot faster. You can see alternate number types in several of the examples. As it is now, a GridFlow grid often takes 2 or 4 times the amount of RAM it needs. This has been optimisable for many years, but so far you have to learn the extra syntax to do it.
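To give a rough idea of what that factor means (plain C for illustration, not GridFlow code; 640x480 is just an example size):

    #include <stdio.h>

    int main(void) {
        long n = 640L * 480 * 3;  /* values in one 640x480 RGB frame */
        printf("as 32-bit ints : %ld bytes (~%.1f MB)\n", n * 4, n * 4 / 1048576.0);
        printf("as 8-bit bytes : %ld bytes (~%.1f MB)\n", n * 1, n * 1 / 1048576.0);
        return 0;
    }

That's about 3.5 MB versus 0.9 MB per frame, which is where the factor of 4 comes from.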
OTOH, the looser ranges mean that you more naturally avoid clipping your RGB space, so you don't have to think about it. In GEM, you don't even have the option of bigger ranges (all pixel values go from 0 to 255).
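For instance (plain C, only to show the headroom idea; what a given 8-bit pipeline does on overflow, wrap or clamp, depends on the operation):

    #include <stdio.h>

    int main(void) {
        unsigned char a8 = 200, b8 = 180;
        int a32 = 200, b32 = 180;
        unsigned char sum8 = a8 + b8;  /* 380 doesn't fit in 8 bits: wraps to 124 */
        int sum32 = a32 + b32;         /* 380 is kept and can be rescaled later */
        printf("8-bit: %d, 32-bit: %d\n", sum8, sum32);
        return 0;
    }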
> I was just expressing the fact that the power of manipulating raw pixels as matrices in a patching environment such as Pd strikes me as "frustratingly attractive", where the frustration comes from not being able to achieve enough efficiency to manipulate images of "reasonable size".
Perhaps threaded IO would help with those things: if most [#in] and [#out] plugins used threads, they could use the 2nd CPU that most people have, and that alone would already be a relief.
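Roughly, the idea would be something like this (a pthread sketch; read_frame() and the single-frame buffer are made up for illustration, this is not how [#in]/[#out] are written): the slow decode runs in its own thread, and only the hand-over of a finished frame touches the shared state.

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    #define W 640
    #define H 480
    #define SZ (W * H * 3)

    static unsigned char frame[SZ];          /* latest finished frame */
    static int frame_ready = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void read_frame(unsigned char *dst) {
        memset(dst, 0, SZ);                  /* stand-in for a slow file/camera decode */
    }

    static void *reader(void *arg) {
        unsigned char *local = malloc(SZ);
        (void)arg;
        for (;;) {
            read_frame(local);               /* slow I/O stays off the main thread */
            pthread_mutex_lock(&lock);
            memcpy(frame, local, SZ);        /* quick hand-over under the lock */
            frame_ready = 1;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, reader, NULL);
        for (;;) {                           /* main (message) thread; in a real
                                                external this would be driven by a
                                                Pd clock, not a busy loop */
            pthread_mutex_lock(&lock);
            int ready = frame_ready;
            frame_ready = 0;
            pthread_mutex_unlock(&lock);
            if (ready) { /* ...send the grid downstream... */ }
        }
    }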
> But I think the limitation is mostly intrinsic to doing it in an "interpreted" environment.
Much of GridFlow is designed to be quite fast in an interpreted environment, by doing lots of work per message so that you don't need to send many messages, but it is still quite inefficient at certain things, such as copying too much RAM. Much of that has to do with GridFlow never requiring something like [pix_separator] ([#t] is not an equivalent of [pix_separator]).
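To illustrate the "lots of work per message" part (a plain C sketch, not GridFlow internals; brighten_one/brighten_grid are made-up names): doing the work one value at a time pays the per-call overhead nearly a million times per frame, while one call over the whole grid pays it once.

    #include <string.h>

    /* per-element style: the dispatch overhead is paid once per value */
    static void brighten_one(unsigned char *px, int amount) {
        int v = *px + amount;
        *px = v > 255 ? 255 : (unsigned char)v;
    }

    /* per-grid style: the overhead is paid once, and the inner loop stays tight */
    static void brighten_grid(unsigned char *buf, long n, int amount) {
        for (long i = 0; i < n; i++) {
            int v = buf[i] + amount;
            buf[i] = v > 255 ? 255 : (unsigned char)v;
        }
    }

    int main(void) {
        static unsigned char buf[640 * 480 * 3];
        memset(buf, 100, sizeof buf);
        for (long i = 0; i < (long)sizeof buf; i++)
            brighten_one(&buf[i], 50);       /* one "message" per value */
        brighten_grid(buf, sizeof buf, 50);  /* one "message" per frame */
        return 0;
    }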
> I mean, I don't think GridFlow could be much faster than it is, or could it?
It probably could be much faster, yes. I just stated 3 ways in which it could.
> Something that would be great would be a "Pd/GridFlow-like" patching environment that would compile your patch into shaders and have the GPU do the computation, but in a completely transparent way... Do you know if something like that already exists?
There's Quartz Composer, but I wouldn't use that.
Also, GPUs have some quite harsh limitations. There are things that are hard to do outside of the CPU, and generally, even for things that are doable on a GPU, so much code would have to be half-rewritten to fit on it that it takes a lot of effort and will never be fully automatic (as long as we're in the current GPU paradigm).
| Mathieu BOUCHARD ----- telephone: +1.514.383.3801 ----- Montréal, QC