chris clepper wrote:
>> ad a) of course c-coded functions will always be faster than abstractions because of their very nature. but then, it might well be that the speed loss isn't that dramatic.
> right. how about starting with the abstractions and if someone goes and optimizes them into C code then that could be available as well. one of
good (for my ego ;-)). this would produce a fast-growing number of fx ("this can be done with Gem") and a slower-growing number of optimized fx ("even at the same time").
> the reasons i would like to have the specific coded convolution objects is that they make more sense for yuv than a generic kernel does. for example, edge detection usually focuses on luma only, so that right there is a pretty big increase in efficiency for yuv (slower for rgb).
yes, i've seen that you can reduce the maths a lot with yuv and a non-generic kernel. (on the other hand, as you say, for most yuv-convolutions only the luma channel is needed. it would be good/fast to have a (not-so-)generic object that does exactly this. and let's call it [yuv_convolution] and hey! here we are again - it is a doom loop.)
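just to make this concrete, here is a rough sketch (plain C++; the function name and the UYVY packing assumption are mine, this is not actual Gem code) of what a luma-only 3x3 convolution over packed yuv422 data could look like:

#include <cstdint>
#include <cstring>

// luma-only 3x3 convolution on packed UYVY (yuv422) frames.
// only the Y bytes are read and written; chroma and the 1-pixel
// border are copied through untouched.
static void convolve_luma_3x3(const uint8_t *src, uint8_t *dst,
                              int width, int height,
                              const int kernel[9], int divisor)
{
    const int stride = width * 2;                    // 2 bytes/pixel in yuv422
    std::memcpy(dst, src, (size_t)stride * height);  // chroma + borders
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int sum = 0, k = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++, k++)
                    // "+ 1" is the offset of Y in a UYVY pair (use + 0 for YUYV)
                    sum += kernel[k] * src[(y + dy) * stride + (x + dx) * 2 + 1];
            sum /= divisor;
            if (sum < 0) sum = 0; else if (sum > 255) sum = 255;
            dst[y * stride + x * 2 + 1] = (uint8_t)sum;
        }
    }
}

compared to a generic rgba convolution that touches 4 channels per pixel, this reads and writes one byte per pixel and never even looks at the chroma.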
> i ran a convolution on a p4/2.5GHz/winXP box the other day and it took 40% cpu to process the homer.avi!! convolution is obviously something that requires a great deal of optimization for use in a real-time environment, so i'm all for doing as much as possible.
yes, true.
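back-of-the-envelope (assuming homer.avi is somewhere around 320x240): a generic 3x3 kernel over an rgba frame means 320*240 pixels * 4 channels * 9 taps ≈ 2.8 million multiply-adds per frame, i.e. over 80 million per second at 30fps, before counting any loads/stores. a luma-only yuv version cuts that by a factor of 4 right away.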
> both the abstracted version and the higher-level built-in objects can exist at the same time. so there could be a patch that illustrates how [pix_convolve] can be used to do all sorts of processes. some people would want to check this patch out and figure out how convolution works, while others might not care at all. it's best to leave this as an option to the user, right?
yes, i guess so. (sigh ;-))
>> ad c) obviously i cannot make a [pix_smooth] if there is no possibility to apply a convolution kernel.
> you do have a point about not being able to make _every_ possible convolution into its own object, but that's why [pix_convolve] exists. i think that edge-detection and enhancement, blur, sharpen, embossing, and directional blur might cover the 'basics' of convolution. having the most common convolution processes available will let users know that they can apply these processes.
maybe that's my ignorance of image-processing: i thought that some of these really only differ in the convolution kernel and are not further optimizable (for instance: no zeros, or only sparse zeros, in the kernel). so these objects would all inherit from [pix_convolve] and only set the convolution kernel to a (scalable) constant.
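for instance (a sketch only; the class names are invented and this is not the real Gem hierarchy): blur, sharpen, emboss and friends really are just fixed 3x3 kernels (blur = all ones over 9, sharpen = 0,-1,0 / -1,5,-1 / 0,-1,0, ...), so such objects could be little more than:

#include <algorithm>

// hypothetical generic base: owns the kernel, does the actual pixel loop
struct PixConvolveBase {
    int kernel[9];
    int divisor;
    virtual ~PixConvolveBase() {}
    // ... processRGBA()/processYUV() would live here ...
};

// [pix_sharpen]: nothing but a fixed kernel
struct PixSharpen : PixConvolveBase {
    PixSharpen() {
        static const int k[9] = { 0, -1, 0, -1, 5, -1, 0, -1, 0 };
        std::copy(k, k + 9, kernel);
        divisor = 1;
    }
};

// [pix_smooth]: the "scalable constant" case - one strength parameter
struct PixSmooth : PixConvolveBase {
    explicit PixSmooth(int strength = 1) {
        std::fill(kernel, kernel + 9, strength);
        kernel[4] = 8 * strength;   // weight the centre pixel
        divisor = 16 * strength;    // keep the overall gain at 1
    }
};

that's the whole "specialized object": set the kernel in the constructor, inherit everything else.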
and here we go:
> of course some sort of documentation/tutorial needs to facilitate the learning process so people can make the jump from using specialized objects to the more general tools.