hi guenter and the others! .....
On Friday, 15 June 2001 at 21:35, guenter geiger wrote:
On Thu, 14 Jun 2001, Michael Droettboom wrote:
TECHNICAL DETAIL: My overall architecture is to pass video data as if it were very large blocks of audio data between objects. My video_in_rgb object, for instance, outputs three data streams, one each for red, green and blue.
All of this sounds great, but I do have one question: why did you choose to implement your own data processing concept instead of using the pix objects from GEM?
Image data is inherently different from audio data, so the gain from being able to reuse the pd signal processing objects (which are optimized for audio calculations) doesn't seem to be worth it.
Or, put the other way: what was it that you didn't like about the GEM way?
Guenter
just my thoughts about it:
we're currently implementing video in jmax, and i follow a similar approach. what i do is split the 32 bits of a float into a union of 4x8 bits for rgba, called pixel_t. these are "flowing around" like the normal audio data, but in a separate chain, called vdsp.
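to make that concrete, here is a minimal sketch of what such a pixel_t could look like in C (the field names are my own guesses, not the actual jmax source):

    /* minimal sketch: the 32 bits of a float reinterpreted as
     * four 8-bit rgba channels via a union.  field names are
     * illustrative, not from the actual jmax code. */
    #include <stdint.h>

    typedef union {
        float   f;      /* the value as it travels through the signal chain */
        uint8_t ch[4];  /* the same 32 bits seen as r, g, b, a */
    } pixel_t;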
the advantage is that you can do most of the calculations on a stream rather than on a whole picture. imagine the following situation:
one video source, some calculations, and maybe 4 effects (still on one video, of course). most of the time nothing happens, but if you work on whole images, you have to do *all* calculations on a whole image at once, spending a lot of time *momentarily*. if you stream them instead, then for additions etc. you only have to calc, let's say, 2048 pixels per tick (together with a smaller number of audio samples). this way you can "distribute" the computation power needed across small vectors of pixels, and not a whole image.
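as a sketch, such a vdsp perform routine for adding two streams could look like this (the function name and calling convention are made up, only the block idea is from the text above):

    #include <stdint.h>

    typedef union { float f; uint8_t ch[4]; } pixel_t;  /* as sketched above */

    #define VDSP_BLOCK 2048  /* pixels per tick, as in the example above */

    /* hypothetical vdsp perform routine: saturating add of two pixel
     * streams, one small block at a time, so the cost per tick stays
     * roughly constant instead of spiking once per frame. */
    static void vdsp_add_perform(const pixel_t *in1, const pixel_t *in2,
                                 pixel_t *out, int n)
    {
        for (int i = 0; i < n; i++) {
            for (int c = 0; c < 4; c++) {
                int sum = in1[i].ch[c] + in2[i].ch[c];
                out[i].ch[c] = (uint8_t)(sum > 255 ? 255 : sum);
            }
        }
    }

    /* called once per tick: vdsp_add_perform(a, b, out, VDSP_BLOCK); */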
if you need calculations that only work on a whole image (like distortions or so), you can always accumulate a whole image in a buffer and apply the calculation once the frame is complete.
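again just a sketch (the frame size and all names are illustrative): the streamed blocks get copied into a frame buffer, and the whole-image operation runs only when the frame is full:

    #include <stdint.h>
    #include <string.h>

    typedef union { float f; uint8_t ch[4]; } pixel_t;  /* as sketched above */

    #define FRAME_W 320
    #define FRAME_H 240
    #define FRAME_PIXELS (FRAME_W * FRAME_H)

    typedef struct {
        pixel_t frame[FRAME_PIXELS];
        int     filled;  /* pixels received so far */
    } framebuf_t;

    /* collect streamed blocks into a frame buffer; when the frame is
     * complete, run the whole-image calculation and start over. */
    static void framebuf_perform(framebuf_t *fb, const pixel_t *in, int n)
    {
        while (n > 0) {
            int space = FRAME_PIXELS - fb->filled;
            int take  = n < space ? n : space;
            memcpy(fb->frame + fb->filled, in, take * sizeof(pixel_t));
            fb->filled += take;
            in += take;
            n  -= take;
            if (fb->filled == FRAME_PIXELS) {
                /* whole-image calculation goes here, e.g. a distortion */
                fb->filled = 0;
            }
        }
    }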
hope that cleared it up a little? (if anyone is interested, please contact me.....)
greets,
chris