On Apr 28, 2005, at 12:29 PM, IOhannes m zmoelnig wrote:
chris clepper wrote:
On Apr 28, 2005, at 2:23 AM, IOhannes m zmoelnig wrote:
in theory(!) we should be able to handle this (or most of it) by just fixing some preprocessor defines in GemPixUtil.h and recompiling the whole thing. this of course implies that people actually used those defines.
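...for illustration, i take "fixing the defines" to mean something along these lines (the names/values here are made up -- the actual defines in GemPixUtil.h may well differ):

  /* illustrative only: pick the channel offsets per packed-422 byte order */
  #ifdef GEM_YUVS_VIDEO          /* 'yuvs': Y0 U Y1 V */
  # define chY0 0
  # define chU  1
  # define chY1 2
  # define chV  3
  #else                          /* '2vuy': U Y0 V Y1 */
  # define chU  0
  # define chY0 1
  # define chV  2
  # define chY1 3
  #endif

  /* scalar code that indexes through the defines survives a recompile;
     anything that hardcodes byte positions (or altivec permute constants,
     see below) does not */
  pixels[chY0] = y0; pixels[chU] = u; pixels[chY1] = y1; pixels[chV] = v;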
It would be worth trying to do this on the fly for the codecs and input devices that use 'yuvs' instead of '2vuy'.
...so to sum up, we made a decision way back when to use 2vuy instead of yuvs, what with both being supported by the APPLE_ycbcr_422 gl extension for uploading textures...from the extension spec:
A new <format> is added, YCBCR_422_APPLE. Additionally, to handle the difference in pixel size and byte ordering for 422 video, the pixel storage operations treat YCBCR_422_APPLE as a 2 component format using the UNSIGNED_SHORT_8_8_APPLE or UNSIGNED_SHORT_8_8_REV_APPLE <type>.
The '2vuy' or k2vuyPixelFormat pixel format is an 8-bit 4:2:2 Component Y'CbCr format. Each 16 bit pixel is represented by an unsigned eight bit luminance component and two unsigned eight bit chroma components. Each pair of pixels shares a common set of chroma values. The components are ordered in memory; Cb, Y0, Cr, Y1. The luminance components have a range of [16, 235], while the chroma value has a range of [16, 240]. This is consistent with the CCIR601 spec. This format is fairly prevalent on both Mac and Win32 platforms. The equivalent Microsoft fourCC is 'UYVY'. This format is supported with the UNSIGNED_SHORT_8_8_REV_APPLE type for pixel storage operations.
The 'yuvs' or kYUVSPixelFormat is an 8-bit 4:2:2 Component Y'CbCr format. Identical to the k2vuyPixelFormat except each 16 bit word has been byte swapped. This results in a component ordering of; Y0, Cb, Y1, Cr. This is the most prevalent yuv 4:2:2 format on both Mac and Win32 platforms. The equivalent Microsoft fourCC is 'YUY2'. This format is supported with the UNSIGNED_SHORT_8_8_APPLE type for pixel storage operations.
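...for upload purposes the only thing that differs between the two is the <type> argument; a rough sketch of what that switch might look like (assuming a plain glTexImage2D path and that m_image is the usual imageStruct -- the isYuvs flag is just illustrative):

  /* pick the pixel storage <type> to match what the decoder handed us */
  GLenum type = isYuvs ? GL_UNSIGNED_SHORT_8_8_APPLE      /* 'yuvs': Y0 Cb Y1 Cr */
                       : GL_UNSIGNED_SHORT_8_8_REV_APPLE; /* '2vuy': Cb Y0 Cr Y1 */

  glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA,
               m_image.xsize, m_image.ysize, 0,
               GL_YCBCR_422_APPLE,    /* same <format> either way */
               type, m_image.data);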
...so, does this mean we could get away with just changing the pixel storage type based on what's been decoded? Of course, doing any processing would also require the ability to switch the altivec computations, which is what IOhannes started to get at with the following:
what i mean is (since i keep saying "what i mean", i guess i am starting to have serious problems expressing myself...) that the preprocessor defines - like "chU" or "chRed" - might not be used that much in the altivec-code: i think the byte-ordering is somewhat hardcoded in the altivec (or of course MMX, to not be unfair) instructions as they are used in Gem.
...it may be as easy as making a selection of permutation vectors, one to deal with yuvs, the other with 2vuy...this is very easily done in the GemPixUtil code, but I haven't really surveyed the pix_* processing code...
Altivec giveth and also taketh away, but in this case the bad bit of QT code is something that should use Altivec and doesn't. Probably the best way to go is to write an Altivec pixel packing swizzler routine from scratch. A 12-15% hit is a lot better than 40%, and although my G4 plays a single 720p stream, this is really something for G5s only, where the penalty is likely to be much smaller.
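...the scalar version of such a swizzler is just a byte swap within every 16-bit word (Cb Y0 Cr Y1 <-> Y0 Cb Y1 Cr), so the same routine converts in either direction; an untested sketch (the name swab422 is made up):

  void swab422(unsigned char *data, int xsize, int ysize)
  {
    long pixels = (long)xsize * ysize;   /* 2 bytes per pixel in packed 4:2:2 */
    while (pixels > 0) {
      unsigned char t;
      t = data[0]; data[0] = data[1]; data[1] = t;  /* swap bytes 0/1 */
      t = data[2]; data[2] = data[3]; data[3] = t;  /* swap bytes 2/3 */
      data += 4;                                    /* 4 bytes = 2 pixels */
      pixels -= 2;
    }
  }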
yes, i guess such a swizzler wouldn't be a bad idea. one unfortunate thing is that currently all the colorspace-conversion routines in GemPixUtil.h assume that the source and destination buffers are different, and i think that in-place manipulation might be faster. but after more thinking i guess that something like the following might well work in place: m_image.setCsizeByFormat(YUVS); m_image.from2VUY(m_image.data);
(just thinking aloud)
...this is very fast, as it's just a permute, and we get 8 packed pixels per 16-byte vector...
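...roughly like this (untested; assumes 16-byte aligned data, a pixel count divisible by 8, and apple gcc's -faltivec vector literal syntax):

  void swab422_altivec(unsigned char *data, int xsize, int ysize)
  {
    /* permute control that swaps the two bytes of every 16-bit word,
       i.e. 2vuy <-> yuvs, 8 packed-422 pixels per vector */
    const vector unsigned char swap = (vector unsigned char)
      ( 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 );
    vector unsigned char *v = (vector unsigned char *)data;
    long n = ((long)xsize * ysize) / 8;        /* vectors to process */
    while (n--) {
      *v = vec_perm(*v, *v, swap);             /* in place: src == dst */
      v++;
    }
  }

...and since each vec_perm reads and writes the same 16 bytes, doing it in place as in m_image.from2VUY(m_image.data) above should be fine...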
is there a way to detect whether an arbitrary movie is stored in YUVS or in 2VUY before doing the actual decoding? so that if the slow QuickTime conversion would otherwise be used, you can take the short way to the "bad" format and either do the conversion "by hand" or upload it directly to the gfx card without having to bother the other pix-objects.
...a longterm to-do has been to change the quicktime code to a "decompression sequence", which would give us a lot of flexibility in telling quicktime how to behave, and therefore possibly avoid the penalties of it making the wrong/slow choice...
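...in the meantime, the codec fourcc at least is easy to peek at before setting anything up; a sketch (error handling omitted, "media" assumed to be the movie's video Media):

  ImageDescriptionHandle desc = (ImageDescriptionHandle)NewHandle(0);
  GetMediaSampleDescription(media, 1, (SampleDescriptionHandle)desc);
  CodecType codec = (*desc)->cType;      /* e.g. 'dv  ', 'mjpa', 'jpeg', ... */
  DisposeHandle((Handle)desc);
  /* ...then look the codec up in a table mapping it to its "natural"
     output format ('yuvs' vs '2vuy') -- which is exactly the list
     i'm asking about below... */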
...does anyone know of an up-to-date list of codecs and their "natural" decompression colorspaces?
l8r, jamie