On 12/03/2010 05:06 AM, Mathieu Bouchard wrote:
> On Wed, 1 Dec 2010, IOhannes m zmoelnig wrote:
>> - [glsl_vertex] opens a shader-file, compiles it, retrieves the
>> shader-ID, converts it to t_float using a reinterpretation cast
>> (t_float/GLuint union) and sends it out - the shaderID shows as "0",
>> [change] doesn't let it through, thus the shader is not linked and
>> not run :-(
> If shader-IDs don't use all the bits in the GLuint, you may be able to
> cast it to float the normal way... you just need to make sure it's
> always less than 16777216.
which unfortunately is not true. this was the reason why the
reinterpret_cast was introduced in the first place, as some openGL
implementations (ati/radeon iirc) were using large uints for
representing shader IDs.
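for reference, the reinterpretation was basically a type-pun through a
union, something like this (simplified sketch, not the literal Gem code;
assumes a 32bit t_float): a plain cast only preserves integers up to
2^24 exactly, so the bits were passed through untouched instead.

/* simplified sketch of the union-based reinterpretation (assumes a
 * 32bit t_float, i.e. a default Pd build): the GLuint bit-pattern is
 * shipped through a float outlet unchanged, so even huge IDs survive.
 */
#include "m_pd.h"
#include <GL/gl.h>

typedef union {
  t_float f;
  GLuint  u;
} t_idcast;

static t_float id2float(GLuint id) {
  t_idcast c;
  c.u = id;
  return c.f;  /* meaningless as a number, but the bits stay intact */
}

static GLuint float2id(t_float f) {
  t_idcast c;
  c.f = f;
  return c.u;
}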
> If not, then could you fake it? Make a table of all the existing IDs
> and use ID-indices all over, instead of the IDs themselves. It's a bit
> like we have symbols instead of strings.
this is what i've done now.
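roughly like this (simplified sketch, not the actual commit): the object
registers the real GLuint in a table and only ever sends the small table
index into the patch, which any float can represent exactly.

/* simplified sketch of the ID-table approach: keep the real GLuints
 * here and hand out small indices instead (like symbols vs. strings).
 */
#include <GL/gl.h>

#define MAX_SHADERS 1024
static GLuint shader_table[MAX_SHADERS];
static int    shader_count = 0;

/* register a freshly compiled shader; returns its index, or -1 if full */
static int shaderid_register(GLuint id) {
  if (shader_count >= MAX_SHADERS) return -1;
  shader_table[shader_count] = id;
  return shader_count++;
}

/* map the index travelling through the patch back to the real GL id */
static GLuint shaderid_lookup(int index) {
  if (index < 0 || index >= shader_count) return 0;
  return shader_table[index];
}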
>> clearly this is mainly a problem in debian's way to compile Gem (i
>> guess some SSE-enable/disable/cleanup thing), though if anybody can
>> illuminate me so i can fix it, i would be thankful...
> It's easier to just drop the reinterpret_cast. Note that if this were
> a float being reinterpreted as an int and back, it wouldn't be a
> problem, because the int format has no special values.
again, in theory it worked fine (both GLuint and t_float being 32bit).
the problem came from the use of [change] in virtually any patch using
shaders, where the test for equality wouldn't trigger with some numbers
(denormals, i believe).
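for illustration (standalone test, nothing Gem-specific): any nonzero ID
below 0x00800000 maps to a denormal float, and a build with
flush-to-zero / denormals-are-zero (SSE plus -ffast-math and friends)
treats those as 0, which is exactly the "shaderID shows as 0" symptom.

/* standalone illustration: a small uint reinterpreted as a 32bit float
 * is a denormal; with FTZ/DAZ enabled it gets handled as 0, so both the
 * printed ID and any equality test on it go wrong.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>

int main(void) {
  uint32_t id = 3;             /* a typical small shader ID */
  float f;
  uint32_t back;

  memcpy(&f, &id, sizeof(f));  /* same trick as the union */
  printf("id %u as float: %g (denormal: %s)\n",
         id, f, (f != 0.0f && f < FLT_MIN) ? "yes" : "no");

  memcpy(&back, &f, sizeof(back));
  printf("round-trip: %u\n", back);  /* 3 here, but 0 once the float
                                        got flushed to zero somewhere */
  return 0;
}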
come to think of it, it was probably a bad idea to output the shaderID each render cycle rather than only when the shader was created. if the ID was only output once, there would be no need for a [change] at all. i'll probably add this as well.
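something along these lines (rough sketch with hypothetical struct and
method names, not the actual Gem code):

/* rough sketch (hypothetical names): emit the ID from the open/compile
 * method instead of the per-frame render method, so downstream objects
 * see it exactly once and no [change] is needed.
 */
#include "m_pd.h"

typedef struct _glsl_demo {
  t_object  x_obj;
  t_float   x_shader_index;  /* index into the ID table from above */
  t_outlet *x_idout;
} t_glsl_demo;

static void glsl_demo_open(t_glsl_demo *x, t_symbol *filename) {
  (void)filename;
  /* ...compile the shader, register its GLuint, store the index... */
  outlet_float(x->x_idout, x->x_shader_index);  /* output once, here */
}

static void glsl_demo_render(t_glsl_demo *x) {
  (void)x;
  /* ...rendering only; the ID is not re-sent every frame... */
}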
fgmasdr IOhannes