Hi,
The subject is very similar to that of a recent message of mine, but it is NOT the same.
I think it's better to start a new thread, because the previous one was erroneously focused on rectangle textures: I had been advised to use them to solve a problem I mentioned, and though I very much appreciate the suggestion, it actually pointed me in the wrong direction.
So, I have learned that when you use an image as a texture (be it loaded from a file, generated by camera input, or by a [pix_snap]), the texture internally has power-of-two dimensions greater than or equal to the actual dimensions of the image, with the rest padded with black or something like that.
Then, I've seen that even if I use a shader, I usually don't have to worry about that IF the [pix_image] (or [pix_video] or whatever) that generates the image is in the same "chain" (of left-inlet connections) as the [pix_texture] and the [glsl_program]. That is, when I use gl_TexCoord[0].st in the shader to sample the texture, I get the right coordinates.
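To make that concrete, the shader side of this "working" case is, as far as I understand, roughly the following (the sampler name "MyTex" is only illustrative, not taken from the example):

    // minimal fragment shader: one sampler2D uniform, set to texture unit 0
    uniform sampler2D MyTex;

    void main()
    {
        // with [pix_texture] in the same chain, gl_TexCoord[0].st already
        // arrives scaled to the image part of the padded power-of-two
        // texture, so this "just works"
        gl_FragColor = texture2D(MyTex, gl_TexCoord[0].st);
    }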
So, the "01 simple texture" patch in the example folder works fine with non-power-of-two-sized images without needing any change, and without using rectangular-texture mode.
However, as the attached patch shows, this is no longer true if I use a uniform variable to tell the shader to use a given texture unit N, which corresponds to a separate [pix_texture] object that is connected to another [gemhead] and has been sent a [texunit N( message.
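The shader side of this failing case looks, I believe, something like this (again, the names are only illustrative):

    // a second texture, created on its own [gemhead] chain and bound
    // with [texunit 1(; the sampler uniform (here called "OtherTex")
    // is set to 1 from the patch
    uniform sampler2D OtherTex;

    void main()
    {
        // here gl_TexCoord[0].st is NOT adjusted for OtherTex's padding
        // (typically it just runs from 0 to 1 over the whole padded
        // texture), so part of the black padding ends up on the geometry
        gl_FragColor = texture2D(OtherTex, gl_TexCoord[0].st);
    }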
So what I understand is that [pix_image] (or any object that creates a pix and translates it into a texture) takes care of it by doing some kind of "OpenGL magic", so that the texture coordinates passed to the shader are already correct, with no need to handle this in the shader code.
The arithmetical part of this "magic" is indeed as simple as computing a scale factor of W/Nw for the width and H/Nh for the height, where W and H are the width and height of the image, and Nw and Nh are the next powers of two equal to or greater than W and H respectively.
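Just to make the arithmetic explicit, here is a small sketch (in GLSL, though of course the same computation could be done in the patch instead); the function names are mine, and the 640x480 image is just an example:

    // next power of two >= x, e.g. nextPow2(480.0) = 512.0
    float nextPow2(float x)
    {
        return exp2(ceil(log2(x)));
    }

    // scale factors (W/Nw, H/Nh) for a W x H image, e.g.
    // texScale(vec2(640.0, 480.0)) = vec2(640.0/1024.0, 480.0/512.0)
    //                              = vec2(0.625, 0.9375)
    vec2 texScale(vec2 imageSize)
    {
        return imageSize / vec2(nextPow2(imageSize.x), nextPow2(imageSize.y));
    }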
The question is, what's the best way (if there is any) to "reproduce" the magic that the pix objects do, in order to "pre-compute" this scale factor and have the shader "receive" correctly 'normalized' coordinates?
I mean, I can calculate the scaling factor and pass it to the shader through a uniform variable, and have the shader use it to rescale the coordinates. This will work fine, but it seems (to me) that [pix_image] and the other pix objects are able to do this kind of rescaling somehow _before_ the shader comes into play, since you don't need to change the shader code when such an object (the one generating the image) is present in the chain.
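For completeness, the uniform workaround I mean would look roughly like this (the uniform name "texscale" is my own choice; it would be set from the patch with the W/Nw and H/Nh values computed as above):

    // sketch of the uniform-based workaround
    uniform sampler2D OtherTex;  // set to the unit of the separately bound texture
    uniform vec2 texscale;       // (W/Nw, H/Nh), computed and sent from the patch

    void main()
    {
        // rescale the incoming coordinates so that 0..1 maps onto the
        // image part of the padded power-of-two texture only
        gl_FragColor = texture2D(OtherTex, gl_TexCoord[0].st * texscale);
    }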
I hope I have explained the question clearly. The attached patch shows the problem quite well, I think. I don't attach the images, to avoid flooding mailboxes; any non-power-of-two images of different sizes with those names will do (by the way, I attached them to a recent message of mine). Compare the behaviour with that of example 01 in the GLSL example folder: even with a non-power-of-two-sized image, that example works fine.
Note that no rectangular textures are involved. Rectangular textures won't help (indeed, I think they make things more complicated): I would have the same problem, because I still need to "tell" the shader the actual size of the texture it has to use.
Thanks a lot in advance,
m.
P.S. Though I can't promise it, I will almost surely post and share the final patches, for what they're worth; but I can't do it yet.