On Thu, 2022-02-17 at 19:30 +0000, Claude Heiland-Allen wrote:
On 17/02/2022 17:59, Roman Haefeli wrote:
the gradients between the pixels show edges that look like low bit depth (and probably are due to low bit depth).
No clue about high bit depth output. Possible workaround: a shader that does dithering could help mask the problem,
Oh, good idea. I didn't think of that.
that is, if the OpenGL texture interpolation is not the source of the problem (hopefully it's done with floats; if not, maybe you can do the interpolation in the shader yourself, after reading the texels without interpolation). Check the OpenGL specification for the GL_LINEAR magnification filter details; maybe it says how much precision is guaranteed.
My impression is that the OpenGL side is all 32-bit float. I tried sending 'quality 1' to [pix_texture], which (from what I see) does linear interpolation. I also tried bicubic interpolation with a shader written by Cyrille Henry in 2007. The shader code uses vec4 internally, and the GLSL spec says that this is 32-bit float [1].
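If I understand the suggestion about doing the interpolation in the shader correctly, it could look roughly like this. Just a sketch, assuming GEM's default rectangle texture bound with GL_NEAREST ('quality 0'), so texture2DRect returns unfiltered texels, and assuming the usual pass-through vertex shader that fills gl_TexCoord[0]; the uniform name is made up:

#version 120
#extension GL_ARB_texture_rectangle : enable
// Sketch: bilinear interpolation done in the fragment shader instead of
// relying on GL_LINEAR, so the blending happens in full float precision.
uniform sampler2DRect tex;

void main() {
    vec2 p = gl_TexCoord[0].st - 0.5;  // work relative to texel centres
    vec2 i = floor(p);
    vec2 f = p - i;                    // fractional position, kept in float
    vec4 t00 = texture2DRect(tex, i + vec2(0.5, 0.5));
    vec4 t10 = texture2DRect(tex, i + vec2(1.5, 0.5));
    vec4 t01 = texture2DRect(tex, i + vec2(0.5, 1.5));
    vec4 t11 = texture2DRect(tex, i + vec2(1.5, 1.5));
    gl_FragColor = mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}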
Actually, since I'm already using a shader, I could try to add some noise there. Not totally sure how this should be done, though.
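Maybe something along these lines would do it (again only a sketch, same rectangle-texture assumption as above; the hash is just a common fragment-coordinate trick, nothing GEM-specific): add about half an 8-bit step of noise to the colour right before the framebuffer quantises it.

#version 120
#extension GL_ARB_texture_rectangle : enable
// Sketch: mask banding by dithering the output with +/- half an 8-bit step.
uniform sampler2DRect tex;

float hash(vec2 p) {
    // cheap pseudo-random value in [0,1) derived from the fragment position
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    vec4 c = texture2DRect(tex, gl_TexCoord[0].st);
    float dither = (hash(gl_FragCoord.xy) - 0.5) / 255.0;  // +/- half a step
    gl_FragColor = vec4(c.rgb + dither, c.a);
}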
One thing you could do to diagnose is check the pixel values of neighbouring bands to see if they differ by one (in which case suspect that higher bit depth output is needed) or by more (in which case suspect that the OpenGL GL_LINEAR precision is insufficient).
Ok. I'll try to measure this.
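One idea, instead of picking values out of a screenshot: a throwaway debug shader that shows whatever precision survives below the 8-bit steps (same assumptions as the sketches above). If it shows fast repeating ramps across each band, the interpolated values are smooth and the 8-bit output is the limit; if it shows flat (near-black) blocks, the texture/interpolation path is already quantised.

#version 120
#extension GL_ARB_texture_rectangle : enable
// Sketch: visualise sub-8-bit precision of the interpolated texture read by
// showing only the fractional part of the value scaled to 8-bit steps.
uniform sampler2DRect tex;

void main() {
    vec3 c = texture2DRect(tex, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(fract(c * 255.0), 1.0);
}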
Thanks for your input,
Roman