Hi all,
I found this:
http://ati.cchtml.com/show_bug.cgi?id=193
I ran the test program attached to that bug report, it reported success.
But Gem (without "--with-glversion=1.5") still fails to load with glUniform2i undefined.
Then I tried some semi-random hacking of code:
#include <GL/glew.h> // just before #include <GL/gl.h> in Base/GemGL.h
and adding -lGLEW to the Make.config (iirc).
Gem now compiles without error and loads without error, but segfaults in numerous situations, including plain texturing (gdb indicates jumping to a null pointer).
The key function in the OpenGL 2.0 tester seems to be:
typedef void (*__GLXextFuncPtr)(void);
extern "C" __GLXextFuncPtr glXGetProcAddressARB (const GLubyte *);
which is only referenced in:
pd-gem/Gem/src/Base/glxew.h
pd-gem/Gem/src/Base/glew.cpp
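For reference, this is roughly what that mechanism does - a minimal sketch of resolving an OpenGL 2.0 entry point at runtime instead of at link time (the typedef, function and variable names here are mine, not from Gem or the tester):

  #include <GL/gl.h>
  #include <GL/glx.h>

  typedef void (*uniform2i_fn)(GLint, GLint, GLint);

  void try_uniform2i(GLint location)
  {
    /* ask the GLX driver for the entry point by name */
    uniform2i_fn my_glUniform2i =
      (uniform2i_fn)glXGetProcAddressARB((const GLubyte *)"glUniform2i");
    if (my_glUniform2i)
      my_glUniform2i(location, 0, 0);
    /* if the driver doesn't export it, the pointer stays NULL -
       calling it anyway is exactly the null-pointer jump gdb showed */
  }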
I noticed glewInit() is only called on Windows:
pd-gem/Gem/src/Base/GemWinCreateNT.cpp: GLenum err = glewInit();
So I hacked something in GemWinCreateXWin.cpp:
#ifdef USE_GLEW
  GLenum err = glewInit();
  if (GLEW_OK != err)
    error("failed to init GLEW");
  else
    post("GLEW version %s", glewGetString(GLEW_VERSION));
#endif
and added #include <GL/glew.h> to GemWinCreate.h. This gave me a Gem that compiled and loaded without errors (no segfaults so far either), but still no working shaders:
Direct Rendering enabled!
GLEW version 1.3.4
GEM: Start rendering
linking: link 1.07374e+09 0
linking: link 1.07374e+09 5.36871e+08
[glsl_program]: Info_log:
[glsl_program]: Link successful. There are no attached shader objects.
[pix_image]: GEM: thread loaded image: /home/claude/src/pd-gem/Gem/examples/10.glsl/img3.jpg
GL: invalid value
With 02_primitive_distortion.pd I get an undistorted textured sphere, and similarly with the other examples (everything looks as if no special shaders are running).
The large numbers for the shader ids look a bit suspicious to me, as does the "no attached shader objects". It would be useful if "GL: invalid value" could be made more verbose, too.
So, any clues?
Some more info from Pd:
[glsl_vertex]: Vertex_shader Hardware Info
[glsl_vertex]: ============================
[glsl_vertex]: MAX_VERTEX_ATTRIBS: 32
[glsl_vertex]: MAX_VERTEX_UNIFORM_COMPONENTS_ARB: 4096
[glsl_vertex]: MAX_VARYING_FLOATS: 44
[glsl_vertex]: MAX_COMBINED_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_VERTEX_TEXTURE_IMAGE_UNITS: 0
[glsl_vertex]: MAX_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_TEXTURE_COORDS: 8
[glsl_fragment]: glsl_fragment Hardware Info
[glsl_fragment]: ============================
[glsl_fragment]: MAX_FRAGMENT_UNIFORM_COMPONENTS: 4096
[glsl_fragment]: MAX_TEXTURE_COORDS: 8
[glsl_fragment]: MAX_TEXTURE_IMAGE_UNITS: 16
[glsl_program]: glsl_Program Hardware Info
[glsl_program]: ============================
[glsl_program]:
I'm running Debian Stable with the ATi proprietary fglrx driver for a Mobility Radeon 9700 card.
Thanks,
Claude
Claude Heiland-Allen wrote:
The large numbers for the shader ids look a bit suspicious to me, as does the "no attached shader objects". It would be useful if "GL: invalid value" could be made more verbose, too.
So, any clues?
I remember a problem I had once with long shader IDs. The ID was too long, Pd converted it to exponential notation (1234e+7), so some precision was lost and linking was not possible. It was also on Linux + the fglrx driver. I did not use that computer for long, so I made a very quick and very dirty workaround: I changed the Gem code to split the number into 2 different numbers, and then linking was possible. I don't know a good solution for this problem.
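(The idea was something like this - a reconstruction from memory, not the actual patch; the names are made up:)

  /* split a GL object id into two halves that each fit exactly
     in a Pd float (t_float, from m_pd.h) */
  t_float hi = (t_float)(id >> 16);
  t_float lo = (t_float)(id & 0xffff);
  /* ... and reassemble on the receiving side: */
  GLuint id2 = ((GLuint)hi << 16) | (GLuint)lo;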
I don't know if you've got the same problem, but I hope it helps.
cyrille
On Mon, 7 Jan 2008, cyrille henry wrote:
I remember a problem I had once with long shader IDs. The ID was too long, Pd converted it to exponential notation (1234e+7), so some precision was lost.
Until the ID gets over 16777216, it is ok, even though Pd will misprint it if it's over 1000000. A float is always in exponential notation, it's just that it gets converted from binary exponential to decimal exponential, and there is some loss here because of the defaults. It's possible to force C to print the two extra digits, but then it'll be "too precise": might print 0.1 as 0.1000001, or only sometimes (e.g. when you do 1-0.9 but not just 0.1).
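A standalone illustration of the printing defaults (nothing Gem-specific):

  #include <stdio.h>
  int main(void) {
    float id = 1073741824.0f; /* 2^30: exactly representable in a float */
    printf("%g\n", id);       /* default 6 digits: 1.07374e+09 */
    printf("%.9g\n", id);     /* forced to 9 digits: 1073741824, exact */
    printf("%.9g\n", 0.1f);   /* but now "too precise": 0.100000001 */
    return 0;
  }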
Mathieu Bouchard, Montréal QC Canada
Claude Heiland-Allen wrote:
The large numbers for the shader ids look a bit suspicious to me, as does the "no attached shader objects". It would be useful if "GL: invalid value" could be made more verbose, too.
could you re-check (with current CVS) whether the shaders now work? the IDs will/might still be large numbers, but it should work nevertheless...
amsdr. IOhannes
IOhannes m zmoelnig wrote:
could you re-check (with current CVS) whether the shaders now work? the IDs will/might still be large numbers, but it should work nevertheless...
I'll check as soon as possible (early March).
Thanks,
Claude
Claude Heiland-Allen wrote:
I'll check as soon as possible (early March).
so now that we have tested and succeeded, let's move on:
i have put more of the glew support code into Gem proper, so you shouldn't need to do any of this manually any more.
just specify "--enable-glew" at configure-time to turn it on. you don't need to have glew installed on your machine.
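i.e. (assuming the usual autotools build from the Gem source tree):

  ./configure --enable-glew
  make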
so speaking of glew, how should we proceed here? right now, glew is a compile-time option, which adds another layer of #ifdefs, which in turn is really ugly: glew is supposed to make code more readable, not less.
i would suggest just doing all builds of Gem with glew linked in, and gradually adding the runtime checks as someone stumbles across them.
until all of the problematic objects have proper tests, these objects might eventually crash pd (by calling null-pointer functions), but at least you can load Gem - as opposed to the well-known glUniform4i refusal to load Gem and the weird maximum-openGL-version compile-time hacks.
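(the kind of runtime check i mean - a sketch only, assuming glewInit() has already run for the current context; the error text is made up:)

  /* after glewInit(), the GLEW_* booleans reflect what the driver exports */
  if (GLEW_VERSION_2_0 || GLEW_ARB_shader_objects) {
    /* the shader entry points are resolved - safe to call them */
  } else {
    error("shaders are not supported by this openGL implementation");
  }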
what do you think?
fgmasdr IOhannes
GLEW is going to create a mess for a while particularly for texturing. The only way to proceed is to use only GLEW and get rid of all of the current #ifdef lines. If you are going to do it then go ahead knowing that building GEM from CVS might be broken for a bit. I doubt many people are relying on the most up to date builds though.
The one bad thing about GLEW is that it requires not only a context, but a drawable window in place in order to work. This means some of the checks currently done in GemMan or in constructors will have to be moved to other places. Most likely doing a check in startRendering() and setting a flag would work.
chris clepper wrote:
GLEW is going to create a mess for a while particularly for texturing. The only way to proceed is to use only GLEW and get rid of all of the current #ifdef lines.
yes, that was my concern. nevertheless i still think that in the long run we will have to go with glew (or something similar) - as you probably know, your w32-builds use it too :-)
If you are going to do it then go ahead knowing that building GEM from CVS might be broken for a bit. I doubt many people are relying on the most up to date builds though.
right. otoh [pix_texture] already uses some runtime checks, so it might even work here (unlike other objects, like the shader stuff, which might just plainly crash)
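(illustrative only - not the actual [pix_texture] code, but the kind of check it does:)

  /* pick a texture target depending on what the driver supports */
  GLenum target = GL_TEXTURE_2D;
  if (GLEW_ARB_texture_rectangle)
    target = GL_TEXTURE_RECTANGLE_ARB; /* rectangle textures available */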
The one bad thing about GLEW is that it requires not only a context, but a drawable window in place in order to work. This means some of the checks currently done in GemMan or in constructors will have to be moved to other places. Most likely doing a check in startRendering() and setting a flag would work.
the first is not a big deal, as the checks are already done right after the window creation. for the 2nd, i have just added a new "callback" member "isRunnable()" to GemBase, which is called just before startRendering() and which will disable the object's "render" calls if it returns FALSE (the default being TRUE).
so objects can now execute a runtime check (with a valid context & window) to determine whether they can actually be run.
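(a sketch of what such a check could look like - the class is made up, and i'm assuming a bool return value:)

  class myShaderObject : public GemBase {
    virtual bool isRunnable(void) {
      /* called just before startRendering(): context & window are
         valid here, so GLEW queries give real answers */
      if (GLEW_ARB_shader_objects) return true;
      error("no shader support; disabling this object's render calls");
      return false; /* render() will not be called */
    }
  };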
so i guess we basically all agree.
fmadsr IOhannes