james tittle wrote:
On Oct 18, 2005, at 12:48 PM, IOhannes m zmoelnig wrote:
james tittle wrote:
...so, I think this means we should just have one [shader_program] that can accept one or two names as arguments (vertex shader followed by fragment?), but then we'd need some mechanism to determine which name is the vertex shader and which is the fragment shader...so I've just been going ahead with making it a message-based object, such that you send a [vertex nameOfVertexShader< and/or [fragment nameOfFragShader< to the [shader_program]...the shader_program will then try to link together whatever it has, report what happens, and go on from there...
but does this mean that we can only have 1 fragment-object and 1 vertex-object, while GLSL would support multiples of both? (the only restriction is that there must be one and only one main() routine in each of the fragment and vertex sets)
so i was thinking of having 3 objects: vertex- and fragment-shader loaders ("compilers") and a linker-object.
does this make the patches unnaturally bloated ?
it would look like
[GLSL_vertex vertex_main.glsl]
|
[GLSL_vertex vertex_sub1.glsl]
|
[GLSL_fragment fragment_main.glsl]
|
[GLSL]
(the object-names just came to my mind while typing, so i don't care about them)
and both [GLSL_vertex] and [GLSL_fragment] would have one additional inlet/outlet so you could share shader-objects.
...this sounds fine, but it could also just be done with one object that can be created with multiple names, like you did with fragment_program inheriting from vertex_program: the only difference in vertex/fragment shader object creation is what is passed to glCreateShaderObject(): GL_VERTEX_SHADER or GL_FRAGMENT_SHADER...chris had actually suggested the "shader_program" name, but I kinda like putting in the GLSL/glsl_whatever to make it more obviously different from the ARB program stuff, though pedantically glsl uses "shaders" and arb uses "programs" (of course a glsl shader object becomes a glsl program object when it's linked!)...
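(for reference, a minimal C sketch of that single difference, assuming the GL 2.0 entry points glCreateShader/glShaderSource/glCompileShader rather than the older *ARB names; the helper name and error handling are just illustration, not the actual Gem code:)

  #include <stdio.h>
  #include <GL/gl.h>   /* plus GLEW/glext.h or similar for the GL 2.0 prototypes */

  /* compile one shader object; only the 'type' enum differs between
     a vertex object and a fragment object */
  static GLuint compile_shader(GLenum type, const char *source)
  {
      GLuint obj = glCreateShader(type);   /* GL_VERTEX_SHADER or GL_FRAGMENT_SHADER */
      GLint  ok  = GL_FALSE;

      glShaderSource(obj, 1, &source, NULL);
      glCompileShader(obj);
      glGetShaderiv(obj, GL_COMPILE_STATUS, &ok);
      if (!ok) {
          char log[1024];
          glGetShaderInfoLog(obj, sizeof(log), NULL, log);
          fprintf(stderr, "shader compile failed: %s\n", log);
      }
      return obj;
  }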
...then I agree we should have a GLSL_link/bind/program object that would be like the soon-to-be-CVSed pix_multitexture, in that it'll accept shader object IDs...the thing here is that I haven't seen examples where more than one of each shader type is bound together in one program (not to say that "I've seen it all")...I have seen header files for shaders that include common lists of uniform variables, so I guess those would have to be included somewhere along the line...
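(and the linker side would then roughly look like the sketch below, again only assuming the GL 2.0 names; 'link_program' is a made-up helper, not an existing Gem call:)

  /* link any number of already-compiled shader objects into one program,
     roughly what a [GLSL_link]-style object would do with the IDs it receives */
  static GLuint link_program(const GLuint *shaders, int count)
  {
      GLuint prog = glCreateProgram();
      GLint  ok   = GL_FALSE;
      int    i;

      for (i = 0; i < count; i++)
          glAttachShader(prog, shaders[i]);   /* several vertex and/or fragment objects
                                                 are allowed, as long as each set has
                                                 exactly one main() */
      glLinkProgram(prog);
      glGetProgramiv(prog, GL_LINK_STATUS, &ok);
      if (!ok) {
          char log[1024];
          glGetProgramInfoLog(prog, sizeof(log), NULL, log);
          fprintf(stderr, "link failed: %s\n", log);
      }
      return prog;
  }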
otoh, i am not sure what the fuss is about all those different compiled shader-objects. one thing is that you can keep your (shading) code cleaner and re-use it (as a programmer, i mean); so it wouldn't be _that_ bad if you had to copy everything into one file before loading it into [shader_program]. the more serious question is whether you can use more distinct complex shaders if they share modules (== shader objects). so if 2 shaders share 50% of their code and loading both totally separately would exceed the maximum number of instructions, you might still be able to load both with shared objects (only 75% has to be loaded, compared to the other option). is this assumption correct ? should it bother us ?
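(the source-reuse part of that is easy to picture with the two helpers sketched above; whether the driver actually shares the compiled instructions between the two linked programs is up to its linker, so the instruction-count saving is only an assumption. all the *_src / *_obj names here are placeholders:)

  /* the shared module is compiled once and attached to both programs;
     only the per-program main() objects differ */
  GLuint common = compile_shader(GL_FRAGMENT_SHADER, common_src);    /* the shared 50% */
  GLuint main_a = compile_shader(GL_FRAGMENT_SHADER, shader_a_src);  /* main() for shader A */
  GLuint main_b = compile_shader(GL_FRAGMENT_SHADER, shader_b_src);  /* main() for shader B */

  GLuint objs_a[] = { vertex_obj, common, main_a };
  GLuint objs_b[] = { vertex_obj, common, main_b };
  GLuint prog_a   = link_program(objs_a, 3);
  GLuint prog_b   = link_program(objs_b, 3);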
...yeh, I think it's good to stop and try to get it right the first time, which is why I didn't just do a similar set of objects like the ARB stuff...
...one thing that I REALLY want to add to this (and possibly to the ARB program objects) is a way to edit the programs without using an external editor, and this'll be pretty easy to do with the tcl text widget, I'd imagine...then we'd have a really cool system for playing with GPU programming!
jamie
ps: do your current gl drivers include support for pixel buffer objects and framebuffer objects? If so, I had a crazy idea for multiple_windows where we could just render things to framebuffer objects and then use those for our different windows...
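(in case it helps picture that idea, a rough sketch of the render-to-texture part, assuming EXT_framebuffer_object; the size, format and per-window plumbing are placeholders, and a depth attachment is omitted:)

  /* give each "window" its own color texture and render into it via an FBO */
  GLuint fbo, color_tex;

  glGenTextures(1, &color_tex);
  glBindTexture(GL_TEXTURE_2D, color_tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                            GL_TEXTURE_2D, color_tex, 0);

  if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
      fprintf(stderr, "FBO incomplete\n");

  /* ... render the scene here, then unbind and draw color_tex
     as a textured quad in whichever window needs it ... */
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);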
i know shaders only from directX hlsl as they are implemented in vvvv. there you edit the shader by right-clicking on a shader node, in the way james wants to have it. the shader code itself contains ALL of it, that means parameters, vertexshaders, pixelshaders and the way they should be compiled together.
the hierarchy is like:
1.) parameters: matrices, parameters from input pins, textures, samplers ...
2.) vertexshader code (as many as you want)
3.) pixelshader code (as many as you want)
4.) techniques (as many as you want); here you define which vertexshader should be combined with which pixelshader, like:
technique InversTextureColor
{
    pass P0
    {
        VertexShader = compile vs_1_1 DestroyGeometry();
        PixelShader  = compile ps_1_4 InverseTexturePixels();
    }
}
and there is an input pin to select which technique should be used.
that means everything is in one textfile and you just need one object to load and/or edit the shader code, simple and clear.
i'm sending some example code with this mail ...
nice greets from the meso office ;)
// --------------------------------------------------------------------------------------------------
// PARAMETERS:
// --------------------------------------------------------------------------------------------------

//transforms
float4x4 tW: WORLD;        //the models world matrix
float4x4 tV: VIEW;         //view matrix as set via Renderer (EX9)
float4x4 tP: PROJECTION;   //projection matrix as set via Renderer (EX9)
float4x4 tWVP: WORLDVIEWPROJECTION;

//texture
texture Tex <string uiname="Texture";>;
sampler Samp = sampler_state    //sampler for doing the texture-lookup
{
    Texture   = (Tex);          //apply a texture to the sampler
    MipFilter = LINEAR;         //sampler states
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};

//define some input pins
float param1;
float param2;

//texture transformation marked with semantic TEXTUREMATRIX to achieve symmetric transformations
float4x4 tTex: TEXTUREMATRIX <string uiname="Texture Transform";>;

//the data structure: "vertexshader to pixelshader"
//used as output data with the VS function
//and as input data with the PS function
struct vs2ps
{
    float4 Pos   : POSITION;
    float4 TexCd : TEXCOORD0;
};
// --------------------------------------------------------------------------------------------------
// VERTEXSHADERS
// --------------------------------------------------------------------------------------------------

vs2ps VS(
    float4 Pos   : POSITION,
    float4 TexCd : TEXCOORD0)
{
    //initialize all fields of output struct with 0
    vs2ps Out = (vs2ps)0;

    //transform position
    Out.Pos = mul(Pos, tWVP);

    //transform texturecoordinates
    Out.TexCd = mul(TexCd, tTex);

    return Out;
}
// --------------------------------------------------------------------------------------------------
// PIXELSHADERS:
// --------------------------------------------------------------------------------------------------

float4 col : Color;

float4 PS(vs2ps In): COLOR
{
    //In.TexCd = In.TexCd / In.TexCd.w; // for perspective texture projections (e.g. shadow maps) ps_2_0

    float4 col = tex2D(Samp, In.TexCd);
    return col;
}

float4 PS2(vs2ps In): COLOR
{
    //In.TexCd = In.TexCd / In.TexCd.w; // for perspective texture projections (e.g. shadow maps) ps_2_0

    float4 col = tex2D(Samp, In.TexCd);
    return col;
}
// --------------------------------------------------------------------------------------------------
// TECHNIQUES:
// --------------------------------------------------------------------------------------------------

technique TMyShader1
{
    pass P0
    {
        //Wrap0 = U;  // useful when mesh is round like a sphere
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_1_0 PS();
    }
}

technique TMyShader2
{
    pass P0
    {
        //Wrap0 = U;  // useful when mesh is round like a sphere
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_1_4 PS2();
    }
}

technique TFixedFunctionPipeline
{
    pass P0
    {
        //transforms
        WorldTransform[0]   = (tW);
        ViewTransform       = (tV);
        ProjectionTransform = (tP);

        //texturing
        Sampler[0]               = (Samp);
        TextureTransform[0]      = (tTex);
        TexCoordIndex[0]         = 0;
        TextureTransformFlags[0] = COUNT4 | PROJECTED;
        //Wrap0 = U;  // useful when mesh is round like a sphere

        Lighting = FALSE;

        //shaders
        VertexShader = NULL;
        PixelShader  = NULL;
    }
}