hi all
i was wondering if it is possible (and how) to change the rendering context in gem, with respect to interfacing it to pdp (or something else), without gem using its own display window.
what i am interested in is having gem render to an offscreen area (pbuffer) and output this as a pdp packet.
it is possible to use pbuffers and disable the window context entirely, but it seems to require rather a large cut into gem. maybe i'm not seeing this correctly, but i think it would involve something like this (a rough sketch follows the list):
* GemWinCreateXWin needs to be replaced by something that constructs a pbuffer instead of a window.
* at the place where the buffers are swapped (glutSwapBuffers() in GemMan.cpp), a glReadPixels call should be inserted that converts the data in the pbuffer to a pdp packet.
* the glXMakeCurrent(constInfo.dpy, constInfo.win, constInfo.context) in GemMan.cpp should be replaced by a glXMakeContextCurrent(dpy, pbuffer, pbuffer, context_pbuffer) call.
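to make it a bit more concrete, here is a rough sketch of the pbuffer path i have in mind (just an illustration -- the function names and attribute choices are made up, this is not taken from the gem source):

/* sketch: create a glx 1.3 pbuffer, make it current with
 * glXMakeContextCurrent(), render, and read the pixels back so they
 * could be wrapped into a pdp packet */
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/gl.h>

static GLXPbuffer make_pbuffer(Display *dpy, int w, int h, GLXContext *ctx)
{
    int fb_attr[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int pb_attr[] = { GLX_PBUFFER_WIDTH, w, GLX_PBUFFER_HEIGHT, h, None };
    int n = 0;
    GLXFBConfig *cfg;
    GLXPbuffer pb;

    cfg = glXChooseFBConfig(dpy, DefaultScreen(dpy), fb_attr, &n);
    if (!cfg || n == 0) return 0;

    pb   = glXCreatePbuffer(dpy, cfg[0], pb_attr);
    *ctx = glXCreateNewContext(dpy, cfg[0], GLX_RGBA_TYPE, NULL, True);
    XFree(cfg);
    return pb;
}

/* render one frame offscreen and read it back (rows come out bottom-up) */
static void render_frame(Display *dpy, GLXPbuffer pb, GLXContext ctx,
                         int w, int h, unsigned char *rgb /* w*h*3 bytes */)
{
    glXMakeContextCurrent(dpy, pb, pb, ctx);  /* instead of glXMakeCurrent() */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... render the gem chain here ... */
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    /* rgb would then be copied into a pdp packet */
}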
anyone have any thoughts on this?
tom
hi tom, list,
On 5/3/03 4:43 PM, "Tom Schouten" doelie@zzz.kotnet.org wrote:
i was wondering if it is possible (and how) to change the rendering context in gem, with respect to interfacing it to pdp (or something else), without gem using its own display window.
what i am interested in is having gem render to an offscreen area (pbuffer) and output this as a pdp packet.
Multiple rendering contexts and pbuffer support would be very handy features in GEM. This has been on the to-do list for a long time, I think, so perhaps the momentum of the pdp project can make it happen!
it is possible to use pbuffers and disable the window context entirely, but it seems it requires rather a large cut in gem. maybe i'm not seeing this correctly but i think it would involve something like this:
- GemWinCreateXWin needs to be replaced by something that constructs a pbuffer instead of a window.
- at the place where the buffers are swapped (glutSwapBuffers() in GemMan.cpp), a glReadPixels call should be inserted that converts the data in the pbuffer to a pdp packet.
- the glXMakeCurrent(constInfo.dpy, constInfo.win, constInfo.context) in GemMan.cpp should be replaced by a glXMakeContextCurrent(dpy, pbuffer, pbuffer, context_pbuffer) call.
Should we try to get a system that can easily get a pbuffer back into a pix_ object and then just use the pdp<->gem bridge?
How about this:
- you can name a rendering context in each gemhead and have that rendering chain render to the context (be it a window, pbuffer or whatever).
- Each gemwin can also be named.
- A new gembuffer object which manages pbuffer rendering and takes a name argument also...? It outputs a bitmap in pix_ compatible form that may be connected to a pix_ object or the pdp bridge.
Daniel
Quoting Daniel Heckenberg daniel@bogusfront.org:
How about this:
- you can name a rendering context in each gemhead and have that rendering chain render to the context (be it a window, pbuffer or whatever).
- Each gemwin can also be named.
hi daniel, et al.
my plan (which might be influenced too much by other 3d-rendering software) was rather not to make completely independent rendering chains (by naming them and connecting them via the name to a gemwin) but to use the [gemhead] rendering chains globally, connected to multiple [gemwin]s. The [gemwin]s could be controlled independently with respect to camera/viewpoint, bg-color, size, but also offscreen rendering. This is really heavily influenced by the "camera" idea of other software.
but on the other hand there is a lot of work to be done
- A new gembuffer object which manages pbuffer rendering and takes a name argument also...? It outputs a bitmap in pix_ compatible form that may be connected to a pix_ object or the pdp bridge.
there is this [pix_snap]-object that does exactly this.
thinking out loud: [gemwinOFF] (like offscreen) should have an outlet for imageStruct-data (used by but not compatible with pix_ -- since we don't need all the cache and newimage-overhead)
mfg.a.srd IOhannes
PS: i'm still not sure whether it's a good idea to have rendering sinks (like pix_write) directly in the rendering chain - but i guess it's not so good, although it is more flexible.
From: zmoelnig@iem.at
hi daniel, et al.
my plan (which might be influenced too much by other 3d-rendering software) was rather not to make completely independent rendering chains (by naming them and connecting them via the name to a gemwin) but to use the [gemhead] rendering chains globally, connected to multiple [gemwin]s. The [gemwin]s could be controlled independently with respect to camera/viewpoint, bg-color, size, but also offscreen rendering. This is really heavily influenced by the "camera" idea of other software.
but on the other hand there is a lot of work to be done
hmm. this would be quite nice, certainly.
but if i understand correctly, if you actually wanted independent content on different displays then you'd have to carefully physically separate the objects in the scene?
also, in openGL you would actually need to send the geometry to each rendering context independently anyway, no?
perhaps we could get the best of both worlds with the default (unnamed) gemheads rendering to every gemwin or gemwinOFF... and others having a list of rendering contexts to which they will render?
- A new gembuffer object which manages pbuffer rendering and takes a name argument also...? It outputs a bitmap in pix_ compatible form that may be connected to a pix_ object or the pdp bridge.
there is this [pix_snap]-object that does exactly this.
thinking out loud: [gemwinOFF] (like offscreen) should have an outlet for imageStruct-data (used by but not compatible with pix_ -- since we don't need all the cache and newimage-overhead)
yup, I was thinking of something like "gemwinOFF" when I said gembuffer.
pix_snap does do what tom needs to do for screen buffers. however it is hideously slow, at least on my hardware.
we need to ensure that wherever possible, fast paths are allowed or provided. ie once you've rendered to an offscreen context which is a pbuffer, you can use that as a texture straight away without extracting the pixels and reloading it as a texture. (that reminds me: i have a pix_snap2tex object that does this in a pix_snap kind of way and runs 100 times faster on my box)
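to illustrate the kind of fast path i mean (just a sketch, not the actual pix_snap2tex code; the texture id and the 256x256 size are made up):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
/* allocate texture storage on the card without uploading any pixels */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* ... render the scene into the current (offscreen) buffer ... */

/* copy the framebuffer contents into the texture; the pixels never
   leave the card, unlike glReadPixels + glTexImage2D */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);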
i suspect that on most hardware, any method for getting rendered output back into main memory will be slow... but we should certainly support it. anyone using mesa or an sgi would not see any speed problems. actually, i've read that os x has good, fast readback support. is this true, mac people?
daniel
yup, I was thinking of something like "gemwinOFF" when I said gembuffer.
pix_snap does do what tom needs to do for screen buffers. however it is hideously slow, at least on my hardware.
same here (GeForce 4). this seems to be what is going on with gem2pdp too..
we need to ensure that wherever possible, fast paths are allowed or provided. ie once you've rendered to an offscreen context which is a pbuffer, you can use that as a texture straight away without extracting the pixels and reloading it as a texture. (that reminds me: i have a pix_snap2tex object that does this in a pix_snap kind of way and runs 100 times faster on my box)
i did some experiments with pbuf<->texture conversion too and it is very fast indeed. i am looking into the possibility of adding ogl texture and pbuf support to pdp. if this works out, and if gem could export/import textures or pbufs, we could have a very fast connection between the two.
hi daniel, iohannes
i decided to do some more experiments with opengl stuff on top of pdp, and this seems to work rather well. it is all centered around render context packets being passed around. if you are interested you can have a look at the opengl/ folder in the pdp package. maybe gem could benefit from this, dunno..
http://zwizwa.fartit.com/pd/pdp/test/pdp-0.11-test-6.tar.gz
it requires glx 1.3 though for pbuffer support. (the only things on linux that have this that i know of are the 41.xx nvidia drivers and mesa 5.0)
right now all the rendering is to a pbuffer. there is a 3dp_context object that provides a context and all 3dp_ objects draw to/manipulate this context.
on output, the contents of the buffer are dumped into a texture and displayed. i chose this approach to have an easy multiple-stage rendering chain, where a pbuf can be dumped into a texture and reused in another pbuf rendering, etc.. this also allows setting the window dimensions independently of the render buffer dimensions.
i also tried multiple camera views in two different windows, which works if you propagate 2 different contexts through a single rendering chain, and route through a different modelview transform chain in front of the main chain and to a different window after the chain. (check the patches in test/).
one note: i have the impression that the rendering context switching (between different pbufs and a window, for example) is a rather expensive operation. i haven't nailed it down yet, but something is causing a lot of extra cycles..
second note: i don't think i understood context sharing very well. it seems you need to explicitly share every pbuf with every other one to get them to see each other (for copying). right now everything is shared from a single mother scratch pbuf.
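to show what i mean with the mother scratch pbuf (glx sketch, names made up):

/* create every pbuffer context with the same share context, so that
   textures and display lists end up in one shared namespace */
GLXContext mother = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, NULL,   True);
GLXContext pbuf_a = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, mother, True);
GLXContext pbuf_b = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, mother, True);
/* a texture created while pbuf_a is current can now be used while
   pbuf_b is current (e.g. for copying one pbuf into another rendering) */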
mvg tom
Hi Tom, list,
This looks like great work. I was hoping to have a look at some multiple rendering context stuff this weekend so I'll let you know how that goes.
An open question: does SDL encapsulate enough glx/wgl/mac os features to support pbuffers and multiple rendering contexts in a platform independent way?
Daniel
An open question: does SDL encapsulate enough glx/wgl/mac os features to support pbuffers and multiple rendering contexts in a platform independent way?
an nvidia engineer told me pbufs don't work on osx yet. he sent me these links for a linux/windows pbuffer abstraction:
http://cvs1.nvidia.com/DEMOS/OpenGL/inc/shared/pbuffer.h http://cvs1.nvidia.com/DEMOS/OpenGL/src/shared/pbuffer.cpp http://cvs1.nvidia.com/DEMOS/OpenGL/src/simple_pbuffer/
it seems the platform dependent things can be tucked away nicely, likewise for ordinary rendering contexts (windows). wgl also supports direct render to texture, without the pbuffer intermediate. as for sdl, i really don't know. i do know that there are some issues with sdl on osx you should be aware of: it requires the entry point of your program to be wrapped in objC code, which would require a serious workaround or a patch to pd.
tom
an nvidia engineer told me pbufs don't work on osx yet. he sent me these links for a linux/windows pbuffer abstraction:
OSX can render from an offscreen buffer to a texture. here's some sample code: http://developer.apple.com/samplecode/Sample_Code/Graphics_3D/AGLSurfaceText...
if pdp and gem both have aglcontexts then this method will work. it also works on both ATI and Nvidia hardware, and with non-power-of-two buffer and texture sizes.
cgc
On Friday 14 March 2003 08:42, chris clepper wrote:
OSX can render from an offscreen buffer to a texture. here's some sample code: http://developer.apple.com/samplecode/Sample_Code/Graphics_3D/AGLSurfaceTexture.htm
thanks chris. no more reasons for writing platform dependent stuff then, <cough>
:)
I've just committed pix_videoDS to the CVS repository and posted an updated binary build of the GEM CVS source at the usual place: http://www.bogusfront.org
Note that the name has changed from pix_video_ds to pix_videoDS for consistency with other video objects and sourcefiles in GEM.
Daniel
After a bit of digging into the world of Device contexts (DCs - a Windows drawing surface, more or less) and Rendering contexts (RCs - openGL), this seems to be the state of play (on Windows at least):
- textures can't be shared between rendering contexts
- display lists can be shared between rendering contexts
- a single DC can have multiple RCs (with only one active at a time)
- Multiple DCs can be used with a single RC as long as they have the same pixel format (only one DC, RC pair active at a time)
- RC switches are slow
- DC switches are fast
- There are pixel formats (and hence DCs) which support accelerated, double buffered, openGL display and pbuffer output (at least on nVidia hardware)
So hopefully we should be able to use a single RC for all rendering in GEM, with a DC for each output window or pbuffer. This should allow display lists and textures to be used on any output window without duplication.
Of course, there are reasons why you might not like to do this - pbuffer formats different to the display format could be useful.
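For concreteness, here's a rough wgl sketch of the single RC / multiple DC idea (hdcWindow and hdcPbuffer are placeholder handles, assumed to have been created with the same pixel format - this isn't GEM code):

#include <windows.h>
#include <GL/gl.h>

void draw_to_both(HDC hdcWindow, HDC hdcPbuffer)
{
    HGLRC rc = wglCreateContext(hdcWindow);  /* one RC for all rendering */

    wglMakeCurrent(hdcWindow, rc);           /* cheap DC switch: the window */
    /* ... render ... */
    SwapBuffers(hdcWindow);

    wglMakeCurrent(hdcPbuffer, rc);          /* cheap DC switch: the pbuffer; same RC,
                                                so display lists and textures carry over */
    /* ... render ... */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
}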
Is the above all true/does it make sense? What's the situation with agl and glx?
Daniel
On Monday 17 March 2003 00:10, Daniel Heckenberg wrote:
Is the above all true/does it make sense? What's the situation with agl and glx?
hi daniel,
it seems the same is true for glx, except that on glx textures can be shared between different contexts. i completely overlooked the fact that one RC can have several DCs (this seems to be possible on glx too). this would eliminate the need for the expensive context switch entirely. good!
tom
a small remark on glx:
it seems you can't switch the drawable (window or pbuf) without switching the rendering context. the call is
Bool glXMakeCurrent( Display *dpy, GLXDrawable drawable, GLXContext ctx )
this seems to be slow even when the context is the same as current.
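in other words, even when only the drawable changes you still end up doing (sketch):

glXMakeCurrent(dpy, win,  ctx);   /* draw to the window */
/* ... */
glXMakeCurrent(dpy, pbuf, ctx);   /* retarget to the pbuf: same ctx, but the
                                     call still seems to cost a lot */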
bummer..
tom