For the pdp-to-GEM port of my freeframeGL host, I've bumped into a GEM shortcoming...
When processing a pix image in a GEM external, the image is passed to the processRGBAImage method. You can access it for the duration of that function, after which, I suppose, the imagestruct is automatically handed back to GEM for output. What I want to do, though, is output the image only when the next image starts being processed. Here's why: I make a call to glReadPixels, which needs quite some time to finish, but by using Pixel Buffer Objects I can do this GPU-to-CPU transfer asynchronously, so I can do other work while waiting for glReadPixels to complete and then copy the buffer into the output image. It's best to wait for the next image, which usually arrives more than 20 ms later, so the call can complete in the meantime. GEM's design doesn't facilitate outputting an image after the end of the processing method, though... Is there some method internal to GEM that I can use to send data directly to the image outlet? Or does anyone have an idea how to wait for glReadPixels without blocking the CPU at the same time?
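For reference, this is roughly the pattern I mean; a minimal sketch in plain OpenGL outside of any GEM class (the GLEW include, the buffer handle and the function names are just placeholders of mine, not GEM API):

#include <GL/glew.h>   // or whatever provides the buffer-object entry points
#include <cstring>

static GLuint pbo = 0;

// allocate one pixel-pack PBO big enough for an RGBA frame
void initReadback(int width, int height)
{
  glGenBuffers(1, &pbo);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
  glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// frame N: start the GPU->CPU transfer; with a PBO bound the last argument is
// an offset into the buffer, so the call returns without waiting for the data
void startReadback(int width, int height)
{
  glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
  glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// frame N+1 (usually >20 ms later): the DMA has normally completed by now,
// so mapping the buffer and copying into the output image does not stall
void finishReadback(unsigned char *outImage, size_t numBytes)
{
  glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
  const void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
  if (src) {
    memcpy(outImage, src, numBytes);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
  }
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}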
I can be found scrutinizing some more GEM source code...
On 2012-02-29 02:06, Gert De Roost wrote:
What I want to do, though, is output the image only when the next image starts being processed. Here's why: I make a call to glReadPixels, which needs quite some time to finish, but by using Pixel Buffer Objects I can do this GPU-to-CPU transfer asynchronously, so I can do other work while waiting for glReadPixels to complete and then copy the buffer into the output image. It's best to wait for the next image, which usually arrives more than 20 ms later, so the call can complete in the meantime. GEM's design doesn't facilitate outputting an image after the end of the processing method, though... Is there some method internal to GEM that I can use to send data directly to the image outlet? Or does anyone have an idea how to wait for glReadPixels without blocking the CPU at the same time?
i think this is the wrong approach for Gem. FFGL does processing on the GPU; this is what Gem is all about. the pix_... stuff (in CPU-space) is only a small subset of Gem. a good integration into Gem would therefore be to let Gem do all the CPU<->GPU transfers (using [pix_texture], [gemframebuffer], [pix_snap2tex] on the one side and [pix_snap] on the other) and make FFGL work only on the textures themselves. this allows the transfers to be optimized independently of FFGL (so other uses profit from those optimizations too), while at the same time allowing for more different uses. e.g. if you use [gemframebuffer] to generate the texture you want to apply an FFGL effect to, there is really no need to transfer the data from GPU to CPU, then back to the GPU for the FFGL processing, then back to CPU, only to finally transfer it from CPU to GPU again to display the image.
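just to illustrate what "work on the textures itself" means, here is a rough sketch in plain OpenGL; the FBO handling, the shader program and all function names are made up for the example, this is not actual Gem or FFGL code:

#include <GL/glew.h>

// build an empty RGBA texture that will receive the effect's output
GLuint makeOutputTexture(int w, int h)
{
  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  return tex;
}

// inputTex would come from [pix_texture]/[gemframebuffer]; effectProgram is
// some GLSL program implementing the effect. everything stays on the GPU:
// the input texture is rendered through the shader into an FBO-attached texture
GLuint processOnGPU(GLuint inputTex, GLuint effectProgram, int w, int h)
{
  GLuint outTex = makeOutputTexture(w, h);

  GLuint fbo = 0;
  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                            GL_TEXTURE_2D, outTex, 0);

  glViewport(0, 0, w, h);
  glUseProgram(effectProgram);
  glBindTexture(GL_TEXTURE_2D, inputTex);

  // fullscreen quad (assuming identity modelview/projection);
  // the shader does the actual pixel processing
  glBegin(GL_QUADS);
  glTexCoord2f(0, 0); glVertex2f(-1, -1);
  glTexCoord2f(1, 0); glVertex2f( 1, -1);
  glTexCoord2f(1, 1); glVertex2f( 1,  1);
  glTexCoord2f(0, 1); glVertex2f(-1,  1);
  glEnd();

  glUseProgram(0);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
  glDeleteFramebuffersEXT(1, &fbo);

  // outTex can be displayed or processed further without ever touching the CPU
  return outTex;
}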
and both [pix_texture] and [pix_snap] already allow for asynchronous DMA-transfers (though i only added PBO-transfers to [pix_snap] a week ago or so). it's not really documented anywhere (yet), but you can send a [pbo $1( message to both these objects to specify the number of PBOs to use (with "0" being the default behaviour)
fgmadr IOhannes
On 29/02/2012 09:34, IOhannes m zmoelnig wrote:
and both [pix_texture] and [pix_snap] already allow for asynchronous DMA-transfers (though i only added PBO-transfers to [pix_snap] a week ago or so). it's not really documented anywhere (yet), but you can send a [pbo $1( message to both these objects to specify the number of PBOs to use (with "0" being the default behaviour)
Do you mean that since last week pix_snap should be a lot faster than it used to be?
i'd like to understand the use of single / multiple PBOs a bit more: what happens if 2 pix_video / pix_texture objects use the same PBO? will it be slower than using 2 different PBOs (since the 2nd pix_texture has to wait for the PBO to be free before it can use it)?
same question with a pix_video / pix_texture and a pix_snap: is using 2 PBOs a lot faster than using the same PBO (the default behaviour)?
cheers Cyrille
On 2012-02-29 10:52, Cyrille Henry wrote:
On 29/02/2012 09:34, IOhannes m zmoelnig wrote:
and both [pix_texture] and [pix_snap] already allow for asynchronous DMA-transfers (though i only added PBO-transfers to [pix_snap] a week ago or so). it's not really documented anywhere (yet), but you can send a [pbo $1( message to both these objects to specify the number of PBOs to use (with "0" being the default behaviour)
Do you mean that since last week pix_snap should be a lot faster than it used to be?
the default behaviour is still the same (pbo==0). this is mainly because i found that the optimal setting varies greatly from machine to machine: e.g. on my netbook with fglrx drivers, the old non-PBO method is somehow faster... on my desktop (with some old nvidia card), using PBOs is faster.
i'd like to understand the use of single / multiple PBOs a bit more: what happens if 2 pix_video / pix_texture objects use the same PBO? will it be slower than using 2 different PBOs (since the 2nd pix_texture has to wait for the PBO to be free before it can use it)?
each [pix_texture] will use its own set of PBOs.
when using more PBOs, you basically get a ring-buffer: while the current image is uploaded using PBO(n), PBO(n-1) is displayed, so the upload has one frametick to complete. it also means you get a delay when using >1 PBOs (see the sketch below).
i haven't done any benchmarking with multiple image-sources (though the PBO support for [pix_texture] was implemented in order to get reasonable speed when displaying multiple hires videos for an installation)
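schematically the ring looks about like this; a simplified sketch with N unpack-PBOs, not the actual [pix_texture] code (all names are made up):

#include <GL/glew.h>
#include <cstring>

static const int NUM_PBO = 2;     // what a [pbo 2( message would select
static GLuint pbos[NUM_PBO];
static int cur = 0;

void initUploadPBOs(size_t numBytes)
{
  glGenBuffers(NUM_PBO, pbos);
  for (int i = 0; i < NUM_PBO; i++) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, numBytes, NULL, GL_STREAM_DRAW);
  }
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

// called once per frame with the new image (the target texture is already bound)
void uploadFrame(const unsigned char *pixels, int w, int h, size_t numBytes)
{
  int next = (cur + 1) % NUM_PBO;

  // update the texture from the PBO that was filled on an earlier frame
  // (on the very first frame this buffer is still empty)
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[cur]);
  glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);

  // meanwhile copy the new image into the next PBO; the actual DMA to the
  // GPU can run asynchronously until we use this buffer on the next frame
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[next]);
  glBufferData(GL_PIXEL_UNPACK_BUFFER, numBytes, NULL, GL_STREAM_DRAW); // orphan
  void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
  if (dst) {
    memcpy(dst, pixels, numBytes);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
  }
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

  cur = next;
}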
same question with a pix_video / pix_texture and a pix_snap: is using 2 PBOs a lot faster than using the same PBO (the default behaviour)?
just try it :-)
fgmasdr IOhannes
oh, i thought the pbo message was to select a specific PBO id. since it's how many PBOs to use on a specific pix_texture or pix_snap, my question was irrelevant. ok for the ring buffer: possible latency vs possible performance gain, plus computer-specific tuning.
i'll try that as soon as i can.
thx c
On Wed, Feb 29, 2012 at 4:52 AM, Cyrille Henry ch@chnry.net wrote:
On 29/02/2012 09:34, IOhannes m zmoelnig wrote:
and both [pix_texture] and [pix_snap] already allow for asynchronous DMA-transfers (though i only added PBO-transfers to [pix_snap] a week ago or so). it's not really documented anywhere (yet), but you can send a [pbo $1( message to both these objects to specify the number of PBOs to use (with "0" being the default behaviour)
Do you mean that since last week pix_snap should be a lot faster than it used to be?
Years ago I spent some time with Apple, ATI and Nvidia trying to get the best pix_snap performance, but readback from the main backbuffer will always wait for all drawing to end and then transfer. There is no way to avoid an explicit or implicit glFlush/glFinish on the main buffer to finish the drawing. If you have some more drawing to do there might be an increase in performance, but for capturing an entire rendered image at the end of the drawing calls there is ultimately not much optimization possible.
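Schematically, the difference is only where the wait happens, not whether it happens. This is just a sketch of mine, not vendor-specific code; the buffer handle and doOtherCpuWork() are placeholders:

#include <GL/glew.h>

// Placeholder for whatever else the application can do while the transfer runs.
void doOtherCpuWork();

// 1) Plain readback from the backbuffer: glReadPixels cannot hand back the
//    data before every prior draw call touching that buffer has finished, so
//    it behaves like an implicit glFinish followed by the copy.
void blockingReadback(int w, int h, unsigned char *cpuPixels)
{
  glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, cpuPixels);
}

// 2) PBO readback: the call only schedules the transfer, and the wait for the
//    drawing (and the DMA) is deferred to glMapBuffer. The GPU still has to
//    finish the frame either way; the only gain is whatever CPU work gets
//    overlapped in between. The caller copies the data and calls glUnmapBuffer.
const void *deferredReadback(int w, int h, GLuint pbo)
{
  glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
  glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
  doOtherCpuWork();                        // overlapped with the transfer
  return glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
}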