Thoralf Schulze wrote:
hi,
Does using [auto $1< to play a video file with [pix_film] take less CPU than a counter that increments with every render?
On which platform? The answer is often 'yes' on OSX, and on linux + w32 it is 'no' (there it is merely a shortcut for building your own counter).
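a toy sketch of what i mean by "shortcut" (the class and member names are made up, not Gem's actual sources): with auto switched on, the object simply advances its own frame counter on every render, which is exactly what your external counter would do:

// hypothetical sketch, not the real [pix_film] code
#include <cstdio>

struct FilmObject {
  bool  autoMode = false;  // toggled by the [auto $1< message
  float frame    = 0.f;    // current frame number
  float step     = 1.f;    // increment per render when auto is on

  void decodeFrame(int f) { std::printf("decoding frame %d\n", f); }

  void render() {
    decodeFrame(static_cast<int>(frame));
    if (autoMode)       // auto: advance the counter ourselves,
      frame += step;    // just like an external counter would
  }
};

int main() {
  FilmObject film;
  film.autoMode = true;
  for (int i = 0; i < 5; ++i) film.render();  // five render cycles
  return 0;
}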
However, I found that [pix_film] seems to decode a frame whenever a render command arrives at its inlet, even if it is the same frame as with the last render command (bah, twisted explanation ...)
is it? this would be bad (unless i thought of something very cool when i modified the sources to behave like that...)
the CVS versions of [pix_film]/[pix_movie] are now threaded and should behave much better. however, they use pthreads, so they are likely not to work with a standard windows build (but i guess you could build them with mingw)
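roughly, the threaded idea looks like this (a simplified sketch with made-up names, not the actual CVS code): the render thread only *requests* a frame and a pthread worker does the decoding, so a slow decoder no longer stalls the render cycle:

// minimal pthread decoder sketch; usleep() stands in for the real decode
#include <pthread.h>
#include <cstdio>
#include <unistd.h>

struct Decoder {
  pthread_t       thread;
  pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
  int  wanted = -1, ready = -1;
  bool running = true;

  static void* run(void* arg) {
    Decoder* d = static_cast<Decoder*>(arg);
    pthread_mutex_lock(&d->lock);
    while (d->running) {
      while (d->running && d->wanted == d->ready)
        pthread_cond_wait(&d->cond, &d->lock);   // nothing new requested
      if (!d->running) break;
      int f = d->wanted;
      pthread_mutex_unlock(&d->lock);            // decode outside the lock
      usleep(20000);                             // "decoding" frame f
      pthread_mutex_lock(&d->lock);
      d->ready = f;                              // frame f is available now
    }
    pthread_mutex_unlock(&d->lock);
    return 0;
  }

  void start() { pthread_create(&thread, 0, run, this); }

  void request(int f) {           // called from the render thread
    pthread_mutex_lock(&lock);
    wanted = f;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
  }

  int latest() {                  // render just uses whatever is decoded
    pthread_mutex_lock(&lock);
    int f = ready;
    pthread_mutex_unlock(&lock);
    return f;
  }

  void stop() {
    pthread_mutex_lock(&lock);
    running = false;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    pthread_join(thread, 0);
  }
};

int main() {
  Decoder d;
  d.start();
  for (int f = 0; f < 5; ++f) {
    d.request(f);
    usleep(40000);                               // one "render cycle"
    std::printf("render sees frame %d\n", d.latest());
  }
  d.stop();
  return 0;
}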
Thinking about this again - could it be that uploading a texture to the gpu takes up some cpu cycles as well?
it surely does.
pix_buffer prevents these transfers as well, iirc.
it surely does not, unless i am totally mistaken.
if the image changes (or Gem _thinks_ that the image has changed), then it will be uploaded. in [pix_film], every time a "new" frame is grabbed, it is assumed to have changed. so if the decoding is done each render cycle, Gem thinks that the image has changed each render cycle and thus does duplicate texture uploads.
so fixing the duplicate decoding should help in both cases.
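to illustrate that flag (hypothetical names, not the actual pix_texture code): the texture is only re-uploaded when the upstream pix object marked the image as new, so a decoder that sets the flag on every render cycle forces a wasted upload every cycle:

// toy "newimage" flag; printf stands in for the actual GL upload
#include <cstdio>

struct PixBlock {
  bool newimage = false;   // set by the decoder when it produced a new frame
  // ... pixel data would live here ...
};

void textureRender(PixBlock& pix) {
  if (pix.newimage) {
    std::printf("uploading pixels to the GPU\n");
    pix.newimage = false;  // consume the flag
  } else {
    std::printf("image unchanged: reusing the texture on the GPU\n");
  }
}

int main() {
  PixBlock pix;
  pix.newimage = true;     // decoder really produced a new frame
  textureRender(pix);      // upload happens
  textureRender(pix);      // no upload: nothing changed
  pix.newimage = true;     // decoder re-decoded the *same* frame and set the
  textureRender(pix);      // flag anyway -> a duplicate (wasted) upload
  return 0;
}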
(now that i rethink it, i remember that for certain decoding APIs (namely mpeg, streams, ...), the given frame-number does not necessarily correspond to the decoded frame-number... anyhow, this is why i decided to let the decoder (e.g. filmAVI) decide whether it wants to re-decode the stream or not)
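something along these lines (made-up class names; the real backends are plugins like filmAVI, ...): a seekable backend can skip the decode when the same frame is requested twice, while a stream-like backend has to decode anyway because the frame number is unreliable:

// sketch of letting each decoder decide about re-decoding
#include <cstdio>

struct FilmDecoder {
  virtual bool changeImage(int frame) = 0;   // true if a decode is needed
  virtual ~FilmDecoder() {}
};

struct SeekableDecoder : FilmDecoder {       // e.g. an AVI-style backend
  int current = -1;
  bool changeImage(int frame) override {
    if (frame == current) return false;      // same frame: keep last decode
    current = frame;
    return true;
  }
};

struct StreamDecoder : FilmDecoder {         // e.g. an mpeg/stream backend
  bool changeImage(int) override {
    return true;                             // frame number is unreliable:
  }                                          // always decode the next frame
};

int main() {
  SeekableDecoder avi;
  StreamDecoder   mpeg;
  for (int i = 0; i < 3; ++i)                // same frame requested 3 times
    std::printf("avi: %s, mpeg: %s\n",
                avi.changeImage(7)  ? "decode" : "skip",
                mpeg.changeImage(7) ? "decode" : "skip");
  return 0;
}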
mfg.asdr IOhannes