Thanks for the link; I'll take a look when I have more time.
OK, so it's Chromium itself that passes the texture data through the cluster. Indeed, video would be more difficult, but with a 1 Gbps multicast LAN one should easily be able to distribute a DV stream to all machines... in theory! Are there multicast DV streamers?
The streaming part fits very well into TOT.
I've CCed Franz Hildgen and Simon Piette, who are looking after the DV point-to-point application teleCHACHA.
Franz and Simon, we're talking about pd/Gem working in a cluster context, where the GL command stream is forwarded to a number of machines, each of which processes and projects one part of the image. This is very closely related to the lighTWIST and pixelTANGO integration problem.
Would it be possible to multicast a DV stream to all the cluster machines, so that each could use the stream in its portion of the final image?
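On the sending side, something like the sketch below would be the starting point (just a rough sketch, not a real DV streamer: the multicast address, port, and frame/chunk sizes are arbitrary placeholders; raw DV is about 3.6 MB/s, i.e. roughly 29 Mbit/s, so a 1 Gbps LAN has plenty of headroom for one stream):

// Rough sketch of a multicast sender, not a working DV streamer.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

int main()
{
    // one UDP socket, sending to a multicast group that every node joins
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in group{};
    group.sin_family = AF_INET;
    group.sin_port = htons(5004);                        // arbitrary port
    inet_pton(AF_INET, "239.255.0.1", &group.sin_addr);  // arbitrary group

    unsigned char ttl = 1;                               // stay on the local segment
    setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

    // pretend this is one raw DV frame grabbed from the camera (~120 kB NTSC)
    std::vector<char> frame(120000, 0);

    // a DV frame doesn't fit in one datagram, so send it in ~1400-byte chunks
    const size_t chunk = 1400;
    for (size_t off = 0; off < frame.size(); off += chunk) {
        size_t len = std::min(chunk, frame.size() - off);
        sendto(sock, frame.data() + off, len, 0,
               reinterpret_cast<sockaddr *>(&group), sizeof(group));
    }

    close(sock);
    return 0;
}

Each render node would then join the group with IP_ADD_MEMBERSHIP and decode only the part of each frame it needs for its tile.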
Mike Wozniewski from McGill is looking at using Chromium with pd/Gem for a CAVE application.
B>
Mike Wozniewski wrote:
Hey.
I did not see a link to any in-depth documentation for Chromium.
Check http://chromium.sourceforge.net/doc/index.html. Very comprehensive.
I'm not a C++ programmer, so I'm not sure what it would involve to build these functions into GL wrappers for Gem. I'm not sure how the functions latch onto an existing context, or how the whole thing works architecturally (what parts run on which machines, master/slave connections, etc.).
Well, from the docs, it seems that we don't have to do anything at all to Gem. This is because Chromium disguises itself as the OpenGL library - i.e., when a GL call is made, Chromium intercepts it and the regular system OpenGL library sits idle.
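To illustrate the idea (this is only a conceptual sketch, not Chromium's actual code): because Chromium's replacement libGL exports the standard entry points, any application linked against OpenGL - Gem included - ends up calling into Chromium's functions, which pack each call into a command stream instead of rendering it locally. Roughly:

// Conceptual sketch of OpenGL symbol interposition, not Chromium source.
// packAndSend() is a made-up placeholder for "append this call to the
// outgoing command stream".
#include <cstdio>

static void packAndSend(const char *name, const float *args, int nargs)
{
    // a real stream processor would serialize an opcode + operands here
    // and flush the buffer over the network to the cluster nodes
    std::printf("forwarding %s with %d args\n", name, nargs);
}

// Because this library is loaded in place of the system libGL, Gem's
// ordinary glVertex3f() calls land here without Gem being recompiled.
extern "C" void glVertex3f(float x, float y, float z)
{
    float args[3] = { x, y, z };
    packAndSend("glVertex3f", args, 3);
}

So Gem never knows it isn't talking to the real driver; the stream is reassembled and rendered on the machines that own the actual displays.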
I just wonder about performance, considering the speed of the AGP bus for texture transfers vs. Ethernet transport between the source and destination for the texture! Especially if you're talking about moving video...
So according to http://brighton.ncsa.uiuc.edu/%7Eprajlich/wall/ppb.html, when distributing video the movie first has to be played in "write mode", where all textures are cached onto the disks across the cluster. Subsequent playback is then done in "read mode", where each machine just reads from its local disk. I see problems with this: all videos have to exist on disk first (no streaming from live cameras), the first playback is going to be SLOW, and if you have many video clips this could be extremely annoying.
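To put rough numbers on the bandwidth concern quoted above (assuming uncompressed 720x480 RGBA frames, which is roughly what a decoded DV frame becomes once it's a texture): 720 x 480 x 4 bytes is about 1.4 MB per frame, or about 41 MB/s (~330 Mbit/s) at 30 fps. A multicast gigabit LAN can carry one such stream, but unicasting it separately to N render nodes multiplies that by N, while AGP moves textures at roughly 1 GB/s, so the network rather than the bus is the likely bottleneck.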
(Are you guys even using video in your CAVE application?)
Not yet. But we will eventually want to put video avatars of remote participants into the world (ouch - this is not going to work with the above-mentioned strategy).
-Mike