On 08/08/10 20:12, Dima Strakovsky wrote:
Hi all,
Coming out of lurker mode to ask a question here :) I am kicking around an idea for a work that would require four camera inputs. The video streams would be remixed in realtime and output via a single projector. Was wondering if anyone has played with this scenario and has some advice to offer?
My solution for this was a PCI framegrabber card with 4 capture chips, allowing 16 video inputs multiplexed onto 4 buses, each at 640x480. This was what prompted me to switch to Linux, as there were no such cards with OSX drivers available. Using GEM I was able to read all 4 capture chips simultaneously, and could alpha-blend at least 12 layers of video if I also played back video files. More than 4 simultaneous cameras was also possible if I accepted quite low framerates, but mainly the extra inputs allowed switching between several cameras and video sources from within GEM.
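In case it is useful, here is a rough sketch of one such GEM chain; the device numbers, dimensions and opacity are only examples, and each [gemhead] gets its own render order so the layers stack predictably:

  [gemhead 10]
  |
  [pix_video]        <- send it [device 0( and [dimen 640 480(
  |
  [pix_texture]
  |
  [alpha]            <- enables alpha blending for this chain
  |
  [color 1 1 1 0.5]  <- the fourth value is this layer's opacity
  |
  [rectangle 4 3]

Duplicate the chain with [gemhead 20], [gemhead 30], [gemhead 40] and [device 1(, [device 2(, [device 3( for the other inputs, and drive the opacities from wherever your mixing logic lives.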
This was six years ago and there are more choices available now. Note that if you are working from composite video, very little, if anything, is gained by capturing at better than 320x240, since that is about as much information as most composite signals carry. GEM, or rather the graphics card, scales up very smoothly if set correctly, so you can mix the captures with higher-resolution video files very well.
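The "set correctly" part is mostly down to texture filtering; a minimal sketch, assuming GEM's stock objects:

  [gemhead]
  |
  [pix_video]    <- [dimen 320 240( keeps the capture small
  |
  [pix_texture]  <- [quality 1( selects linear filtering, so the GPU upscale stays smooth
  |
  [rectangle 4 3]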
My prototype was on OSX with a single firewire video input, using DV, but I abandoned this approach because firewire [at least when using DV] has a substantial latency built in [about 8 frames, i.e. roughly a quarter to a third of a second]. It is optimised not for live use but for reliably transferring video from tape. With the framegrabber the cameras were not synced, so latency varied between about 0.5 and 1.5 frames [in a serious digital video mixer the cameras are synced together and the latency is fixed at 1 frame].
Using multiple input machines could work with some kind of streaming, but there would be latency and compression issues to consider. With an appropriate framegrabber, by contrast, the frames can be passed on to GEM quite efficiently, so little CPU is used: all the moving around of pix data happens over fast internal buses, and the alpha blending is done on the graphics card's GPU.
I added a second graphics card later, so eventually I had 4 DVI outputs and 4x4 video inputs. This worked smoothly, and it would have been very hard to move all that data between different machines if I had used streaming instead.
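For a back-of-envelope sense of the data rates: one 640x480 RGB stream at 25 fps is 640 x 480 x 3 bytes x 25 ≈ 23 MB/s, so four streams are about 92 MB/s, and sixteen 320x240 inputs come to roughly the same. That is comfortable for internal buses feeding the graphics card, but it is far beyond 100 Mbit ethernet [about 12 MB/s] and close to the practical limit of gigabit, which is why streaming between machines would have meant compression, and with it latency.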
The framegrabbers were from Euresys, the graphics cards were Nvidia, but the choices would probably be quite different now, six years later!
Simon
Dmitry "Dima" Strakovsky Assistant Professor of Intermedia University of Kentucky http://www.shiftingplanes.org
PS I lived in Lexington when I was little, while my dad worked at that university for 3 years, but that was a very long time ago.