Hey Jhave,
My research group (www.rcc.ryerson.ca/synthops) has done a few similar pieces.
We have not used tracking data from the room itself to do the mixing, but we have used tracking data from the video streams to affect parameters (sound, for example). "touch", a piece by another artist, is an example: it senses contact between video streams and generates sound from that.
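The patch structure for that kind of mapping is simple. A rough sketch in softVNS terms (the object names past the top box are from memory or are just placeholders, so check the softVNS documentation for the exact ones):

[v.movie]               <- or a live camera / incoming stream
|
[v.motion]              <- frame differencing: how much the image changed
|
[reduce to one number]  <- collapse the motion frame to a single activity value
|
[scale / map]           <- rescale that number to a useful range
|
[sound parameter]       <- e.g. amplitude or filter cutoff of a synth voice

"touch" works roughly like that for each pair of streams, except the analysis stage looks for overlap between the two images rather than overall activity.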
We did use softVNS (and AccessGrid for streaming).
SoftVNS is a very powerful tool. It is very well documented, very fast (much faster than Jitter), and very clear to use:
[v.movie]
|
[v.add 125]
|
[v.screen]
That chain adds 125 to each pixel of a QuickTime movie. SoftVNS is also incredibly stable (under OSX!). I wouldn't know how to compare it to PiDiP since I haven't got it working, but the concepts are very similar to PDP's.
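For comparison, I would expect the equivalent chain on the PDP/PiDiP side to look something like this (again from memory of the PDP docs, and untested since I haven't got it running):

[pdp_qt]        <- QuickTime playback
|
[pdp_* effect]  <- whichever arithmetic/effect object does the pixel math
|
[pdp_xv]        <- output window under X

Same dataflow idea: packets of video pushed through a chain of processing objects, just like the v.* chain above.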
Ben

----- Original Message -----
From: "JHAVE" jhave@vif.com
To: pd-list@iem.at
Sent: Saturday, December 13, 2003 11:48 PM
Subject: [PD] Re:what to do 2
One surprisingly wonderful example (which was announced here already): http://www.r4nd.org/rand_home.html
It's like an algorithmic sleepbot. Pd's potential as generative anthropology is strong; this site astounds me.
Has anyone made a room where video and sound are streamed in from online sources and mixed by motion within the room? And PiDiP: how does it compare to softVNS as a tool?