Hi Tom and hi list
This is an abstraction (not an external) that I developed for a piece I did last year, and it does what you are asking. You are right that the audio and video have to be two separate files.
It's a bit of a mess, but it works well enough; I have performed live with a version of it and it's reliable. You don't have to use sync sound, although you can, and it stays in sync (see below). I have two other versions: one uses time stretching so that the pitch doesn't change (and random seeking doesn't result in mad sped-up playback), and one uses Miller's Pvoc patch to process the audio, which is the best but keeps crashing at the moment.
Hope that you can make sense of it and that it works properly. I have added some comments - hope they are of use.
Cheers for your interest
Mick
hi mick,
that's exactly what i had in mind. i'd like to have a look at your
external.
tom
Hi,
Saw this thread and remembered that I have an abstraction which does this.
The way I organised it, the audio for a file was given the same name as the video file (file1.avi = file1.avi.wav). The filename was passed on to the 'read' message (using list2symbol), which loaded the file into the array (with -maxsize so the array wouldn't truncate the file). I then calculated the total number of samples, divided it by 44.1 (or whatever the sample rate was) to get the length in milliseconds, and divided that by the total number of frames in the video file. Your frame number can then be used to control video and audio whilst maintaining perfect sync. I had to adjust the start frame so that the delay in frame display could be factored out - the number of frames to compensate by depends on the codec.
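To make the arithmetic concrete, here is the same calculation written out in C. This is purely illustrative (in the abstraction it is just a couple of division objects), and the file sizes and frame counts below are made-up numbers:

/* sync arithmetic: map a video frame number to an audio position */
#include <stdio.h>

int main(void)
{
    double total_samples = 13230000.0; /* e.g. a 5-minute file at 44.1 kHz */
    double samples_per_ms = 44.1;      /* 44100 samples per second */
    double total_frames = 7500.0;      /* e.g. 5 minutes at 25 fps */
    int frame = 1234;                  /* the frame being requested */

    double length_ms = total_samples / samples_per_ms;  /* audio length in ms */
    double ms_per_frame = length_ms / total_frames;     /* audio per video frame */
    double position_ms = frame * ms_per_frame;          /* playback position */

    printf("%.2f ms per frame, frame %d starts at %.2f ms\n",
           ms_per_frame, frame, position_ms);
    return 0;
}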
If you're interested in seeing it (it's a bit messy), I'll post it.
Hope this is of interest to you.
Mick
----- Original Message -----
From: "Tom Schouten" doelie@zzz.kotnet.org
To: "Yves Degoyon" ydegoyon@free.fr
Cc: PD-dev@iem.kug.ac.at
Sent: Monday, November 25, 2002 1:11 PM
Subject: Re: [PD-dev] yet another video processing external...
this would mean you have to extract the audio from all the videos you have on your hard disk. i think we can avoid this. ...
it's certainly not feasible to load a 10 min audio track in a pd array.
of course, but it is possible for smaller files. it's up to the user not to overload the system. plus it is very easy to add this functionality in a few lines of code, so i don't see why not. i would certainly have some use for it. a simple "exportaudio <array>" message would do.
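For what it's worth, a handler for such a message could be quite small. This is only a rough sketch against the standard garray API in m_pd.h; the t_myvideo struct and its x_samples/x_nsamples fields (the decoded track) are hypothetical, as is the class name:

/* "exportaudio <array>": copy the decoded audio track into a named pd array */
static void myvideo_exportaudio(t_myvideo *x, t_symbol *arrayname)
{
    t_garray *a;
    int npoints, i;
    t_float *vec;

    if (!(a = (t_garray *)pd_findbyclass(arrayname, garray_class)))
    {
        post("exportaudio: no such array '%s'", arrayname->s_name);
        return;
    }
    garray_resize(a, x->x_nsamples);           /* make room for the whole track */
    if (!garray_getfloatarray(a, &npoints, &vec))
        return;
    for (i = 0; i < npoints && i < x->x_nsamples; i++)
        vec[i] = x->x_samples[i];
    garray_redraw(a);
}

/* registered in the setup routine with something like:
 *   class_addmethod(myvideo_class, (t_method)myvideo_exportaudio,
 *                   gensym("exportaudio"), A_SYMBOL, 0);
 */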
then for audio playback inside the external:
this is certainly possible, but what would happen if the patch bangs at a rate that is slower than the frame rate?
some pieces of silence, and silence is part of your art-work )) ... but i'm interested in "abnormal" playback too.
i like the idea of having "granular" playback but i don't want to give up the possibility of normal playback with perfect sync. there has to be a simple way to have both possibilities: chunks & synced chunks.
what about having an "autoplay on/off" message that would switch between these two states? "autoplay off" would play only the chunk of the current frame and stop if no other frame is selected (by an incoming float or bang message). "autoplay on" would keep playing the following chunks and output the corresponding frames.
when a chunk's playback is finished, a simple test in the dsp function could determine whether a new chunk has to be played and its corresponding frame sent to the outlet.
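As a rough sketch of that test (struct and field names are hypothetical, and the clock setup is not shown), the perform routine could look something like this, using a clock so the frame output and chunk switch happen outside the dsp call:

/* play the current chunk; when it ends, ask for the next frame via a clock */
static t_int *myvideo_perform(t_int *w)
{
    t_myvideo *x = (t_myvideo *)(w[1]);
    t_sample *out = (t_sample *)(w[2]);
    int n = (int)(w[3]);

    while (n--)
    {
        if (x->x_pos < x->x_chunkend)
            *out++ = x->x_samples[x->x_pos++];   /* still inside the chunk */
        else
        {
            *out++ = 0;                          /* silence between chunks */
            if (x->x_autoplay && !x->x_pending)  /* chunk done: request next frame */
            {
                x->x_pending = 1;
                clock_delay(x->x_clock, 0);      /* tick outputs the frame, starts
                                                    the next chunk, clears x_pending */
            }
        }
    }
    return (w + 4);
}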
an "videosync ext/audio" message could dermine which sync state we are in:
ext: only react to a "frame x" or bang message, output the corresponding frame and start playback of its audio chunk.
audio: save the next frame to be output and let the dsp routine decide when to output the frame and start playback of the chunk corresponding to that frame (when the current audio chunk's playback is done).
if videosync = ext and autoplay = on, the audio could keep playing without outputting new video frames. if a new frame is requested, the audio playback immediately switches to the new chunk.
that way you can have playback with silence, playback without silence, audio-synced frame rate, and externally synced frame rate.
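Putting the two flags together, the message side might look roughly like this. Again the names are hypothetical: SYNC_EXT/SYNC_AUDIO stand for the two videosync states, myvideo_startchunk would position the audio at the chunk belonging to a frame, and the clock tick from the sketch above would handle the videosync=audio case:

/* "frame x" (or float) request */
static void myvideo_frame(t_myvideo *x, t_floatarg f)
{
    int frame = (int)f;
    if (x->x_videosync == SYNC_EXT)
    {
        outlet_float(x->x_frameout, frame);  /* output the frame right away */
        myvideo_startchunk(x, frame);        /* jump the audio to its chunk */
    }
    else
        x->x_nextframe = frame;  /* SYNC_AUDIO: picked up by the clock tick
                                    when the current chunk finishes */
}

/* "autoplay 0/1" */
static void myvideo_autoplay(t_myvideo *x, t_floatarg onoff)
{
    /* on: keep stepping through chunks and frames when a chunk ends;
       off: play the current chunk, then go silent until a new request */
    x->x_autoplay = (onoff != 0);
}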
(another thing: having audio chunk sync would be good for limiting the rate at which images are decoded. it is easy to overload the system trying to decode at about 500 fps. right now i solve this by having a cold inlet that sets the next frame to play and a hot inlet that outputs this frame when a bang is received. so if you update the frame faster than the maximum sensible frame rate (determined by the bang rate), it drops frames... )
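That cold-inlet/hot-inlet scheme reads roughly like this in external code (hypothetical names; myvideo_decode_and_output stands for whatever actually decodes and sends the frame). The float inlet only stores the requested frame, the bang decodes it, so the decode rate is bounded by the bang rate and intermediate requests are simply dropped:

/* cold inlet: just remember the most recent frame request */
static void myvideo_setframe(t_myvideo *x, t_floatarg f)
{
    x->x_nextframe = (int)f;
}

/* hot inlet: decode and output the remembered frame, if it changed */
static void myvideo_bang(t_myvideo *x)
{
    if (x->x_nextframe != x->x_curframe)
    {
        x->x_curframe = x->x_nextframe;
        myvideo_decode_and_output(x, x->x_curframe);  /* hypothetical helper */
    }
}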
tom