that's great!!
thanks ;)
that's the first time i've gotten an acceptable frame rate in PD, and funnily enough i was planning to make such a package myself.
by the way, do you plan to add audio support one of these days? (this might be my little contribution)
for live video (v4l) it should work the normal way: if the capture card or camera supports audio you can use that audio as a pd adc input. right now the capture is running continuously in another thread and a frame is output when a bang is received. so the audio and video should be synchronized.
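to make that concrete, the scheme is roughly this (a stripped-down sketch with made-up names, not the actual pdp code):

    #include <pthread.h>
    #include <string.h>

    static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned char latest_frame[320 * 240 * 3]; /* assumed frame size */

    /* runs continuously in the capture thread, always overwriting
       the shared "latest frame" buffer */
    void *capture_loop(void *arg)
    {
        for (;;) {
            unsigned char *f = v4l_grab_frame(); /* made-up helper */
            pthread_mutex_lock(&frame_lock);
            memcpy(latest_frame, f, sizeof(latest_frame));
            pthread_mutex_unlock(&frame_lock);
        }
    }

    /* called in the pd thread when a bang comes in: just hand out
       the most recent frame the capture thread has stored */
    void x_bang(void *x)
    {
        pthread_mutex_lock(&frame_lock);
        outlet_frame(x, latest_frame); /* made-up helper */
        pthread_mutex_unlock(&frame_lock);
    }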
for the quicktime playback object it is a bit different: the current qt playback object is frame based and externally synced using bangs, not stream based. so if you have some ideas on how to synchronize, let me know. one option could be to let it behave like sfread~ and output frames synced to the audio stream, but there are a lot more possibilities of course.
another thing i was thinking about: solving this problem in pd with separate media files for video and audio, and writing an abstraction to handle sync for the different types of playback. i think this option is better since it is a lot more flexible.
so to answer your question, i am leaving the problem open for the moment, but feel free to come up with a nice solution of course ;)
i think the audio playback can be frame-based too, leaving the responsibility to the patch to navigate the frames and play the sound accordingly, like playing the sound from frame 578, then 23, etc.
the audio should be loaded into a buffer and then read in real time by a classic dsp function. funnily enough, mp3amp~ already works this way: it receives audio data from the network and fills a buffer.
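roughly, the dsp part would look something like this (just a sketch, the names are invented):

    #include "m_pd.h"

    typedef struct _chunkplay {
        t_object x_obj;
        t_sample *x_buffer;  /* decoded audio for the whole file */
        int x_readpos;       /* current read position */
        int x_chunkend;      /* end of the chunk for the current frame */
    } t_chunkplay;

    static t_int *chunkplay_perform(t_int *w)
    {
        t_chunkplay *x = (t_chunkplay *)(w[1]);
        t_sample *out = (t_sample *)(w[2]);
        int n = (int)(w[3]);
        while (n--) {
            if (x->x_readpos < x->x_chunkend)
                *out++ = x->x_buffer[x->x_readpos++];
            else
                *out++ = 0;  /* past the chunk: output silence */
        }
        return (w + 4);
    }

a "frame x" message would then just set x_readpos and x_chunkend to that frame's chunk boundaries.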
i know the synchronisation would be loose, but only well-trained eyes would be able to see any difference.
this would work well on fast machines only.
what do you think?
best,
sevy/yves
yves,
> i think the audio playback can be frame-based too, leaving the responsibility to the patch to navigate the frames and play the sound accordingly, like playing the sound from frame 578, then 23, etc.
> the audio should be loaded into a buffer and then read in real time by a classic dsp function. funnily enough, mp3amp~ already works this way: it receives audio data from the network and fills a buffer.
> i know the synchronisation would be loose, but only well-trained eyes would be able to see any difference.
> this would work well on fast machines only.
> what do you think?
this is certainly possible, but what would happen if the patch bangs at a rate that is slower than the frame rate?
solving it with a separate wav file that is read into an array (like mick's solution) still seems more appropriate to me for absolute control, since you could use tabread4~ to change the rate if you know the bang frequency, and solve clicks and sound drops that way. i think trying to foresee all the different playback modes inside a c external would be re-inventing the wheel, since chopping up sound in pd is a piece of cake.
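the arithmetic is trivial. for example (assuming 44.1 kHz and 25 bangs per second, just to put numbers on it):

    /* assumed numbers, just to illustrate the arithmetic */
    float sr = 44100.0f;           /* sample rate */
    float bang_rate = 25.0f;       /* bangs per second */
    float chunk = sr / bang_rate;  /* 1764 samples per frame */
    /* frame n lives at samples n*1764 .. (n+1)*1764 of the array,
       so something like
           [phasor~ 25] -> [*~ 1764] -> [+~ n*1764] -> [tabread4~ sound]
       reads exactly one frame's worth of audio per bang period;
       changing the phasor~ frequency changes the playback rate. */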
however, i still think it would be valuable to have some kind of object that does playback like you suggest, but synced on the sound, not on displaying a frame when a bang comes in.
for example: if you send it a 78 msg, it would start playing at frame 78, output the sound stream (perhaps with resampling or playing backwards) and then keep generating frames synced on the audio until you send it another frame message. that way you can still jump around the movie but with very good sync to the sound, and have perfect "normal" playback.
so the pdp_qt object could have 2 modes: an image mode (single frame output with external sync, like it is now) and an audio/video mode (playback synced on the sound, responding to start/stop/seek/rate messages). for things that need more chopping-up control, a method could be added that dumps the audio into a pd array.
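the message interface could then be as simple as this (hypothetical method names, nothing of this exists yet; c is the object's class):

    class_addmethod(c, (t_method)pdp_qt_mode,
        gensym("mode"), A_SYMBOL, 0);    /* "image" or "av" */
    class_addmethod(c, (t_method)pdp_qt_start, gensym("start"), 0);
    class_addmethod(c, (t_method)pdp_qt_stop,  gensym("stop"), 0);
    class_addmethod(c, (t_method)pdp_qt_seek,
        gensym("seek"), A_FLOAT, 0);     /* jump to a frame */
    class_addmethod(c, (t_method)pdp_qt_rate,
        gensym("rate"), A_FLOAT, 0);     /* playback speed */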
tom
hi,
Tom Schouten wrote:
> yves,
>> i think the audio playback can be frame-based too, leaving the responsibility to the patch to navigate the frames and play the sound accordingly, like playing the sound from frame 578, then 23, etc.
>> the audio should be loaded into a buffer and then read in real time by a classic dsp function. funnily enough, mp3amp~ already works this way: it receives audio data from the network and fills a buffer.
>> i know the synchronisation would be loose, but only well-trained eyes would be able to see any difference.
>> this would work well on fast machines only.
>> what do you think?
> this is certainly possible, but what would happen if the patch bangs at a rate that is slower than the frame rate?
some pieces of silence, and silence is part of your artwork ))
> solving it with a separate wav file that is read into an array (like mick's solution) still seems more appropriate to me for absolute control, since you could use tabread4~ to change the rate if you know the bang frequency, and solve clicks and sound drops that way. i think trying to foresee all the different playback modes inside a c external would be re-inventing the wheel, since chopping up sound in pd is a piece of cake.
this would mean you have to extract the audio from all the videos you have on your hard disk. i think we can avoid this.
> however, i still think it would be valuable to have some kind of object that does playback like you suggest, but synced on the sound, not on displaying a frame when a bang comes in.
> for example: if you send it a 78 msg, it would start playing at frame 78, output the sound stream (perhaps with resampling or playing backwards) and then keep generating frames synced on the audio until you send it another frame message. that way you can still jump around the movie but with very good sync to the sound, and have perfect "normal" playback.
but i'm interested in "abnormal" playback too.
> so the pdp_qt object could have 2 modes: an image mode (single frame output with external sync, like it is now) and an audio/video mode (playback synced on the sound, responding to start/stop/seek/rate messages). for things that need more chopping-up control, a method could be added that dumps the audio into a pd array.
it's certainly not feasible to load a 10 min audio track into a pd array.
> tom
> this would mean you have to extract the audio from all the videos you have on your hard disk. i think we can avoid this. ...
> it's certainly not feasible to load a 10 min audio track into a pd array.
of course, but it is possible for smaller files. it's up to the user not to overload the system. plus it is very easy to add this functionality in a few lines of code, so i don't see why not. i would certainly have some use for it. a simple "exportaudio <array>" message would do.
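the whole method would be something like this (a sketch: the x_audio/x_nsamples fields are made up, the garray calls are the standard pd ones):

    #include "m_pd.h"

    typedef struct _pdp_qt {      /* only the fields this sketch needs */
        t_object x_obj;
        t_float *x_audio;         /* decoded audio samples */
        int x_nsamples;           /* number of decoded samples */
    } t_pdp_qt;

    void pdp_qt_exportaudio(t_pdp_qt *x, t_symbol *arrayname)
    {
        t_garray *a;
        int size, i, n;
        t_float *vec;

        if (!(a = (t_garray *)pd_findbyclass(arrayname, garray_class))) {
            post("exportaudio: %s: no such array", arrayname->s_name);
            return;
        }
        if (!garray_getfloatarray(a, &size, &vec)) {
            post("exportaudio: %s: bad template", arrayname->s_name);
            return;
        }
        n = (x->x_nsamples < size) ? x->x_nsamples : size;
        for (i = 0; i < n; i++) vec[i] = x->x_audio[i]; /* copy what fits */
        garray_redraw(a);
    }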
then for audio playback inside the external:
>> this is certainly possible, but what would happen if the patch bangs at a rate that is slower than the frame rate?
> some pieces of silence, and silence is part of your artwork )) ... but i'm interested in "abnormal" playback too.
i like the idea of having "granular" playback, but i don't want to give up the possibility of doing normal playback with perfect sync. there has to be a simple way to have both possibilities: chunks & synced chunks.
what about having an "autoplay on/off" message that switches between these two states? "autoplay off" would play only the chunk of the current frame and stop if no other frame is selected (by an incoming float or bang message). "autoplay on" would keep playing the following chunks and output the corresponding frames.
if a chunk's playback is finished, a simple test in the dsp function could determine if a new chunk has to be played and its corresponding frame sent to the outlet.
a "videosync ext/audio" message could determine which sync state we are in:
ext: only react to a "frame x" or bang message, output the corresponding frame and start playback of the audio chunk.
audio: save the next frame to be output and let the dsp routine decide when to output the frame and start playback of the chunk corresponding to that frame (when the current audio chunk's playback is done).
if videosync = ext and autoplay = on, the audio could keep playing without outputting new video frames. if a new frame is requested, the audio playback immediately switches to the new chunk.
that way you can have playback with silence, without silence, an audio-synced frame rate and an externally synced frame rate.
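in the dsp routine the test could be as simple as this (all names invented, just to show the logic):

    #include "m_pd.h"

    typedef enum { SYNC_EXT, SYNC_AUDIO } t_syncmode;

    typedef struct _pdp_qt {          /* only the fields the sketch needs */
        t_object x_obj;
        t_syncmode x_videosync;       /* "videosync ext/audio" state */
        int x_autoplay;               /* "autoplay on/off" state */
        int x_readpos, x_chunkend;    /* current audio chunk position */
        int x_current_frame;
        int x_pending_frame;          /* -1 if none waiting */
        t_clock *x_clock;             /* defers frame output to pd thread */
    } t_pdp_qt;

    static void start_chunk(t_pdp_qt *x, int frame);  /* made-up helper */

    /* called once per dsp block */
    static void check_chunk_done(t_pdp_qt *x)
    {
        if (x->x_readpos < x->x_chunkend)
            return;                       /* chunk still playing */

        if (x->x_videosync == SYNC_AUDIO && x->x_pending_frame >= 0) {
            /* audio sync: the dsp routine decides when the pending
               frame is shown and its chunk started */
            start_chunk(x, x->x_pending_frame);
            clock_delay(x->x_clock, 0);   /* output frame in pd thread */
            x->x_pending_frame = -1;
        }
        else if (x->x_autoplay) {
            /* autoplay on: just continue with the next frame's chunk */
            start_chunk(x, x->x_current_frame + 1);
            clock_delay(x->x_clock, 0);
        }
        /* else: autoplay off + external sync: silence until the next
           frame message or bang arrives */
    }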
(another thing: having audio chunk sync would be good for limiting the rate at which images are decoded. it is easy to overload the system trying to decode at about 500 fps. right now i solve this by having a cold inlet that sets the next frame to play and a hot inlet that outputs this frame when a bang is received. so if you update the frame faster than the maximum sensible frame rate (determined by the bang rate), it drops frames...)
tom
Hi Tom and hi list
This is an abstraction (not an external) that I developed for a piece I did last year, and it does what you are asking. You are right in that the audio and video have to be two separate files.
It's a bit of a mess, but it works well enough. I have performed live with a version of it and it's reliable. You don't have to use synced sound (although you can, and it stays in sync - see below). I have two other versions: one which uses time stretching so that the pitch doesn't change (and random seeking doesn't result in mad sped-up playback), and one which uses Miller's Pvoc patch to process the audio, which is the best, but it keeps crashing at the moment.
Hope that you can make sense of it and that it works properly. I have added some comments - hope they are of use.
Cheers for your interest
Mick
hi mick,
that's exactly what i had in mind. i'd like to have a look at your abstraction.
tom
Hi,
Saw this thread and remembered that I have an abstraction which does this.
The way I organised it, the audio for the file was given the same name as the video file (file1.avi = file1.avi.wav), and the filename was passed on to the 'read' message (using list2symbol) to load the file into the array (using -maxsize so the array wouldn't truncate the file). I then calculated the total number of samples, divided it by 44.1 (or whatever the sample rate was) and then by the total number of frames in the video file. Then your frame number can be used to control video and audio whilst maintaining perfect sync. I had to adjust the start frame so that the delay in frame display could be factored out - the number of frames to adjust by depends on the codec.
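To put numbers on it (made-up figures: 10 seconds of 44.1 kHz audio against a 250-frame video), the calculation is:

    /* 10 s of 44.1 kHz audio against a 250-frame video (made-up numbers) */
    float nsamples     = 441000.0f;           /* samples in the array   */
    float total_ms     = nsamples / 44.1f;    /* = 10000 ms of audio    */
    float ms_per_frame = total_ms / 250.0f;   /* = 40 ms per frame      */
    /* frame n starts n * 40 ms into the array, so the same frame
       number drives both the video object and the audio playback */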
If you're interested in seeing it (it's a bit messy), I'll post it.
hope this is of interest to you.
Mick