I actually have a number of ideas for how to deal with this, but haven't had any time to do it...it has been on my gem.todo.txt list for ~3 years now :-)
Are you looking for prebuilt objects? code snippets? thoughts and ideas? I did some primitive analysis back when I was working on SGIs, and got some meaningful numbers back...
Later, Mark
============================
= mdanks@stormfront.com    =
= Lead Programmer PS2      =
= http://www.danks.org/mark =
============================
-----Original Message-----
From: Miller Puckette [mailto:mpuckett@man104-1.ucsd.edu]
Sent: Thursday, June 14, 2001 2:03 PM
To: greg paynter
Cc: pure data
Subject: Re: [PD] video midi
Hi Greg,
I don't think anyone's done that yet in Pd, but I'm hoping someone will...
cheers, Miller
On Fri, Jun 15, 2001 at 02:36:28AM +1000, greg paynter wrote:
Does anyone have any answers as to how to trigger a MIDI input signal from a video source?

Has anyone considered writing a library for Pd that generates MIDI values based on pixel movement in a video source?

Is anyone using kinetic input to inspire music and/or GEM visuals in their performance work?

Any ideas gratefully received!

thanks
I've been working on exactly this on and off for about three months. I was hesitant to post it to the list until things got more finalised, but I'd also hate for people to take off and duplicate efforts at this point. I've been very pleased to read on the list that there is so much interest in such a thing.
My primary concern in this project has been video tracking (i.e., taking a video stream in real time and outputting values that can be used to control audio or other media), though the system has also proven useful for basic 2D pixel-based graphics.
TECHNICAL DETAIL: My overall architecture is to pass video data between objects as if they were very large blocks of audio data. My video_in_rgb object, for instance, outputs three data streams: one each for red, green and blue. Since the "data rate" for video is much higher than for audio, all the video processing objects have to live in their own subpatch, where a block~ object is created behind the scenes to make everything work.

Perhaps it's best to explain this by way of example: a 320x240 @ 15 fps greyscale video stream where each pixel is represented by one float requires 320 * 240 * 15 = 1,152,000 floats per second, which is much higher than the sampling rate of most audio hardware. So, to get video and audio running harmoniously in the same Pd process, I "fake" it with block overlapping. For simplicity of writing video objects, each "block" of samples contains one frame of video data; this becomes the first (block size) parameter for the block~ object. The overlap is then set such that, at the current audio sampling rate, the right number of blocks (frames of video) are processed per second. The equation is:
overlap = pixels_per_frame / (sampling_rate / frames_per_second)
        = pixels_per_frame * frames_per_second / sampling_rate
Of course, both of these numbers (the block size and the overlap) have to be powers of two, so there's all kinds of round-off that makes this not always optimal. But anyway, that's the fundamental premise upon which everything else works.
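As a concrete illustration, here's a minimal sketch of that arithmetic in C (my own example, not code from the actual objects; the exact rounding strategy here is an assumption):

#include <stdio.h>

/* Round up to the next power of two, since block~ wants one. */
static unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void)
{
    const unsigned width = 320, height = 240;  /* frame geometry      */
    const unsigned fps = 15;                   /* target frame rate   */
    const unsigned sr = 44100;                 /* audio sampling rate */

    unsigned pixels = width * height;          /* 76800               */
    unsigned blocksize = next_pow2(pixels);    /* -> 131072           */
    /* overlap = blocksize / (sr / fps), rounded up to a power of two */
    unsigned overlap = next_pow2(blocksize * fps / sr);  /* ~44.6 -> 64 */

    printf("block~ %u %u\n", blocksize, overlap);
    return 0;
}

Because both numbers get rounded to powers of two, the effective frame rate only approximates the 15 fps target; that's the round-off I mentioned.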
A quick status report:
- Video4Linux input support. (I don't have access to other platforms, but GEM's Windows and SGI video-in support, which I believe it has, might be a good starting point.) I use a cheap Creative Labs USB webcam for everything and it works great.

- Video output uses the SDL library, because it's very portable and I didn't want to rely on something like OpenGL for output, which may be a bit of overkill in this case.

- Only very basic objects are implemented so far, though many more are in the works:
  - basic thresholding operations
  - a "blob" tracker, which finds the center of mass of an object, good for tracking its location (see the sketch after this list)
  - "video delay", which delays a video stream by a specified number of frames
  - snapshot, to store a single frame of video
  - image file I/O
  - colour conversion (RGB <-> HSV)
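For the curious, the center-of-mass computation at the heart of the blob tracker boils down to something like the following C (a sketch of the idea only, not the actual object's code; run a thresholding object first so that only the object you want to track contributes nonzero pixels):

/* Find the intensity-weighted center of mass of a greyscale frame.
   A sketch of the idea behind the blob tracker, not its actual code. */
void blob_center(const float *pix, int w, int h, float *cx, float *cy)
{
    double sum = 0.0, sx = 0.0, sy = 0.0;
    int x, y;

    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            float v = pix[y * w + x];   /* pixel intensity, 0..1 */
            sum += v;
            sx  += v * x;
            sy  += v * y;
        }
    if (sum > 0.0) {
        *cx = (float)(sx / sum);
        *cy = (float)(sy / sum);
    } else {                            /* empty frame: report the middle */
        *cx = w / 2.0f;
        *cy = h / 2.0f;
    }
}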
It's also important to note that my approach, using Pd audio streams for video data, means one can use any ~ object to process video data. For example, *~ can be used to control the brightness of a stream, and +~ mixes two streams. Visualizing the output of things like osc~ can be fun too, though the uses there are more limited.
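In effect, when a signal block holds a frame of pixels, *~ with a scalar is just doing this (my paraphrase of the idea, not Pd source):

/* What *~ amounts to when the signal block holds pixels:
   scale every pixel by a gain factor, i.e. a brightness control. */
void scale_frame(const float *in, float *out, int n, float gain)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = in[i] * gain;   /* gain < 1 darkens, gain > 1 brightens */
}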
There are example patches for colour tracking, motion tracking, and amount-of-motion sensing. All these work quite well on a P-III 700MHz laptop running Linux.
I've been extremely busy lately, but since there seems to be so much momentum here, I'll try to post some extremely-alpha "only-works-for-me" code by the end of the weekend. I'd love to hear comments and feedback on all this as well.
Glad to see the interest!
Michael Droettboom
mdboom@peabody.jhu.edu
410.625.7596
Computer Music Research
Peabody Conservatory of Music
Johns Hopkins University
hi michael,
this sounds very interesting to me with regard to our video project for jMax. maybe we can co-operate on this?

would be nice if you could contact me.
thanks,
chris
On Thu, 14 Jun 2001, Michael Droettboom wrote:
TECHNICAL DETAIL: My overall architecture is to pass video data between objects as if they were very large blocks of audio data. My video_in_rgb object, for instance, outputs three data streams: one each for red, green and blue.
All of this sounds great. I do have one question, though: why did you choose to implement your own data processing concept instead of using the pix objects from GEM?

Image data is inherently different from audio data, so the gain you get from being able to reuse the Pd signal processing objects (which are optimized for audio calculations) doesn't seem to be worth it.

Or, put the other way: what was it that you didn't like about the GEM way?
Guenter
First, let me explain (and perhaps apologize in advance) that part of my motivation for this project was just to see if it could be done, and for the thrill of doing it. So my "abandoning" of GEM may have been premature. However, looking back, I do feel there are some differences in my system that I like, though others' opinions may differ.
On Fri, 15 Jun 2001, guenter geiger wrote:
On Thu, 14 Jun 2001, Michael Droettboom wrote:
TECHNICAL DETAIL: My overall architecture is to pass video data between objects as if they were very large blocks of audio data. My video_in_rgb object, for instance, outputs three data streams: one each for red, green and blue.

All of this sounds great. I do have one question, though: why did you choose to implement your own data processing concept instead of using the pix objects from GEM?
Well, in a sense, I didn't really implement my own -- I'm using Pd's. But I do see your point.
Image data is inherently different from audio data, so the gain you get from being able to reuse the Pd signal processing objects (which are optimized for audio calculations) doesn't seem to be worth it.
I don't agree that image data is inherently different from audio data. It's only in the interpretation that they're different.
The most basic objects -- the ones that are useful to both audio and video processing -- such as +~ and *~ are not optimised for audio calculations. They're simply optimised for floats.
Or, put the other way: what was it that you didn't like about the GEM way?
My only complaint (and it's more of a preference) is that GEM's pix objects support multiple types of video data (RGB and grey). I feel it adds a good deal of flexibility to a system when all the data streams are essentially the same and completely interchangeable. In hindsight, I could get such behaviour from GEM by simply writing deinterleave and interleave objects (I couldn't find any, but I may be missing something).
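Such a deinterleave object would boil down to something like this, assuming GEM's usual interleaved 8-bit RGBA pix data on the input side (a sketch of mine, not GEM code; the float normalisation is my choice):

/* Split interleaved 8-bit RGBA pixels into separate float streams,
   normalised to 0..1.  A sketch of the proposed deinterleave object,
   not code from GEM itself. */
void deinterleave_rgba(const unsigned char *rgba, int npixels,
                       float *r, float *g, float *b)
{
    int i;
    for (i = 0; i < npixels; i++) {
        r[i] = rgba[4 * i + 0] / 255.0f;
        g[i] = rgba[4 * i + 1] / 255.0f;
        b[i] = rgba[4 * i + 2] / 255.0f;
        /* alpha (rgba[4 * i + 3]) is dropped here */
    }
}

An interleave object would do the inverse, clamping each float to 0..1 and packing it back into a byte.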
All my objects are fairly well modularized, and having just now revisited the GEM source code, it shouldn't be too much work to wrap them in the pix style.
hi guenter and the others!
just my thoughts about it:

we're currently implementing video in jMax, and I'm following a similar approach. What I do is split the 32 bits of a float into a union of 4 x 8 bits for RGBA, called pixel_t. These "flow around" like the normal audio data, but in a separate chain, called vdsp.
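i.e. something along these lines (the field names here are my shorthand, not necessarily the actual jMax declaration):

/* One pixel packed into the 32 bits of a float, so it can travel
   through the vdsp chain the way a sample travels through dsp.
   Field names are illustrative; the real jMax type may differ. */
typedef union {
    float f;                          /* the "sample" view  */
    struct {
        unsigned char r, g, b, a;     /* the 4 x 8 bit view */
    } c;
} pixel_t;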
The advantage is that you can do most of the calculations on a stream rather than on a whole picture. Imagine the following situation: one video source, some calculations, and maybe 4 effects (still on one video, of course). Most of the time nothing happens, but if you work on whole images, you have to do *all* the calculations on an entire image at once, spending a lot of CPU time *momentarily*. If you stream them instead, then for additions and the like you only have to calculate, let's say, 2048 pixels at a time (interleaved with a smaller number of audio samples). This way you can distribute the computation power needed across small vectors of pixels rather than a whole image (see the sketch below).
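Schematically, one vdsp tick over a 2048-pixel vector might look like this (using the pixel_t above; a hypothetical illustration of mine, the real jMax callbacks look different):

#define VEC_SIZE 2048

/* Add one channel with saturation so bright areas don't wrap around. */
static unsigned char sat_add(unsigned char x, unsigned char y)
{
    int s = x + y;
    return s > 255 ? 255 : (unsigned char)s;
}

/* Mix two pixel streams one small vector at a time instead of
   touching a whole frame at once.  Hypothetical vdsp-style routine. */
void vdsp_add(const pixel_t *in1, const pixel_t *in2, pixel_t *out)
{
    int i;
    for (i = 0; i < VEC_SIZE; i++) {
        out[i].c.r = sat_add(in1[i].c.r, in2[i].c.r);
        out[i].c.g = sat_add(in1[i].c.g, in2[i].c.g);
        out[i].c.b = sat_add(in1[i].c.b, in2[i].c.b);
        out[i].c.a = 255;              /* opaque result */
    }
}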
If you need calculations that only work on a whole image (like distortions and so on), you can always store a whole image in a buffer and apply the calculation there in the meantime.

Hope that cleared things up a little? (If anyone is interested, please contact me.)
greets,
chris