To Pd developers,
After years of worrying about it, I'm thinking it's time to add video support natively to Pd. My basic idea is to add a feature to the block~ object so that windows of DSP computation can be triggered from external messages. That way, video I/O objects could be designed to spit out frames or portions of frames on demand, and the user gets the ability to explicitly schedule how the video computations should be run.
(My favorite example of non-obvious scheduling of video computation is low-latency analysis of incoming video, where you can actually use the 1/60-second fields without having to wait the additional 1/60-th second for a complete frame to arrive.)
I think there will have to be some state added to the DSP chain mechanism so that objects explicitly designed to work on video streams can find out what part of the image they're looking at. This could be as simple as a global data structure describing the current vector's position in an image; existing tilde objects would simply ignore the information.
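To make that concrete: the descriptor could be as small as the following C sketch (all names here are invented for illustration; none of this is existing Pd code).

    /* hypothetical sketch: a global record describing where in an image
       the current DSP vector lies.  video-aware perform routines consult
       it; ordinary tilde objects never look at it. */
    typedef struct _vidinfo
    {
        int v_width;    /* image width in pixels */
        int v_height;   /* image height in pixels */
        int v_row;      /* row in which the current vector starts */
        int v_col;      /* column at which the current vector starts */
        int v_frame;    /* running frame counter */
    } t_vidinfo;

    extern t_vidinfo pd_vidinfo;  /* set by the scheduler before each vector */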
To begin with at least, I'm hoping to be able to do all video operations except storage using floating-point numbers, thereby re-using the usual tilde objects. The disadvantage of this approach is that, if you want to "sample" an image, to get decent cache behavior you'd want the possibility of storing it in a data-reduced way, as 8-bit integers or perhaps using YUYV packing. (for example, a 512x768 color video frame takes almost 5MB to store in floating point, but only about 0.7 MB in 8-bit YUYV.)
So there will probably have to be a suite of new objects for storing 2D arrays in fixed-point formats, variously trading off memory compaction against the processing time needed to get the data in and out. There will probably also need to be a choice of interpolation strategies. Probably the design can look like the table/tabwrite/tabread/tabread4/tabwrite~/tabread~/tabread4~ objects.
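As a sketch of what a "tabread4"-style lookup into a packed 8-bit image table could look like (function and argument names are invented; the interpolation is the same 4-point polynomial that Pd's tabread4~ applies to audio):

    /* hypothetical: read from one row of an 8-bit image table, widening
       to float only at the moment of reading */
    static float imgtab_read4(const unsigned char *row, int width, float findex)
    {
        int index = (int)findex;
        float frac, a, b, c, d, cminusb;
        if (index < 1)
            index = 1, frac = 0;
        else if (index > width - 3)
            index = width - 3, frac = 1;
        else frac = findex - index;
        a = row[index-1]; b = row[index]; c = row[index+1]; d = row[index+2];
        cminusb = c - b;
        /* same 4-point scheme tabread4~ uses */
        return (b + frac * (cminusb - 0.1666667f * (1.0f - frac) *
            ((d - a - 3.0f * cminusb) * frac + (d + 2.0f * a - 3.0f * b))));
    }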
Also like "table" objects I want to publish an API so that pdp and gem can read and write into image storage buffers.
I'll probably base the actual video I/O objects on the way Tom Schouten did it in pdp - perhaps pdp would then be able to use Pd's video objects directly if I can get the design just right.
ideas and opinions welcome!
cheers Miller
hello miller,
On Saturday, 27 May 2006 at 19:40, Miller Puckette wrote:
To Pd developers,
[...snip...]
To begin with at least, I'm hoping to be able to do all video operations except storage using floating-point numbers, thereby re-using the usual tilde objects. The disadvantage of this approach is that, if you want to "sample" an image, to get decent cache behavior you'd want the possibility of storing it in a data-reduced way, as 8-bit integers or perhaps using YUYV packing. (for example, a 512x768 color video frame takes almost 5MB to store in floating point, but only about 0.7 MB in 8-bit YUYV.)
So there will probably have to be a suite of new objects for storing 2D arrays in fixed-point formats, variously trading off memory compaction against the processing time needed to get the data in and out. There will probably also need to be a choice of interpolation strategies. Probably the design can look like the table/tabwrite/tabread/tabread4/tabwrite~/tabread~/tabread4~ objects.
uhm .... are you sure about that? first, it doesn't make much sense to use floats on images. signed 16-bit integers would be _more_ than sufficient to represent a colour channel in a pixel. even film footage doesn't have that dynamic range.
using floats would make _any_ realtime processing of video almost impossible, at least on current machines. it's just too much data if you really want to do some f/x and stuff with it.....
secondly, what do you gain by _storing_ them in the native format, but _using_ them as floats? exactly nothing; instead you get the big penalty of converting back and forth each time. add to that the fact that ram prices are really low, so there is no problem anymore with putting 2 gigabytes or more into your computer.
instead, pd could pass a void* pointer to the data, which the processing object casts to whatever format it has, while at the same time supplying an additional field, like a struct streaminfo { width, height, current_pos, format, whatever.... } that gets evaluated then .... this would also allow streams of different sizes to be used. imagine a stream of 768x576 video where you add a 320x240 image ....
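In C, that suggestion might look roughly like this (a sketch only; the field and type names are invented here):

    /* a chunk of stream data travels as a void* plus a descriptor;
       each processing object casts the payload to the declared format */
    typedef enum { FMT_YUV_U8, FMT_RGB_S16, FMT_RGB_F32 } t_pixformat;

    struct streaminfo
    {
        int width;           /* frame width in pixels */
        int height;          /* frame height in pixels */
        int current_pos;     /* where this chunk sits within the frame */
        t_pixformat format;  /* how to interpret the payload */
    };

    void process_chunk(void *data, const struct streaminfo *info, int n)
    {
        if (info->format == FMT_RGB_S16)
        {
            short *p = (short *)data;   /* cast per declared format */
            int i;
            for (i = 0; i < n; i++)
                p[i] >>= 1;             /* e.g. halve the brightness */
        }
    }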
[...snip...]
that was my 0.02 euro-cent .... ;)
greets,
chris
I used to think the same thing, that floating-point computation would never be feasible on video... but Mathieu has demonstrated that, if you schedule the computations carefully, you can do blindingly fast video crunching in floating-point. Trouble with it is simply that it takes much more storage space, so you end up using a lot of memory bandwidth (a read or write to external memory costs the same as tens or hundreds of CPU cycles these days).
But anyway, I'll probably make it conditionally compilable in other formats so we can just benchmark it and find out for sure.
cheers Miller
On Sat, May 27, 2006 at 08:02:12PM +0200, Christian Klippel wrote:
[...snip...]
hello,
On Saturday, 27 May 2006 at 21:03, Miller Puckette wrote:
I used to think the same thing, that floating-point computation would never be feasible on video... but Mathieu has demonstrated that, if you schedule the computations carefully, you can do blindingly fast video crunching in floating-point. Trouble with it is simply that it takes much more storage space, so you end up using a lot of memory bandwidth (a read or write to external memory costs the same as tens or hundreds of CPU cycles these days).
uhm.... blindingly fast? i don't believe that .... unless you use sse/sse2 for the floating-point ops, and even then you would be slower. the alu for integer math is much quicker and has lower latency than the fpu. also, on a p4 for example, you actually have 2 alu's for integer stuff ....
to put it in other words: how come demuxing/converting a lot of audio channels (for example from a hammerfall) takes a considerable amount of cpu, if using floats is blindingly fast? and even with 16 channels of audio you don't have as much data as with a simple full pal-sized video ....
But anyway, I'll probably make it conditionally compilable in other formats so we can just benchmark it and find out for sure.
why conditional? why not runtime-selectable, per branch/subpatch? i mean, there are quite a few formats actively in use, like rgb(a), yuv, hsb ...
greets,
chris
But anyway, I'll probably make it conditionally compilable in other formats so we can just benchmark it and find out for sure.
why conditional? why not runtime-selectable, per branch/subpatch? i mean, there are quite a few formats actively in use, like rgb(a), yuv, hsb ...
greets,
chris
The trouble I see with supporting a bunch of formats like yuv is that you end up writing separate code for each format... and I'm trying hard to keep Pd as small and clean as possible. I think that if the API is carefully enough designed, it should be possible to use external libraries to handle specific video formats. This is a different issue from what data type to use for the individual numbers (float or char, for example).
I think that allowing a compile-time typedef to set the data type of a single video sample should add only a small amount of code, and then if it turns out that one format really is better than the others, there's no need to support run-time selection. On the other hand, if it turns out that each sample format has advantages and disadvantages, then it might be necessary to go all out and support polymorphic video sample types. That would be ugly and I'm hoping to avoid it if at all possible.
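A minimal sketch of that typedef approach, by analogy with Pd's t_sample (the macro name here is invented):

    #ifdef VIDEOSAMPLE_8BIT
    typedef unsigned char t_videosample;  /* packed build, for benchmarking */
    #else
    typedef float t_videosample;          /* default: reuse the tilde machinery */
    #endif

Every video-aware object would then process buffers of t_videosample, and a benchmark build would just recompile with the other definition.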
cheers M
On Sat, 27 May 2006, Miller Puckette wrote:
I think that allowing a compile-time typedef to set the data type of a single video sample should add only a small amount of code, and then if it turns out that one format really is better than the others, there's no need to support run-time selection. On the other hand, if it turns out that each sample format has advantages and disadvantages, then it might be necessary to go all out and support polymorphic video sample types. That would be ugly and I'm hoping to avoid it if at all possible.
It has already been determined that each of many sample formats has advantages and disadvantages. What you need to do is pick some number of sample formats and pretend that you're giving the user enough choice.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On Sat, 27 May 2006, Miller Puckette wrote:
I used to think the same thing, that floating-point computation would never be feasible on video... but Mathieu has demonstrated that, if you schedule the computations carefully, you can do blindingly fast video crunching in floating-point.
I don't know where I demonstrated that, 'cause I never implemented SSE1 support in GridFlow, and I don't know what else would qualify as "blindingly". Even though GridFlow has supported float images since jan.2003, I didn't use the feature until this year, when I started doing video FFTs with FFTW. If you want an example of something blindingly fast, look at FFTW (which uses both SSE1 and great algorithms).
The gap between the speed of floating-point and integer computations has been shrinking over the years, but in the last 10 years it has also expanded and shrunk back and re-expanded and... it's difficult to know where things are heading. I'm sticking with integers as much as I can, with small integers whenever it's worth it, and with floats when I have no other option (e.g. FFTW).
much more storage space, so you end up using a lot of memory bandwidth (a read or write to external memory costs the same as tens or hundreds of CPU cycles these days).
It's difficult to really measure it, because when such a read happens, a whole chunk is read at once, some of which can be useful later on, if the cache keeps it long enough, etc. And then, there are often a dozen instructions running at once in various stages of completion, each taking a dozen ticks, all documented as single-tick instructions because engineers figure a dozen divided by a dozen is one. Analysis of computation time really can't be done the same as it was being done in 1990.
But anyway, I'll probably make it conditionally compilable in other formats so we can just benchmark it and find out for sure.
Yes, I approve that.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On Sat, 27 May 2006, Mathieu Bouchard wrote:
But anyway, I'll probably make it conditionally compilable in other formats so we can just benchmark it and find out for sure.
Yes, I approve that.
I mean I approve the benchmarking part. Making things conditionally compilable is prolly not a good idea unless you have a really good reason. I agree with CK.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
Hi Miller,
To begin with at least, I'm hoping to be able to do all video operations except storage using floating-point numbers,
Does this also mean that you will finally be accepting the SIMD extensions? Otherwise i can't figure out how video could realistically work, cpu-wise.
Concerning fat externals, i would prefer a more general solution to tackle the variety of architectures available... i386 vs. PIII vs. P4 vs. AMD64 vs. G3 vs. G4 vs. G5 / win vs. linux vs. macos. It would be extremely nice and convenient if an architecture-specific optimized binary could be loaded from a really-fat external bundle, as suggested several months ago.
best greetings, Thomas
On Sat, 27 May 2006, Miller Puckette wrote:
To begin with at least, I'm hoping to be able to do all video operations except storage using floating-point numbers, thereby re-using the usual tilde objects. The disadvantage of this approach is that, if you want to "sample" an image, to get decent cache behavior you'd want the possibility of storing it in a data-reduced way, as 8-bit integers or perhaps using YUYV packing. (for example, a 512x768 color video frame takes almost 5MB to store in floating point, but only about 0.7 MB in 8-bit YUYV.)
I sort of have the same feelings as Christian and others. If you do video processing, you probably want it to be as fast as possible, because it still takes a considerable amount of a modern CPU's power. Considering that most building blocks will be different from the ones that are used in audio processing, you will have to reimplement them anyhow, and the best would be to implement them as fast as possible.
[...snip...]
I'll probably base the actual video I/O objects on the way Tom Schouten did it in pdp - perhaps pdp would then be able to use Pd's video objects directly if I can get the design just right.
I am not sure if I understand what you mean by video I/O objects, but what I do understand (output to screen and input from video sources) is the part of pdp that I think is not well designed. I find it strange having to use pdp_xv or pdp_glx as the output object, depending on what hardware I have. This decision should be made by the system, or changeable by messages, not hardcoded in a patch.
ideas and opinions welcome!
If I understand correctly, what you want to do is line-by-line processing of images. I am not a video specialist, but this might make some algorithms a lot harder to implement. Do you have some ideas about how a convolution with a matrix would look, for the user and in the internal implementation?
And then, assuming that you have to use a compact representation of the image in memory, because otherwise cache behaviour will destroy performance completely, you would also have to pack and unpack the image data at every step for algorithms that are based on different frames in time.
Well, sorry if I didn't really get what it is all about, but I am also trying to figure out why we need yet another video extension to pd, instead of fixing the ones that exist.
Couldn't we just incorporate pdp and make it run on windows and macosx ?
Cheers, Günter
hello all,
On Sunday, 28 May 2006 at 16:26, geiger wrote:
[...snip...]
If I understand correctly, what you want to do is line-by-line processing of images. I am not a video specialist, but this might make some algorithms a lot harder to implement. Do you have some ideas about how a convolution with a matrix would look, for the user and in the internal implementation?
when transferring only parts of an image, such objects need to buffer the incoming image, process it when complete, and then send it out in chunks again. depending on the needed y range, that adds up to a whole frame of delay.
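as a sketch of that buffering (all names here are invented):

    #include <string.h>

    typedef struct _framebuf
    {
        float *pixels;      /* width * height accumulation buffer */
        int width, height;
        int filled;         /* samples received so far */
    } t_framebuf;

    /* returns 1 each time a complete frame has been collected and processed */
    int framebuf_addchunk(t_framebuf *x, const float *chunk, int n)
    {
        int total = x->width * x->height;
        if (x->filled + n > total)
            n = total - x->filled;        /* don't overrun the frame */
        memcpy(x->pixels + x->filled, chunk, n * sizeof(float));
        x->filled += n;
        if (x->filled < total)
            return 0;                     /* frame not complete yet */
        /* ... run the whole-frame operation (e.g. the big convolution) ... */
        x->filled = 0;
        return 1;
    }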
otoh, when only processing small chunks of a stream at a time, which you can do with quite a few operations/effects, you can speed up processing a lot, because it is very likely that the processed data stays in the cache throughout the processing chain.
an ideal system would allow for different methods/sizes at once. that way you can do, for example, big convolutions on a whole-frame basis, while at the same time doing very performant mixing, blending, etc... of streams in chunks, where applicable.
i would say that most video stuff can be done in blocks/lines/however-you-call-it .... even motion tracking is possible that way.
greets,
chris
On Sun, 28 May 2006, Christian Klippel wrote:
otoh, when only processing small chunks of a stream at a time, which you can do with quite a few operations/effects, you can speed up processing a lot, because it is very likely that the processed data stays in the cache throughout the processing chain. an ideal system would allow for different methods/sizes at once. that way you can do, for example, big convolutions on a whole-frame basis, while at the same time doing very performant mixing, blending, etc... of streams in chunks, where applicable.
This sounds a lot like GridFlow except for the fact that GridFlow isn't particularly fast. A well-rewritten version of GridFlow could be the fastest video-plugin in the west.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On May 28, 2006, at 4:26 PM, geiger wrote:
On Sat, 27 May 2006, Miller Puckette wrote:
To begin with at least, I'm hoping to be able to do all video operations except storage using floating-point numbers, thereby re-using the usual tilde objects. The disadvantage of this approach is that, if you want to "sample" an image, to get decent cache behavior you'd want the possibility of storing it in a data-reduced way, as 8-bit integers or perhaps using YUYV packing. (for example, a 512x768 color video frame takes almost 5MB to store in floating point, but only about 0.7 MB in 8-bit YUYV.)
I sort of have the same feelings as Christian and others. If you do video processing, you probably want it to be as fast as possible, because it still takes a considerable amount of a modern CPU's power. Considering that most building blocks will be different from the ones that are used in audio processing, you will have to reimplement them anyhow, and the best would be to implement them as fast as possible.
Since we already have GEM and PDP, it seems that there should be more leeway for experimentation in terms of a floating point video format for Pd. I think that it would at the very least be an interesting experiment to try to have a unified system for handling media data. Yes it will be slow, but in a couple of years, it could be something really unique.
If you need fast, you could use GEM or PDP in the meantime. Speaking of, how about releasing that PDP port for Windows?
.hc
[...snip...]
________________________________________________________________________
"I have the audacity to believe that peoples everywhere can have three meals a day for their bodies, education and culture for their minds, and dignity, equality and freedom for their spirits." - Martin Luther King, Jr.
On Jun 3, 2006, at 3:27 AM, Yves Degoyon wrote:
ola,
Speaking of, how about releasing that PDP port for Windows?
.hc
please no.. thank you.
sevy
That's where we have differing opinions on free software. I think that free software should be totally free, and I think that the more free software is available for Windows, the more we chip away at Microsoft. Plus, now there is ReactOS, which is a free Windows.
Luckily for those of us who agree with me, PDP is released under the GNU GPL, which means that it is free software, and it's free to be ported to Windows.
.hc
________________________________________________________________________
Man has survived hitherto because he was too ignorant to know how to realize his wishes. Now that he can realize them, he must either change them, or perish. -William Carlos Williams
..on Sat, Jun 03, 2006 at 02:21:47PM +0200, Hans-Christoph Steiner wrote:
[...snip...]
That's where we have differing opinions on free software. I think that free software should be totally free, and I think that the more free software is available for Windows, the more we chip away at Microsoft. Plus, now there is ReactOS, which is a free Windows.
if only it were that simple ;)
i'm not *ideologically opposed* to Windows ports of Linux software, so much as disappointed whenever it occurs.
porting software to a platform is supporting that platform - you're affirming that the port target, in this case Windows, is deserving of the software. you're also widening the possible user base on that platform, enriching it and increasing the possibility of user investment.
to say it's 'chipping away at microsoft' is stretching it a bit i think, 'Jitter' perhaps. i know several artists that have installed Linux to run PDP, something they seem to be pretty happy about now.
some say ports of popular mail clients and office software may make sense, in that the user becomes familiar with software that will run on a non-proprietary platform, making it easier for them to switch later. this may work in the 'Enterprise', where cost-cutting is the primary driver of free-OS adoption and application criteria are not especially diverse.
in the case of multimedia applications, i have yet to see that hypothesis pay off. where digital artists are concerned, many seem to prioritise application diversity over anything else. because there is a *comparative* lack of multimedia software on Linux (though more than enough for me), many cling to proprietary platforms like Windows and OSX, choosing to mix proprietary multimedia applications (usually stolen) with those of free software. so, because Linux has fewer 'artists', less software is made only for Linux, and so fewer artists use it and commit to free software as a whole. chicken, meet egg..
my loose change,
julian
On Jun 3, 2006, at 4:54 PM, Julian Oliver wrote:
[...snip...]
I think you overlook an aspect of free software that is important to me: the freedom to choose which software you want to use. Who am I to tell anyone they should not use a given piece of software? Some people like Windows so much that they are making a free version of it: http://reactos.org More power to them.
I am a long-time, staunch advocate of free software, but I have to say that I am not a big fan of GNU/Linux. When you have Linus Torvalds dissing GNOME because they are trying to make software that is usable by non-hackers, that is really what stops people from using GNU/Linux. That attitude, and those design ideas, keep GNU/Linux to a small audience. There is no problem there; there should be choice. What about FreeBSD, OpenBSD, NetBSD, Darwin, ReactOS, Plan9, FreeDOS, xMach, OpenSolaris, etc.?
.hc
________________________________________________________________________
As we enjoy great advantages from inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously. - Benjamin Franklin
On Sat, 3 Jun 2006, Hans-Christoph Steiner wrote:
Since we already have GEM and PDP, it seems that there should be more leeway for experimentation in terms of a floating point video format for Pd.
Since we already have GridFlow, which already supports floating point video, [insert conclusion here].
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
GEM already has two objects for turning audio into video and back again - pix_sig2pix and pix_pix2sig. How is this proposal different from what those objects do already? Would changing the block~ object and using the pix_ objects implement the system as proposed?
cgc
On 5/27/06, Miller Puckette <mpuckett@imusic1.ucsd.edu> wrote:
To Pd developers,
After years of worrying about it, I'm thinking it's time to add video support natively to Pd. My basic idea is to add a feature to the block~ object so that windows of DSP computation can be triggered from external messages. That way, video I/O objects could be designed to spit out frames or portions of frames on demand, and the user gets the ability to explicitly schedule how the video computations should be run.
excellent question... it might prove that all I'd need, besides the changes to block~, would be to write a larger suite of objects based on pix_sig2pix and pix_pix2sig (for a wide variety of access styles and interpolations).
By the way, has anyone ever fixed pix_texture so that it can take non-power-of-two image sizes? (I assume it's still true that pix_texture is the fastest way to display an image - if that's still true, it would be important to support at least arbitrary rectangles...)
thanks Miller
On Sat, Jun 03, 2006 at 09:28:52PM -0500, chris clepper wrote:
[...snip...]
On Sat, 3 Jun 2006, Miller Puckette wrote:
By the way, has anyone ever fixed pix_texture so that it can take non-power-of-two image sizes?
Yes, it's been fixed a few years ago.
Has anyone fixed pd so that it takes non-power-of-two block sizes?
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
That's the main thing I'm thinking about doing now. Sounds like my video trick might come down to 50 lines of code now... :)
M
On Sun, Jun 04, 2006 at 01:12:32AM -0400, Mathieu Bouchard wrote:
[...snip...]
On 6/3/06, Miller Puckette <mpuckett@imusic1.ucsd.edu> wrote:
excellent question... it might prove that all I'd need, besides the changes to block~, would be to write a larger suite of objects based on pix_sig2pix and pix_pix2sig (for a wide variety of access styles and interpolations).
One of the more time-consuming (and potentially frustrating) aspects of video systems is dealing with the various APIs for reading and writing frames. Using the existing libraries to handle those chores would save some redundant effort.
Can you go into more detail about the revisions to block~? My rough calculations put a 720x480 YCbCr image at about 64 times the data of a single 96khz audio signal. RGBA is double that.
By the way, has anyone ever fixed pix_texture so that it can take non-power-of-two image sizes? (I assume it's still true that pix_texture is the fastest way to display an image - if that's still true, it would be important to support at least arbitrary rectangles...)
Rectangle textures work on Linux, Mac and Windows now. Right now all three handle RGBA, but only OSX has native support for YCbCr (the other two require software conversion to RGBA before upload).
Hmm, revisions to block~.
I think it would suffice to offer a flag, as in "block~ -manualsync 234 ...", that would declare the local block size to be, say, 234 samples and require that one bang the block~ object to run one block of 'audio' computation. I'd still require that sub-blocks that aren't "-manualsync" be a power-of-two multiple or quotient of the parent, to keep the inlet~/outlet~ code manageable; but if "-manualsync" were on, I'd simply disable inlet~/outlet~ as undefined behavior.
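In terms of Pd's existing DSP-chain machinery, the bang handler might boil down to something like this sketch, loosely modeled on dsp_tick() (the t_block fields here are invented, not block~'s real ones):

    #include "m_pd.h"

    typedef t_int *(*t_perfroutine)(t_int *args);   /* as in m_pd.h */

    typedef struct _block
    {
        /* ... block~'s existing fields ... */
        int x_manualsync;     /* invented: set when -manualsync was given */
        t_int *x_subchain;    /* invented: this sub-patch's DSP chain */
    } t_block;

    static void block_bang(t_block *x)
    {
        t_int *ip;
        if (!x->x_manualsync)
            return;                       /* only when the new flag is on */
        /* each chain entry is a perform routine followed by its arguments;
           calling it returns a pointer to the next entry (NULL at the end) */
        for (ip = x->x_subchain; ip; )
            ip = (*(t_perfroutine)(*ip))(ip);
    }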
One thing I'll be concerned about eventually is getting video input into a storage buffer without the need for an intermediary. I don't know whether Gem's storage buffers (as in pix_multiimage) are easily enough visible to C code to work in this way. In other words, if pix_multiimage just instructs openGL to read a file into its own buffer somewhere, then it's not easily available for me to write random-access interpolating "tabread4~" style objects that use it.
cheers Miller
On Sun, Jun 04, 2006 at 01:23:36AM -0500, chris clepper wrote:
[...snip...]
hi miller,
Hmm, revisions to block~.
if you're looking into block~ again, the source of this bug might be somewhere in that area: http://lists.puredata.info/pipermail/pd-dev/2004-08/002682.html
tim
-- TimBlechmann@gmx.de ICQ: 96771783 http://www.mokabar.tk
Silence is only frightening to people who are compulsively verbalizing. William S. Burroughs
On Sun, 4 Jun 2006, Miller Puckette wrote:
One thing I'll be concerned about eventually is getting video input into a storage buffer without the need for an intermediary. I don't know whether Gem's storage buffers (as in pix_multiimage) are easily enough visible to C code to work in this way. In other words, if pix_multiimage just instructs openGL to read a file into its own buffer somewhere, then it's not easily available for me to write random-access interpolating "tabread4~" style objects that use it.
pix objects are not related to OpenGL except for [pix_texture].
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On Sun, 4 Jun 2006, chris clepper wrote:
Can you go into more detail about the revisions to block~? My rough calculations put a 720x480 YCbCr image at about 64 times the data of a single 96khz audio signal. RGBA is double that.
How do you compute that?
DV-NTSC is 720 px * 480 px * 30 Hz * 2 samples/px^2 = 20736000 samples/s
DV-PAL is 720 px * 576 px * 25 Hz * 2 samples/px^2 = 20736000 samples/s
96 kHz mono is 96000 Hz * 1 sample = 96000 samples/s
and 20736000/96000 = exactly 216 = 6*6*6 but closest powers of 2 are 2^7 = 128 and 2^8 = 256. (comparing with stereo sound doesn't change the relative closeness of powers of two)
add to this that I want to use the framesizes I want, the screen aspect ratios I want, and the framerates I want, and I think that pretty much everyone wants that.
PS: I don't understand what you're saying: if you really want to compare one single frame of video with sound, you have to state the duration of the sound that you compare it with. If the duration is dictated by the standard video frame rates, you'll arrive at the same figures that I got above.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On 6/4/06, Mathieu Bouchard <matju@artengine.ca> wrote:
[...snip...]
I forgot to make the video signal 4 bytes per channel in my calculation. If you compare 4-byte audio to single-byte-per-channel video, then it is 54 times the data. 64 is the next power of two.
Your numbers are the right ones to use.
Miller Puckette wrote:
To Pd developers,
[...]
basically i very much like the idea, for the reasons hans has already presented (e.g. since there are already integer-based pixel processing libraries, it would be far more interesting to have this "new" system be floating-point based, even if this is not yet(!) promising with respect to cpu power... times will come when everything is faster....)
what is far more interesting to me right now is the fact that "dsp on demand" seems to be the way to get non-realtime audio processing into pd (rendering a soundfile as fast as possible).
that we could do video with it is merely an added bonus.
as for the format: imo, the data itself would need to be floating point (if we don't want to re-code each and every ~-object, which would make the whole idea pointless); i also think the data should be 1 channel per signal-stream. data storage could be handled by a number of specialized object families (like the iem16 library, which stores signals (highly inefficiently!) in 16-bit tables/delay-lines). (e.g.) tables could just be an abstract interface to storage of "any" type (scalars, symbols, floats, integers, morzels); if this is done properly it would also fix the 64bit problems we currently have (so it's a 2-for-1 bargain)
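such an abstract table interface might look like this sketch (all names invented; elements widen to double on read and narrow on write):

    #include <stddef.h>

    typedef enum { ELEM_FLOAT32, ELEM_INT16, ELEM_UINT8 } t_elemtype;

    typedef struct _anytable t_anytable;   /* opaque; layout depends on type */

    t_anytable *anytable_new(const char *name, t_elemtype type, size_t n);
    double anytable_get(const t_anytable *x, size_t i);    /* widen on read */
    void anytable_set(t_anytable *x, size_t i, double v);  /* narrow on write */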
fmga.sdr IOhannes