Hi,
I get some strange behaviour with the attached patch which is basically:
[gemhead]
 |
 |  [0 0 0 1 1 0 0 1 0 1 0 1 0 0 1 1 (
 |  |
[pix_set]
 |
 |  \        // numbox attached to the second i.e. "common" inlet
[pix_gain 1]
 |
[pix_texture]
 |
[rectangle]
- When I send a new value to pix_gain's inlet, the output is not updated until I re-send the pixel color values to pix_set.
- If I modify the pix_gain object and change its creation argument (which means the object is recreated), the pix_set seems to be reset as well: the rectangle resets to white, as if the pix_set object was recreated. I have to re-send the pixel values to pix_set. After doing this, the pixel colors are as expected and the gain is applied correctly.
Why doesn't the pix_gain update its output when a new value of the gain is applied?
Why does pix_set reset when I recreate the pix_gain object?
Is all this expected behaviour, and am I missing something?
thanks m.
On Mon, 31 Jan 2011, Matteo Sisti Sette wrote:
I get some strange behaviour with the attached patch which is basically:
- When I send a new value to pix_gain's inlet, the output is not
updated until I re-send the pixel color values to pix_set.
- If I modify the pix_gain object and change its creation argument (which means the object is recreated), the pix_set seems to be reset as well: the rectangle resets to white, as if the pix_set object was recreated. I have to re-send the pixel values to pix_set. After doing this, the pixel colors are as expected and the gain is applied correctly. Why doesn't the pix_gain update its output when a new value of the gain is applied? Why does pix_set reset when I recreate the pix_gain object? Is all this expected behaviour, and am I missing something?
BTW, if anyone knows where to find the documentation for the principles involved in this, I'd like to know.
In any case, such complications are one good reason to use GridFlow. Seriously, in GridFlow, an image is more similar to a normal message, and the data actually flows from outlet to inlet, whereas in GEM, the gem_state messages are little more than bangs, or more accurately, references to global tables (thus the data doesn't flow much).
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC
On 01/31/2011 08:55 PM, Mathieu Bouchard wrote:
In any case, such complications are one good reason to use GridFlow.
Well, I may be wrong, but I am under the impression that such complications are really just GEM bugs, rather than intrinsic to the way GEM is conceived. That is, a pix object is expected to "inform" the objects below it when the pix is modified, and in these cases it is simply failing to do so.
Seriously, in GridFlow, an image is more similar to a normal message, and the data actually flows from outlet to inlet,
Yeah I know, and I LOVE that, ideally.
However by trying it a little bit I got the impression that any real-life image processing of even a minimum of complexity is completely unfeasible in practice because it immediately becomes too slow. Or isn't it so?
Is it possible, for example, to do movement detection and blob detection in GridFlow with an efficiency even remotely comparable to what you can achieve with [pix_movement] and [pix_blob]?
thanks m.
On Mon, 31 Jan 2011, Matteo Sisti Sette wrote:
However by trying it a little bit I got the impression that any real-life image processing of even a minimum of complexity is completely unfeasible in practice because it immediately becomes too slow. Or isn't it so? Is it possible, for example, to do movement detection and blob detection in gridflow with an efficiency even remotely comparable to that you can achieve with [pix_movement] and [pix_blob]?
I don't know, I never tried those.
Are you trying it with an image size much larger than what you really need to analyse ? If so, why do you do it ?
The pixel throughput isn't everything... it's what you do with it.
And measuring GridFlow in terms of what GEM allows you to do won't reveal what GridFlow can offer you that GEM can't.
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC
On 02/27/2011 06:43 PM, Mathieu Bouchard wrote:
On Mon, 31 Jan 2011, Matteo Sisti Sette wrote:
However by trying it a little bit I got the impression that any real-life image processing of even a minimum of complexity is completely unfeasible in practice because it immediately becomes too slow. Or isn't it so? Is it possible, for example, to do movement detection and blob detection in gridflow with an efficiency even remotely comparable to that you can achieve with [pix_movement] and [pix_blob]?
I don't know, I never tried those.
Are you trying it with an image size much larger than what you really need to analyse ?
No I wasn't, but I haven't really tried blob detection. I just tried some very basic image processing such as mixing two images, changing the colors, you know, the basic stuff you can find in the example patches, and the CPU was already very loaded with relatively small images.
The pixel throughput isn't everything... it's what you do with it.
Of course. I'm sure there are a lot of fascinating things that can be done with low (or not-so-high) pixel throughput.
And by the way, I didn't mean to criticize GridFlow in any way. I was just expressing the fact that the power of manipulating raw pixels with matrices in a patching environment such as Pd strikes me as "frustratingly attractive", where the frustration comes from the fact that you can't achieve enough efficiency to manipulate images of "reasonable size" (where of course reasonable means reasonable to me in a given context, e.g. a 1024x768 image to be projected). But I think the limitation is mostly intrinsic to doing it in an "interpreted" environment. I mean, I don't think GridFlow could be much faster than it is, or could it?
And measuring GridFlow in terms of what GEM allows you to do won't reveal what GridFlow can offer you that GEM can't.
Of course, mine was not meant to be a fair comparison ;)
Something that would be great would be a "pd/gridflow-like" patching environment that would compile your patch into shaders and have the GPU do the computation - but in a completely invisible way...
Do you know if something like that already exists?
On Sun, Feb 27, 2011 at 19:35, Matteo Sisti Sette <matteosistisette@gmail.com> wrote:
No I wasn't, but I haven't really tried blob detection. I just tried some very basic image processing such as mixing two images, changing the colors, you know, the basic stuff you can find in the example patches, and the CPU was already very loaded with relatively small images.
I was just expressing the fact that the power of manipulating raw pixels with matrices in a patching environment such as Pd strikes me as "frustratingly attractive", where the frustration comes from the fact that you can't achieve enough efficiency to manipulate images of "reasonable size" (where of course reasonable means reasonable to me in a given context, e.g. a 1024x768 image to be projected). But I think the limitation is mostly intrinsic to doing it in an "interpreted" environment.
I also feel the same issue: mixing 3 videos together through [gemframebuffer] with alpha effects brings my CPU up to 90% utilization (Core 2 Duo, 1.66 GHz), but I still need a lot of effects and stuff to be added.
If only puredata just used YUV by default internally for _everything_, so at least it would be a bit faster; and I think a lot more work can be done to get better video performance. Maybe someone on the list has a nice list of semi-universal tips to make video patches run faster?
Or maybe for video puredata is just 'almost' fast enough with computers nowadays... I can't imagine ever playing with GEM and pix_ stuff on a Pentium 3, for example. What I mean is: has puredata only recently become a viable option for the stuff that you and I want to do with it?
Something that would be great would be a "pd/gridflow-like" patching environment that would compile your patch into shaders and have the GPU make the computation - but in a completely invisible way ....
Do you know if something like that already exists?
Lol, nice holy grail, just sprinkle some fairydust over the patch and it will run on the GPU ;)
You can run GLSL shaders, but you need the know-how to make them. There are some people busy making nice collections of premade shader effects that you can just download and use. Soon I'll have some capable hardware for this and will let you know how it works out.
There is of course the completely separate project of 'vvvv', which is also a dataflow language like puredata, but meant for video. Sadly this project is Windows-only afaik.
Soon I'll get some faster computers to develop more patches, so hopefully just throwing a lot more CPU against my problems will make them go away ;)
-- buZz http://puikheid.nl/
On Mon, Feb 28, 2011 at 01:26, Bastiaan van den Berg <buzz@spacedout.nl> wrote:
I also feel the same issue: mixing 3 videos together through [gemframebuffer] with alpha effects brings my CPU up to 90% utilization (Core 2 Duo, 1.66 GHz), but I still need a lot of effects and stuff to be added.
Oh, just for reference, the patch I used for this : http://etc.servehttp.com/gemframebuffer-is-working.png
On Mon, 28 Feb 2011, Bastiaan van den Berg wrote:
If only puredata just used YUV by default internally for _everything_
PureData itself doesn't support video. Video is provided by one of three frameworks, GEM, PDP and GridFlow.
so at least it would be a bit faster,
YUV is different in each of them. What is called YUV in GridFlow is YUV-444, no macropixels. GEM has YUV-422, PDP has YUV-420. This makes a lot of difference in efficiency and effective resolution.
My position on it is that chroma often matters a lot, and when it does, chroma has to have a high resolution. In that case, luma has to be set to an excessively high resolution as well, which is wasteful.
So, when chroma is secondary, it looks like YUV-420 does 50 % savings, YUV-422 does 33 % savings, and YUV-444 does no savings.
But in a chroma-centric application, YUV-444 and RGB do no waste, YUV-420 adds 100 % waste (this means 50 % of the signal is not used), and for YUV-422, it depends on whether one can use anamorphic pixel sizes or not.
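To make those percentages concrete, here is a small generic C sketch (an illustration only, not code from GEM/PDP/GridFlow) that computes the per-frame byte counts for an arbitrary 640x480 image:

    /* Byte counts per frame for a 640x480 image under each packing.
       The frame size is arbitrary; only the ratios matter. */
    #include <stdio.h>

    int main(void) {
        const double w = 640, h = 480;
        double rgb    = 3.0 * w * h;   /* 3 bytes per pixel                */
        double yuv444 = 3.0 * w * h;   /* full-resolution chroma           */
        double yuv422 = 2.0 * w * h;   /* chroma halved horizontally       */
        double yuv420 = 1.5 * w * h;   /* chroma halved in both directions */
        printf("RGB / YUV-444: %.0f bytes (no savings)\n", yuv444);
        printf("YUV-422:       %.0f bytes (%.0f %% savings)\n",
               yuv422, 100 * (1 - yuv422 / rgb));
        printf("YUV-420:       %.0f bytes (%.0f %% savings)\n",
               yuv420, 100 * (1 - yuv420 / rgb));
        return 0;
    }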
GF's linux camera input has a mode called 'magic' for downscaling the luma very quickly, in cases where a camera gives you YUV-420 when you actually want YUV-444. This has been my situation.
For ordinary veejay mixing, though, you may use telly-style YUV-422.
and I think a lot more work can be done to get a better video performance. Maybe someone on the list has a nice list of semi-universal tips to make video patches run faster?
Like, if you need a short video loop to be read several times at once, preload it already decompressed in RAM ?
Or maybe for video puredata is just 'almost' fast enough with computers nowadays..
It depends on the decoder libraries being used, how they've been compiled, etc. Those things are fairly external to GEM/PDP/GF, except that those frameworks have to include code to plug with those decoder libraries.
Can't imagine ever playing with GEM and pix_ stuff on a Pentium 3 for example. What I mean is, that puredata only recently has become a viable option for the stuff that you and I want to do with it?
What do I want to do with it ?
There is of course the completely separate project of 'vvvv', which is also a dataflow language like puredata, but meant for video. Sadly this project is Windows-only afaik.
It's also non-free : it's a non-commercial license and the source code is not available.
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC
On Sun, Feb 27, 2011 at 7:26 PM, Bastiaan van den Berg <buzz@spacedout.nl> wrote:
If only puredata just used YUV by default internally for _everything_ so at least it would be a bit faster, and I think a lot more work can be done to get a better video performance. Maybe someone on the list has a nice list of semi-universal tips to make video patches run faster?
GEM on OSX and pdp on any platform do this.
Or maybe for video puredata is just 'almost' fast enough with computers nowadays.. Can't imagine ever playing with GEM and pix_ stuff on a Pentium 3 for example. What I mean is, that puredata only recently has become a viable option for the stuff that you and I want to do with it?
I started doing 1920x1080 HD work using GEM on OSX in 2006 and never had performance problems. I had the engineers who write Final Cut Pro tell me that what I was doing was physically impossible with modern CPUs, yet it was done. At this point streaming raw video out of a 5D or P2 cam, recording it to disk while manipulating the video using shaders is the baseline for performance, not fantasy.
Sorry for the delay. I have hardly worked on GridFlow at all since February.
Here's the reply.
Le 2011-02-27 à 19:35:00, Matteo Sisti Sette a écrit :
On 02/27/2011 06:43 PM, Mathieu Bouchard wrote:
Are you trying it with an image size much larger than what you really need to analyse ?
No I wasn't, but I haven't really tried blob detection. I just tried some very basic image processing such as mixing two images, changing the colors, you know, the basic stuff you can find in the example patches, and the CPU was already very loaded with relatively small images.
There's a problem with number types... the default number type has a lot more range than what is usually needed, and the other number types aren't so easy to use. If this were dealt with, the average GridFlow experience would be a lot faster. You can see alternate number types in several of the examples. As it is now, each GridFlow grid often takes 2 or 4 times the amount of RAM it needs. It has been optimisable like this for many years already, but so far you have to learn the extra syntax.
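Just to put rough numbers on that "2 or 4 times" figure, here is a generic C sketch (not GridFlow's own grid syntax) comparing a 32-bit grid with an 8-bit one for a 1024x768 RGB image:

    /* The same 1024x768 RGB grid stored with a 32-bit number type vs an
       8-bit one. Only the sizes are the point here. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        size_t cells = (size_t)1024 * 768 * 3;  /* rows * columns * channels */
        printf("int32 grid: %zu bytes\n", cells * sizeof(int32_t)); /* ~9.4 MB */
        printf("uint8 grid: %zu bytes\n", cells * sizeof(uint8_t)); /* ~2.4 MB */
        return 0;
    }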
OTOH, the looser ranges mean you more naturally avoid clipping your RGB space, so you don't have to think about it. In GEM, you don't even have the option of bigger ranges (all pixel values go from 0 to 255).
I was just expressing the fact that the power of manipulating raw pixels with matrices in a patching environment such as Pd strikes me as "frustratingly attractive", where the frustration comes from the fact that you can't achieve enough efficiency to manipulate images of "reasonable size"
Perhaps threaded IO would help with those things : if most [#in] and [#out] plugins used threads, they could use the 2nd CPU that most people have and it would already be a relief.
But I think the limitation is mostly intrinsic in the fact of doing it in an "interpreted" environment.
Much of GridFlow is designed to be quite fast in an interpreted environment, by doing lots of work per message so that you don't need to send many messages, but it still is quite inefficient on certain things such as copying too much RAM. Much of this has to do with GridFlow not ever requiring something like [pix_separator] (the [#t] is not an equivalent of [pix_separator]).
I mean I don't think gridflow could be much faster than it is, or could it?
It probably could be much faster, yes. I just stated 3 ways in which it could.
Something that would be great would be a "pd/gridflow-like" patching environment that would compile your patch into shaders and have the GPU make the computation - but in a completely invisible way .... Do you know if something like that already exists?
There's Quartz Composer, but I wouldn't use that.
Also, the GPU has some quite harsh limitations. There are things that are hard to do outside of the CPU, and generally, even for things that are doable on a GPU, so much code would have to be half-rewritten to fit on the GPU that it takes a lot of effort and never will be fully automatic (as long as we're in the current GPU paradigm).
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
Hey, thanks for the reply!
On 11/22/2011 11:54 PM, Mathieu Bouchard wrote:
There's a problem with number types... the default number type has a lot more range than what is usually needed, and the other number types aren't so easy to use. If this were dealt with, the average GridFlow experience would be a lot faster.
Does that really have an impact on speed, not only memory usage?
Much of GridFlow is designed to be quite fast in an interpreted environment, by doing lots of work per message so that you don't need to send many messages, but it still is quite inefficient on certain things such as copying too much RAM.
I am curious about this in a general and OT way, because I've seen it happen in other interpreted environments and it sounds quite counterintuitive to me (such as in Processing, where the bottleneck is often in the methods that copy all the image pixels):
how come in those cases copying large amounts of memory is more of a bottleneck than actually doing the computations?
Just a curiosity, and I understand that the question _may_ be badly formulated...
Le 2011-11-23 à 00:16:00, Matteo Sisti Sette a écrit :
Hey, thanks for the reply! On 11/22/2011 11:54 PM, Mathieu Bouchard wrote:
There's a problem with number types... the default number type has a lot more range than what is usually needed, and the other number types aren't so easy to use. If this were dealt with, the average GridFlow experience would be a lot faster.
Does that really have an impact on speed, not only memory usage?
Both, but the speed ratio is often not as bad as the memory ratio... it depends.
I am curious about this in a general and OT way, because I've seen it happen in other interpreted environments and it sounds quite counterintuitive to me (such as in Processing, where the bottleneck is often in the methods that copy all the image pixels): how come in those cases copying large amounts of memory is more of a bottleneck than actually doing the computations?
It depends on whether an algorithm really needs to write a copy of the image because it needs to keep reading the original image until the work is done.
It depends on whether it is assumed that the user wants to keep a copy of the original data to do something with it (or perhaps the algorithm has to assume that the original data _might_ have to be read).
It depends on whether the algorithm has to modify only a portion of a whole copy of an image.
Etc.
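To illustrate the first of those cases with a generic C sketch (nothing to do with GridFlow's internals): a neighbourhood operation has to keep reading the original data, so it needs an output copy, whereas a pointwise operation can work in place.

    #include <stdio.h>

    /* A 1-D box blur reads neighbours from the original data, so it needs
       a separate output buffer; writing in place would overwrite values
       that are still needed. */
    void blur(const float *src, float *dst, int n) {
        dst[0] = src[0];
        dst[n - 1] = src[n - 1];
        for (int i = 1; i < n - 1; i++)
            dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;
    }

    /* A purely pointwise operation (e.g. a gain) can run in place and
       avoid the copy entirely. */
    void gain(float *buf, int n, float g) {
        for (int i = 0; i < n; i++)
            buf[i] *= g;
    }

    int main(void) {
        float a[5] = {0, 3, 6, 9, 12}, b[5];
        blur(a, b, 5);                  /* needs the extra buffer b */
        gain(a, 5, 0.5f);               /* safe in place            */
        printf("%g %g\n", b[2], a[2]);  /* prints 6 and 3           */
        return 0;
    }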
But usually, the problem is not that the memory copying takes longer, it might just be that it takes too big a percentage compared to other tasks.
There's also the problem that making copies takes more active RAM, which means that the SRAM (cache) has to keep swapping data in and out, which means that the actual CPU work of copying is slowed down by having to go to the DRAM. When you have something like 2 gigs of RAM these days, it's DRAM, whereas SRAM is a much faster memory placed closer to the CPU, and is only a few megs. There may also be several levels of SRAM with different speeds and sizes.
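A crude way to see that effect is to compare copy rates for a buffer that fits in cache with one that doesn't. This is only a hedged sketch: the sizes, the use of clock(), and the resulting numbers are all machine-dependent.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Effective memcpy rate in GB/s for a given buffer size. */
    static double rate_gb_s(size_t bytes, int reps) {
        char *src = malloc(bytes), *dst = malloc(bytes);
        if (!src || !dst) { fprintf(stderr, "out of memory\n"); exit(1); }
        memset(src, 1, bytes);
        clock_t t0 = clock();
        for (int i = 0; i < reps; i++)
            memcpy(dst, src, bytes);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        double rate = ((double)bytes * reps) / (secs * 1e9);
        if (dst[0] != 1) rate = 0;  /* keep the copies from being optimised out */
        free(src);
        free(dst);
        return rate;
    }

    int main(void) {
        /* 256 KB should fit in SRAM (cache); 256 MB only fits in DRAM. */
        printf("256 KB buffer: %.1f GB/s\n", rate_gb_s(256 * 1024, 40000));
        printf("256 MB buffer: %.1f GB/s\n", rate_gb_s((size_t)256 << 20, 40));
        return 0;
    }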
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 11/23/2011 12:28 AM, Mathieu Bouchard wrote:
But usually, the problem is not that the memory copying takes longer, it might just be that it takes too big a percentage compared to other tasks.
There's also the problem that making copies takes more active RAM, which means that the SRAM has to be swapping,...
But do any of these factors change when using an interpreted language or environment as opposed to doing this "natively" (e.g. in C++)?
I mean, when the bottlenecks of copying RAM are discussed, I sometimes get the impression that I'm being told: this is the part of the code where the overhead of doing things in Java (or whatever) rather than C++ is biggest, which is what I find counterintuitive. Or is it just a misunderstanding of mine?
Sorry if this is too much OT, but since you kind of mentioned it you reawakened an existing curiosity of mine, and I ask because it seems you know about it ;)
On 2011-11-23 01:11, Matteo Sisti Sette wrote:
But do any of these factors change when using an interpreted language or environment as opposed to doing this "natively" (e.g. in C++)?
out of interest: which interpreted language do you intend to use for image processing?
fgmasdr IOhannes
Le 2011-11-23 à 01:11:00, Matteo Sisti Sette a écrit :
But do any of these factors change when using an interpreted language or environment as opposed to doing this "natively" (e.g. in C++)?
It depends on how much the interpreted language is actually compiled, and how it interacts with « less compiled » parts.
In Pd, nearly every piece of external or internal class is written in C or C++, and all abstractions are written in an interpreted language named Pd. Some other externals are written in other languages (Tcl, Lua, Python, etc., and formerly I was using Ruby).
This means that some parts are fast and some parts are slow. Now, if you give to a C/C++ part a large piece of work at a time, you're using much less CPU than if you cut it into tiny pieces. That's one big difference between using, say, [list-drip] vs [foreach], but it's even more the case if you do many [+] (without [list-map]) vs one big [# +].
([list-map] is actually much slower than what it is possible to do as a plain abstraction without deps, so that's why I say without [list-map])
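The same "tiny pieces vs one big piece" point, sketched in generic C (illustrative only, not actual Pd or GridFlow internals): one dispatch per element pays call overhead every time, while one call over the whole grid pays it once.

    #include <stddef.h>

    typedef void (*msg_fn)(float *);        /* stand-in for one Pd message */

    static void add_one(float *x) { *x += 1.0f; }

    /* "Many tiny pieces": one dispatch per element. */
    static void per_element(float *data, size_t n, msg_fn f) {
        for (size_t i = 0; i < n; i++)
            f(&data[i]);                    /* call overhead on every element */
    }

    /* "One big piece": a single call that loops over the whole buffer,
       the way [# +] handles a whole grid in one message. */
    static void whole_grid(float *data, size_t n) {
        for (size_t i = 0; i < n; i++)
            data[i] += 1.0f;
    }

    int main(void) {
        float buf[64] = {0};
        per_element(buf, 64, add_one);      /* slow path */
        whole_grid(buf, 64);                /* fast path */
        return 0;
    }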
Pd itself is probably among the slowest interpreted languages when you look at the message system. The interpreter still preparses everything and objects are mostly connected to each other as a graph. Symbol-table lookup is used fairly seldom, and that helps make it not so slow. As a rule of thumb, Pd should be faster than languages that reparse everything all of the time, such as Bash, and very old versions of Tcl until version 8 (which came out in 1997).
Pd's DSP is faster. It involves processing data in larger chunks of 64 floats by default (see above about too many tiny pieces) and it compiles patches as «wordcode», which is similar in speed to bytecode (such as Perl/Python), and usually somewhat faster than object graphs (such as Pd's message system and Ruby).
Then Java... Java is somewhat special. The oldest versions used plain bytecode (as in the original versions of Smalltalk), but when doing so, it was often slower than Tcl8/Perl/Python, because it interpreted each character operation separately, whereas Tcl8/Perl/Python bytecodes work on whole strings at once. It's again the problem of too many tiny pieces.
However, Java is nowadays almost always used with a JIT compiler, which is a model it got from the SELF language; it's actually nearly as old as Java bytecode. Improvements in JIT compilation supposedly brought Java close to the speed of C++, though there are still other ways in which Java needs more resources than C++.
I mean, when the bottlenecks of copying ram are discussed, I sometimes get the impression that I'm being told: this is the part of code where the overhead of doing things in java (or whatever) rather than c++ is biggest, which is what I find counterintuitive. Or is it just a misunderstanding of mine?
I don't know how fast Java compilers are supposed to be right now. I have never tried serious number-crunching in Java. All I can tell you is to find a benchmark. Results will vary depending on the task being performed, which compiler/runtime-env is being used, and lots of small details in how each programme is written in each language.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
----- Original Message -----
From: Mathieu Bouchard <matju@artengine.ca>
To: Matteo Sisti Sette <matteosistisette@gmail.com>
Cc: PD-List <pd-list@iem.at>; gridflow-dev@artengine.ca
Sent: Thursday, November 24, 2011 12:33 PM
Subject: Re: [PD] GridFlow slowness
Le 2011-11-23 à 01:11:00, Matteo Sisti Sette a écrit :
But do any of these factors change when using an interpreted language or
environment as opposed to doing this "natively" (e.g. in C++)?
It depends on how much the interpreted language is actually compiled, and how it interacts with « less compiled » parts.
In Pd, nearly every piece of external or internal class is written in C or C++, and all abstractions are written in an interpreted language named Pd. Some other externals are written in other languages (Tcl, Lua, Python, etc., and formerly I was using Ruby).
Is there a way to take a pd patch and compile it to c or c++ or something?
This means that some parts are fast and some parts are slow. Now, if you give to a C/C++ part a large piece of work at a time, you're using much less CPU than if you cut it into tiny pieces. That's one big difference between using, say, [list-drip] vs [foreach], but it's even more the case if you do many [+] (without [list-map]) vs one big [# +].
([list-map] is actually much slower than what it is possible to do as a plain abstraction without deps, so that's why I say without [list-map])
Pd itself is probably among the slowest interpreted languages when you look at the message system. The interpreter still preparses everything and objects are mostly connected to each other as a graph. Symbol-table-lookup is used fairly seldom, and that helps making it not so slow. Using a rule of thumb, Pd should be faster than languages that reparse everything all of the time, such as Bash, and very old versions of Tcl until version 8 (which came out in 1997).
Pd's DSP is faster. It involves processing data in larger chunks of 64 floats by default (see above about too many tiny pieces) and it compiles patches as «wordcode»,
What is wordcode? Is that what's happening in d_ugen.c?
which is similar in speed to bytecode (such as Perl/Python), and usually somewhat faster than object graphs (such as Pd's message system and Ruby).
Then Java... Java is somewhat special. The oldest versions used plain bytecode (as in the original versions of Smalltalk), but when doing so, it was often slower than Tcl8/Perl/Python, because it interpreted each character operation separately, whereas Tcl8/Perl/Python bytecodes work on whole strings at once. It's again the problem of too many tiny pieces.
However, Java is nowadays almost always used with a JIT compiler, which is a model it got from the SELF language; it's actually nearly as old as Java bytecode. Improvements in JIT compilation supposedly brought Java close to the speed of C++, though there are still other ways in which Java needs more resources than C++.
I mean, when the bottlenecks of copying ram are discussed, I sometimes get
the impression that I'm being told: this is the part of code where the overhead of doing things in java (or whatever) rather than c++ is biggest, which is what I find counterintuitive. Or is it just a misunderstanding of mine?
I don't know how fast Java compilers are supposed to be right now. I have never tried serious number-crunching in Java. All I can tell you is to find a benchmark. Results will vary depending on the task being performed, which compiler/runtime-env is being used, and lots of small details in how each programme is written in each language.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 11/24/2011 07:04 PM, Jonathan Wilkes wrote:
Is there a way to take a pd patch and compile it to c or c++ or something?
i remember a poster presentation at nime 2008 about a Pd-to-C compiler. only built-ins could be used (no externals), and i cannot remember whether it was possible to use abstractions.
they claimed lots of speedup, but that might have been only when it comes to plain DSP.
i never tried it myself, though.
gfmasdr IOhannes
Le 2011-11-24 à 10:04:00, Jonathan Wilkes a écrit :
Is there a way to take a pd patch and compile it to c or c++ or something?
There are ways to do various kinds of compilations of pd patches, and at least one has been tried, but the source of C-based classes is not designed to be inlined into a compilation output, therefore much work has to be redone by a compiler if you want to make more optimisation. (It's not like I really have a solution in the back of my head for this.)
Pd's DSP is faster. It involves processing data in larger chunks of 64 floats by default (see above about too many tiny pieces) and it compiles patches as «wordcode»,
What is wordcode? Is that what's happening in d_ugen.c?
The term «wordcode» isn't nearly as widespread as the word «bytecode» is, and essentially, they're much of the same strategy, but the stereotype for bytecode is that the code is run by looping through a char[], whereas for wordcode, you're looping through a void*[] or something like that. In pd, t_word is a type which has the same size as a C pointer.
d_ugen.c is building sequences of pointer-sized variables. Each instruction is one function pointer followed by any number of items. Each instruction is provided by a dsp_add call made in each dsp-function of each tilde-external. The function pointer points to what is called a perform-function. Each perform-function is supposed to know how long the instruction is because it has to return the pointer to the next instruction (return w+5; and such).
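For reference, this is roughly what a tilde-class contributes to that chain; a minimal sketch of a hypothetical gain~-style external (class-setup boilerplate omitted), using the dsp_add()/perform convention described above:

    #include "m_pd.h"

    typedef struct _gain_tilde {
        t_object x_obj;
        t_float  x_gain;
    } t_gain_tilde;

    /* One "wordcode" instruction: w[0] is the pointer to this function,
       w[1..4] are the arguments passed to dsp_add() below. */
    static t_int *gain_tilde_perform(t_int *w)
    {
        t_gain_tilde *x = (t_gain_tilde *)(w[1]);
        t_float *in  = (t_float *)(w[2]);
        t_float *out = (t_float *)(w[3]);
        int n = (int)(w[4]);
        while (n--)
            *out++ = *in++ * x->x_gain;
        return (w + 5);   /* skip the function pointer plus 4 arguments */
    }

    /* Called when the DSP graph is (re)built; appends one instruction. */
    static void gain_tilde_dsp(t_gain_tilde *x, t_signal **sp)
    {
        dsp_add(gain_tilde_perform, 4,
                x, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
    }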
[#expr] also uses some kind of word code. Each instruction is a t_atom, which is a pair of pointer-sized variables (with a lot of slack in them). [#expr] 9.12 defines several custom atom-types and [#expr] 9.14 defines two or three more for optimisation. I expect that a later version might scrap this and switch to some other similar scheme using t_word or void* instead of t_atom... if it is to become faster than [expr].
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
Have a look at this patch... ++
Jack
Le lundi 31 janvier 2011 à 16:53 +0100, Matteo Sisti Sette a écrit :
Hi,
I get some strange behaviour with the attached patch which is basically:
[gemhead]
 |
 |  [0 0 0 1 1 0 0 1 0 1 0 1 0 0 1 1 (
 |  |
[pix_set]
 |
 |  \        // numbox attached to the second i.e. "common" inlet
[pix_gain 1]
 |
[pix_texture]
 |
[rectangle]
- When I send a new value to pix_gain's inlet, the output is not
updated until I re-send the pixel color values to pix_set.
- If I modify the pix_gain object and change its creation argument
(which means the object is recreated), the pix_set seems to be reset as well: the rectangle resets to white, as if the pix_set object was recreated. I have to re-send the pixel values to pix_set. After doing this, the pixel colors are as expected and the gain is applied correctly.
Why doesn't the pix_gain update its output when a new value of the gain is applied?
Why does pix_set reset when I recreate the pix_gain object?
Is all this expected behaviour, and am I missing something?
thanks m.
On 01/31/2011 10:29 PM, Jack wrote:
Have a look at this patch...
Oh!
Why is the pix_separator needed?
Does this imply extra computation, that is, does this force every new frame to be processed even when neither pix_set nor the gain are changed?
Is this a workaround, or is it really expected that you need a separator in this case? Because if so, I don't understand it.
Thanks m.
It seems [pix_set] outputs its 'contents' only once, when a list is sent to its cold inlet. So you need something like [pix_separator] or [pix_buf] to store the 'contents' of the [pix_set] in a buffer and output those 'contents' each frame. This is a sort of workaround. ++
Jack
Le lundi 31 janvier 2011 à 22:40 +0100, Matteo Sisti Sette a écrit :
On 01/31/2011 10:29 PM, Jack wrote:
Have a look at this patch...
Oh!
Why is the pix_separator needed?
Does this imply extra computation, that is, does this force every new frame to be processed even when neither pix_set nor the gain are changed?
Is this a workaround, or is it really expected that you need a separator in this case? Because if so, I don't understand it.
Thanks m.