On 2011-10-15, at 17:21:00, glerm soares wrote:
& sorry about the naive question - but can someone here try to explain briefly what this line does? Does this ">>" mean some kind of bitshift?
Yeah... but to be clear, it does convert to plain int (truncating), shifts the integer, and converts back to float, which is both slow and often less useful than ldexp. But there *is* a real use for a truncating bitshifter in pd.
About this, I have an anecdote: I went into a big IRC channel on FreeNode about C or C++, asking for a float equivalent of the << and >> operators. At least five people began by assuming that I wanted something weird that has no possible use, and they started flaming me for it (really angry). In the end, when I got them to figure out what a useful equivalent of << and >> could be, they didn't know. Then, if I recall, I scanned through <math.h> for all the functions I didn't know and finally found ldexp. I think that in pd, it's only available in [expr] and in GF.
So, the difference is that 31>>3 == 3, whereas ldexp(31,-3) == 3.875, with the fractional bits kept.
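To spell that out in plain C (my own sketch, not code taken from Pd; the helper name shift_right is made up for this example):

    #include <math.h>
    #include <stdio.h>

    /* what a [>>]-style object effectively does: truncate to int,
       shift the integer, convert back to float */
    static float shift_right(float x, int n) {
        return (float)((int)x >> n);
    }

    int main(void) {
        printf("%g\n", shift_right(31, 3));  /* 3     : fractional bits dropped */
        printf("%g\n", ldexp(31, -3));       /* 3.875 : fractional bits kept    */
        return 0;
    }

(link with -lm on most systems)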
BTW, when you "bitshift" a float this way, the bits don't actually shift; instead, the exponent field increases or decreases, because that's how the float format is multi-scale: it has a built-in concept of << and >> at its core (even though hardly anyone ever knows what ldexp is!)
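If you want to see that with your own eyes, here's a little sketch of mine (nothing Pd-specific) that dumps the exponent and mantissa fields of an IEEE-754 single before and after ldexpf: the mantissa stays put, only the exponent moves.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void dump(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's raw bits */
        printf("%-10g exponent=%3u mantissa=0x%06x\n", f,
               (unsigned)((bits >> 23) & 0xff), (unsigned)(bits & 0x7fffff));
    }

    int main(void) {
        float x = 3.14159f;
        dump(x);               /* some exponent e, some mantissa m */
        dump(ldexpf(x, 3));    /* exponent e+3, same mantissa m    */
        dump(ldexpf(x, -3));   /* exponent e-3, same mantissa m    */
        return 0;
    }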
I think a curious question could be: if there is a [>>~] operator in pd, what does it do? Move a bit to the next cycle of the sample block?
What happens with nearly all simple math operators that have a class in pd for floats, and another similarly-named class for signals, is that the latter does the job of the former on every float that is found inside each block. This happens without interactions between floats of different instants.
So, for each time t inside each block, [>>~] takes input x[t], applies [>>], and puts the result into output y[t]; that's all. It's the same pattern as [+] vs [+~], [*] vs [*~], etc.
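As a rough sketch of that pattern (again my own C, not Pd's actual source; the function names are invented):

    #include <stdio.h>

    static float op_rshift(float x, int n) {        /* the [>>] job, one float */
        return (float)((int)x >> n);
    }

    static void op_rshift_block(const float *in, float *out,
                                int blocksize, int n) {
        for (int t = 0; t < blocksize; t++)         /* the [>>~] job, one block */
            out[t] = op_rshift(in[t], n);           /* no interaction between t's */
    }

    int main(void) {
        float in[4] = {31, 16, 7.9f, 200}, out[4];
        op_rshift_block(in, out, 4, 3);
        for (int t = 0; t < 4; t++)
            printf("%g -> %g\n", in[t], out[t]);    /* 31->3, 16->2, 7.9->0, 200->25 */
        return 0;
    }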
The big exception is stuff like [cos~], which has a different input scale (whole cycles instead of radians) because Miller said so.
Can you point to some patches that clarify this theory?
No... actually, I don't know much of a use for [>>~] in particular... whereas [expr~ ldexp($v1,...)] might be an optimised way to do certain cases of [*~], since ldexp(x,k) is just x*2^k. I use >> a lot, but never in a signal context.
With people solving control-style problems using signal-style solutions (e.g. Barknecht's experiments), I can see more of the typically non-signal stuff being done with signals anyway, and this would explain the existence of [>>~].
But frankly, I think that [>>~] exists simply for consistency, to complete the pattern of correspondence between float ops and signal ops. In a certain sense, pd is simpler when it is more complete, because there are fewer exceptions in its design. You know what I mean?
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC