Hallo,
marius schebella wrote:
> does this all make sense? smaller blocksizes give you the possibility to
> handle messages in even shorter time intervals, bigger blocksizes may help
> to declick for example when you write to arrays. [for some objects
> blocksize is even more important (fft~, tabsend~).]
I think it may be easier to explain this from a practical point of view. I'll give it a shot:
Axiom 1: All messages *inside* of Pd are handled with (almost) complete accuracy.

What does this mean? If you send a bang through a [del 0.3] and another one through a [del 0.299], you can be sure that Pd triggers the 0.299 bang 0.001 msec before the 0.3 bang, regardless of your blocksize or any other setting. It would be terrible and lead to lots of nasty errors if you couldn't rely on Pd to schedule events in that fashion.
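If you want to see this with your own eyes, here is a tiny patch, typed directly into this mail as a .pd file (untested and with arbitrary coordinates, so don't trust my typing too much). It bangs a [del 0.3] and a [del 0.299] at the same logical time and measures both with one [timer]:

#N canvas 0 0 450 300 10;
#X obj 30 30 loadbang;
#X obj 30 60 t b b b;
#X obj 30 100 del 0.3;
#X obj 120 100 del 0.299;
#X obj 30 150 timer;
#X obj 30 190 print elapsed-ms;
#X text 220 60 rightmost outlet resets the timer first;
#X connect 0 0 1 0;
#X connect 1 2 4 0;
#X connect 1 1 3 0;
#X connect 1 0 2 0;
#X connect 2 0 4 1;
#X connect 3 0 4 1;
#X connect 4 0 5 0;

Save it, open it, and the console should print 0.299 followed by 0.3 (give or take floating point rounding), no matter what blocksize or audio settings you use.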
It would be a complete waste of resources to update the GUI or poll GUI elements for changes every sample, i.e. every 1/44100 seconds. So currently events coming from the GUI are only read once every 64 samples. This interval is also independent of the blocksize! You can check this with the attached patch by setting the blocksize to some really big value like 23 seconds and banging the [random 8]: you don't have to wait 23 seconds to get the result. With the [timer] object in that patch you can also see the quantization of GUI messages to 64 samples.
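That quantization is easy to see even without the attachment: click the [bang( message box in the following little sketch a few times (again typed from memory and untested) and, as far as I understand the scheduler, every printed interval will be a multiple of 64/44100 sec, that is about 1.45 msec at a sample rate of 44100, no matter how wildly you click. (Ignore the very first number, it's just the time since the patch was loaded.)

#N canvas 0 0 450 300 10;
#X msg 30 30 bang;
#X obj 30 70 t b b;
#X obj 30 110 timer;
#X obj 30 150 print ms-between-clicks;
#X text 120 30 click me a few times;
#X connect 0 0 1 0;
#X connect 1 1 2 1;
#X connect 1 0 2 0;
#X connect 2 0 3 0;

The trick is that the right outlet of the [t b b] reports the time since the previous click before the left outlet resets the timer for the next one.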
As you've explained so well, DSP signals in Pd are calculated in blocks of several samples in one go. Normally one block is 64 samples, but even with a blocksize of only 1 sample it would be tricky to convert messages to signals correctly, because even a single sample covers a certain amount of time (1/SR seconds).
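By the way, you can measure how long one block lasts in logical time with [bang~], which sends a bang after every DSP block. Something like this should do it (an untested sketch that lets [loadbang] switch DSP on for you):

#N canvas 0 0 450 300 10;
#X obj 30 30 loadbang;
#X msg 30 60 \; pd dsp 1;
#X obj 150 60 bang~;
#X obj 150 100 t b b;
#X obj 150 140 timer;
#X obj 150 180 change;
#X obj 150 220 print ms-per-block;
#X connect 0 0 1 0;
#X connect 2 0 3 0;
#X connect 3 1 4 1;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;

After one meaningless first value this should settle on a single number, about 1.45 msec for 64 samples at 44100 Hz, and then the [change] keeps the console quiet, because every block is exactly as long as the one before.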
Generally DSP objects calculate a complete block and cannot react to messages within that block. The messages themselves are still scheduled correctly (Axiom 1); it's just that most DSP objects don't listen for messages while they are computing a block.
Some DSP objects, however, can actually react quicker than a block: [vline~] and [vsnapshot~] are the prime examples. They use a little trick to do so: while they still calculate a full block in advance like everyone~ else, they "know" beforehand when messages are scheduled to reach them, possibly in the middle of such a block, and they calculate their sample block with these "future" messages in mind.
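Here is a little comparison of [snapshot~] and [vsnapshot~] that shows the difference, at least as far as I understand those two (once more typed from memory and untested): a [phasor~ 10] has a period of exactly 100 msec, the same as the [metro 100] that bangs both snapshot objects.

#N canvas 0 0 500 340 10;
#X obj 30 30 loadbang;
#X msg 30 60 \; pd dsp 1;
#X obj 150 30 metro 100;
#X obj 150 70 t b b;
#X obj 270 30 phasor~ 10;
#X obj 150 120 snapshot~;
#X obj 270 120 vsnapshot~;
#X obj 150 160 print snapshot;
#X obj 270 160 print vsnapshot;
#X connect 0 0 1 0;
#X connect 0 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 5 0;
#X connect 3 1 6 0;
#X connect 4 0 5 0;
#X connect 4 0 6 0;
#X connect 5 0 7 0;
#X connect 6 0 8 0;

If I got this right, the vsnapshot column should stay at (nearly) the same value from print to print, because it reads the phasor at the exact sub-block position of each bang, while the snapshot column wanders around it, because it can only give you the value the signal had at the nearest block boundary.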
This trick only works for messages that are scheduled to be sent at some known point in the future. A [metro] for example generates this kind of message: when a [metro 500] bangs, it also instructs Pd to bang again after 500 msec. [vline~] can then ask Pd: "Are there any messages scheduled for me during the next block?" and because Pd knows about that scheduled [metro] bang, it can tell [vline~]: "Yes, there is one bang waiting for you 0.526 msec into the next block. Please take this into account!"
The normal [line~] object doesn't ask Pd about such scheduled messages and so is faster to compute. If you just need to declick a value coming from a slider, [line~] will do instead of [vline~], because slider events don't happen faster than every 64 samples anyway. But if you build a drum machine driven by [metro], you should really use [vline~] to get a drumset that is not only good enough for acousmatic music, but also good enough for jazz, as Eric Lyon once put it.
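To hear (or at least inspect) the difference, here is a rough sketch of that [metro]-driven situation (untested, and mind your speakers, it sends clicks straight to [dac~]): the same metro bang is turned into a 1 msec click envelope once through [vline~] on the left channel and once through [line~] on the right channel.

#N canvas 0 0 500 360 10;
#X obj 30 30 loadbang;
#X msg 30 60 \; pd dsp 1;
#X obj 160 30 metro 250;
#X obj 160 70 t b b;
#X msg 160 110 1 \, 0 1;
#X msg 280 110 1 \, 0 1;
#X obj 160 160 vline~;
#X obj 280 160 line~;
#X obj 160 220 dac~;
#X text 30 260 left channel is vline~ and right channel is line~;
#X connect 0 0 1 0;
#X connect 0 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 3 1 5 0;
#X connect 4 0 6 0;
#X connect 5 0 7 0;
#X connect 6 0 8 0;
#X connect 7 0 8 1;

With [vline~] the clicks should land exactly 250 msec apart; with [line~] each click gets pulled to a block boundary, so its timing is off by up to about 1.45 msec at the default blocksize. That's subtle to hear, but if you record both channels, or put the right-hand chain into a subpatch with a bigger [block~], the quantization should become very obvious.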
Frank Barknecht - footils.org - goto10.org