Hi everybody,

Particularly now with work to re-implement its timing techniques for the web API (Chris McCormick's WebPd) and to embed it as a DSP (and sequencing) engine (libpd), I know there's a lot of interest in how sample-accurate sequencing works in Pd.
It's been discussed on this list before, but I'm not sure that discussion has ever been systematically documented, and the changes since then mean now is probably an ideal time to revisit the question. This started as a discussion between me and Chris and extended to Eric Lyon and Hans, but Hans pointed out we should be having it on the list.
In short:
- If one were working to build a sample-accurate (or close to it) sequencer in Pd, what would the best technique be? Keep in mind that actual calculation of the sequences themselves might occur outside the Pd patch.
- What timing objects in Pd are sample-accurate? Chris had come up with the idea of using vline~ to trigger pre-calculated sample events based on its envelopes, which is at least interesting. His assumption was that not only are its ramps interpolated at sub-sample levels, but that the calculations of the delays themselves are sample-accurate -- though that may or may not be correct.
- (Related though maybe not essential...) What is quantized to block boundaries, and what isn't? (And for that matter, at what point do you think people should care?)
- Have you looked at Eric Lyon's 2006 research? In it, he described the Pd event engine thusly: "The underlying Pd event scheduler is sub-sample-accurate using 64-bit floating point numbers to represent time, though apparently at the cost of a higher likelihood of interruption of the audio scheduler, resulting in audible glitches. In both systems [Max and Pd] temporal accuracy of control-level events can drift freely within the space of a signal vector."
Is that still true?
Incidentally, I very much like the design of Eric's samm~, mask~, etc. -- to the point of considering a similar scheme for abstractions -- but then the question is whether you want to rely on externals for this kind of timing. I'm convinced by his approach, but for those wanting to work inside vanilla, I'm not sure what the best approach and associated costs may be.
I imagined writing this as a short query, but there you are. Have at it. And since I'm paraphrasing some of my colleagues here, if they want to jump in and correct me, please do...
Thanks,
Peter
PETER KIRN | peter@createdigitalmedia.net | http://createdigitalmusic.com | PhD Candidate, CUNY Graduate Center | Adjunct faculty, Parsons The New School for Design
Hi,
On Mon, Feb 07, 2011 at 07:13:32PM -0500, Peter Kirn wrote:
Particularly now with work to re-implement its timing techniques for the web API (Chris McCormick's WebPd) and to embed it as a DSP (and sequencing) engine (libpd), I know there's a lot of interest in how sample-accurate sequencing works in Pd.
It's been discussed on this list before, but I'm not sure that discussion has ever been systematically documented, and the changes since then mean now is probably an ideal time to revisit the question. This started as a discussion between me and Chris and extended to Eric Lyon and Hans, but Hans pointed out we should be having it on the list.
In short:
- If one were working to build a sample-accurate (or close to it)
sequencer in Pd, what would the best technique be? Keep in mind that actual calculation of the sequences themselves might occur outside the Pd patch.
There is a sample-accurate and even sub-sample-accurate sequencer in Pd: the time of a Pd message event is registered and computed as a 64-bit (?) floating point number, so everything you sequence with messages is simply accurate.
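To put a number on that, here is a tiny C sketch with made-up names (this is not Pd's source) of how such a 64-bit timestamp maps to a fractional sample position inside a block:

/* A rough sketch, with made-up names (not Pd's actual code):
 * mapping a double-precision logical event time in milliseconds to a
 * fractional sample offset inside one 64-sample block. */
#include <stdio.h>

#define SAMPLE_RATE 44100.0
#define BLOCK_SIZE  64

static double event_offset_samples(double event_time_ms, double block_start_ms)
{
    /* both times are doubles, so the result can fall between samples */
    return (event_time_ms - block_start_ms) * SAMPLE_RATE / 1000.0;
}

int main(void)
{
    double block_start_ms = 1000.0;     /* logical start of this block */
    double event_time_ms  = 1000.3937;  /* a scheduled message event   */
    printf("event falls %.4f samples into a %d-sample block\n",
        event_offset_samples(event_time_ms, block_start_ms), BLOCK_SIZE);
    return 0;
}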
- What timing objects in Pd are sample-accurate?
Every timing object in Pd is sub-sample-accurate. Well, of course it depends on what you call a "timing object", but the usual ones -- [delay], [metro], [pipe] -- are fine.
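For anyone who wants to see where that accuracy comes from on the external side: the clock API in m_pd.h takes delay times as doubles in milliseconds, measured against Pd's logical time. A minimal sketch of a made-up [tickdemo] object (not how [delay] itself is written, just the same mechanism):

/* Sketch of a minimal delay-like external using Pd's clock API.
 * The object name "tickdemo" is invented for illustration. */
#include "m_pd.h"

static t_class *tickdemo_class;

typedef struct _tickdemo {
    t_object  x_obj;
    t_clock  *x_clock;
    t_outlet *x_out;
} t_tickdemo;

static void tickdemo_tick(t_tickdemo *x)
{
    outlet_bang(x->x_out);          /* fires at the scheduled logical time */
}

static void tickdemo_float(t_tickdemo *x, t_floatarg ms)
{
    clock_delay(x->x_clock, ms);    /* delay time is a double in ms: not
                                       quantized to block boundaries here */
}

static void *tickdemo_new(void)
{
    t_tickdemo *x = (t_tickdemo *)pd_new(tickdemo_class);
    x->x_clock = clock_new(x, (t_method)tickdemo_tick);
    x->x_out = outlet_new(&x->x_obj, &s_bang);
    return (void *)x;
}

static void tickdemo_free(t_tickdemo *x)
{
    clock_free(x->x_clock);
}

void tickdemo_setup(void)
{
    tickdemo_class = class_new(gensym("tickdemo"),
        (t_newmethod)tickdemo_new, (t_method)tickdemo_free,
        sizeof(t_tickdemo), CLASS_DEFAULT, 0);
    class_addfloat(tickdemo_class, (t_method)tickdemo_float);
}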
- (Related though maybe not essential...) What is quantized to block
boundaries, and what isn't? (And for that matter, at what point do you think people should care?)
This is actually an essential point: Where should people care? Simplified a little bit: The message realm of Pd is not quantized to block boundaries. It is a continuum, which is neither quantized nor sampled/discrete. The signal/dsp/audio realm however is (a) sampled (one number every 1/44100 sec) and (b) computed in blocks (e.g. 64 samples).
The message realm is meant to deal with what Miller calls "Control Streams" in his book: http://crca.ucsd.edu/~msp/techniques/latest/book-html/node43.html. The problem with sample-accuracy in Pd occurs at the border between control streams and audio signals. Miller explains the issue very well, so please, everyone interested in this issue, read his chapter 3 on the topic "Audio and control computations": http://crca.ucsd.edu/~msp/techniques/latest/book-html/node40.html
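To make that border concrete, here's a rough C sketch of the vline~-style idea (made-up code, not the real source): the event's fractional offset determines where a ramp begins inside the block, instead of waiting for the next block boundary.

/* Rough sketch, not vline~'s real source: the perform loop starts a
 * ramp at a fractional sample offset inside the block. */
#include <stdio.h>

#define BLOCK_SIZE 64

static void perform_ramp(float *out, double start, double target,
                         double onset, double ramp_samples)
{
    int i;
    for (i = 0; i < BLOCK_SIZE; i++) {
        /* position along the ramp; onset may be fractional, e.g. 17.36 */
        double t = (i - onset) / ramp_samples;
        if (t <= 0.0)      out[i] = (float)start;
        else if (t >= 1.0) out[i] = (float)target;
        else               out[i] = (float)(start + t * (target - start));
    }
}

int main(void)
{
    float block[BLOCK_SIZE];
    /* a ramp from 0 to 1 over 10 samples, starting 17.36 samples in */
    perform_ramp(block, 0.0, 1.0, 17.36, 10.0);
    printf("samples 17..20: %g %g %g %g\n",
        block[17], block[18], block[19], block[20]);
    return 0;
}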
- Have you looked at Eric Lyon's 2006 research? In it, he described
the Pd event engine thusly: "The underlying Pd event scheduler is sub-sample-accurate using 64-bit floating point numbers to represent time, though apparently at the cost of a higher likelihood of interruption of the audio scheduler, resulting in audible glitches. In both systems [Max and Pd] temporal accuracy of control-level events can drift freely within the space of a signal vector."
Is that still true?
Personally, I think it wasn't fully true in 2006 either. (I was on the review board for LAC2006, where the paper was presented, so Eric and I already discussed the issue a bit.) I'm not sure how the representation of events would incur glitches in the audio scheduler, and the "temporal accuracy" of events is actually well-defined in Pd objects (although there are "Three ways to change a control stream into an audio signal", cf. MSP: node43.html, Figure 3.4).
Incidentally, I very much like the design of Eric's samm~, mask~, etc.
- to the point of considering a similar scheme for abstractions -- but
then the question is whether you want to rely on externals for this kind of timing. I'm convinced by his approach, but for those wanting to work inside vanilla, I'm not sure what the best approach and associated costs may be.
Eric's approach works by moving control-stream events into the audio signal realm, apparently avoiding the issues of converting between control streams and audio signals. The problem is: you still have to convert the continuous control streams (metros, scores, etc.) into audio signals at some point, only now it's happening inside the objects, hidden under the hood, behind the user's back.
The samm~ objects were born in Max/MSP, where control streams are (were?) not sub-sample-accurate, but quantized and full of jitter (the bad kind). Reimplementing control streams in a more accurate way made sense there, but it is superfluous in Pd, where events in control streams are already sample-accurate!
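For clarity, this is roughly what that strategy boils down to (a toy C sketch, nothing like Eric's actual samm~ code): the metronome is just a sample counter inside the perform loop, so clicks stay locked to the audio stream and never touch control-rate scheduling at all.

#include <stdio.h>

#define BLOCK_SIZE 64

typedef struct {
    double phase;    /* samples since the last click                 */
    double period;   /* click period in samples (can be fractional)  */
} sig_metro;

static void sig_metro_perform(sig_metro *m, float *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        if (m->phase >= m->period) {
            out[i] = 1.0f;           /* click lands exactly on this sample */
            m->phase -= m->period;   /* keep the fractional remainder      */
        } else {
            out[i] = 0.0f;
        }
        m->phase += 1.0;
    }
}

int main(void)
{
    /* tiny period just so a few clicks show up in this short demo run */
    sig_metro m = { 100.0, 100.0 };
    float block[BLOCK_SIZE];
    int b, i, sample = 0;
    for (b = 0; b < 4; b++) {
        sig_metro_perform(&m, block, BLOCK_SIZE);
        for (i = 0; i < BLOCK_SIZE; i++, sample++)
            if (block[i] > 0.0f)
                printf("click at sample %d\n", sample);
    }
    return 0;
}

The cost is what I described above: the timing now lives inside the object's perform routine, hidden from the patch's control streams.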
Frank Barknecht | Do You RjDj.me? | footils.org
On Tue, 8 Feb 2011, Frank Barknecht wrote:
This is actually an essential point: Where should people care? Simplified a little bit: The message realm of Pd is not quantized to block boundaries. It is a continuum, which is neither quantized nor sampled/discrete.
That's simplified a little bit too much. When indexing into "big" tables, the float32 format is quite often a lot too quantised, which is why Pd 0.42 introduced a new feature for making the index relative to another index. This has also been added to ZG.
And those "big" tables aren't particularly big, relatively speaking: people quite often want tables of that size.
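To show what I mean, a quick generic C demo (nothing to do with Pd's actual tabread code) of where float32 indices break down and why a base-plus-relative-offset scheme keeps full precision:

#include <stdio.h>

int main(void)
{
    /* above 2^24 = 16777216, float32 can't represent every integer,
     * so neighbouring table indices collapse into the same value */
    float a = 16777216.0f;
    float b = a + 1.0f;                  /* rounds back down to 16777216 */
    printf("16777216 + 1 as float32 = %.1f\n", (double)b);

    /* workaround: a coarse base index plus a small relative offset
     * that float32 can still represent exactly */
    long  base   = 16777216L;            /* e.g. a region start in the table */
    float offset = 1.0f;                 /* small, exactly representable     */
    printf("base %ld + offset %.1f -> index %ld\n",
        base, (double)offset, base + (long)offset);
    return 0;
}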
The problem with sample-accuracy in Pd occurs at the border between control streams and audio signals.
It also happens within the audio signal realm: even when you control everything with signals, you can't go below a one-block delay in [delread~], and thus you have to lower the block size or change strategies completely when you need to make events tighter than that.
I mean, suppose that I sequence a click through a recursive delay line for Karplus-Strong synthesis... how accurately timed can the recursively-processed clicks be?
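For the "change strategies completely" option, a made-up C sketch (not any particular external's code): when the feedback happens per sample inside a single perform loop, the recirculating delay can be shorter than one block, which is what you can't get with separate [delwrite~]/[delread~] objects in a feedback loop.

#include <stdio.h>

#define BLOCK_SIZE 64
#define DELAY_SAMPLES 20          /* shorter than one block on purpose */

typedef struct {
    float buf[DELAY_SAMPLES];
    int   writepos;
    float feedback;
} recirc;

static void recirc_perform(recirc *d, const float *in, float *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        float delayed = d->buf[d->writepos];      /* oldest sample = delay tap */
        float y = in[i] + d->feedback * delayed;  /* per-sample feedback       */
        d->buf[d->writepos] = y;
        d->writepos = (d->writepos + 1) % DELAY_SAMPLES;
        out[i] = y;
    }
}

int main(void)
{
    recirc d = { {0}, 0, 0.9f };
    float in[BLOCK_SIZE] = {0}, out[BLOCK_SIZE];
    in[0] = 1.0f;                                 /* a single click */
    recirc_perform(&d, in, out, BLOCK_SIZE);
    /* the click recirculates every 20 samples, well inside one block */
    printf("out[0]=%g out[20]=%g out[40]=%g\n", out[0], out[20], out[40]);
    return 0;
}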
Personally, I think it wasn't fully true in 2006 either. (I was on the review board for LAC2006, where the paper was presented, so Eric and I already discussed the issue a bit.) I'm not sure how the representation of events would incur glitches in the audio scheduler,
It's just the possibility of making a lot of message-domain work happen between two block computations... Pretty much anything involving GEM pixes or GF grids of comparable size will take a lot of CPU time in the main thread. (For PDP, it may depend on whether threading is enabled.) It's not specific to video: it can happen with large matrices in iemmatrix, large networks in pmpd/msd, and large grids in GF that happen not to contain images. It's just a matter of how long the computation takes.
Basically, it's a feature: Pd lets you use the spare time between audio blocks to do whatever you want. Thus it also lets you go over the time limit between blocks, and that's what causes dropouts.
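In caricature (made-up C, nothing like Pd's real scheduler code), that trade-off looks like this:

/* Message-domain work runs between two DSP ticks, so a long message
 * computation eats the time budget of the next block and, past the
 * limit, the audio device underruns.  DSP time itself is ignored
 * here for simplicity. */
#include <stdio.h>

#define BLOCK_PERIOD_MS 1.45   /* roughly 64 samples at 44.1 kHz */

static void compute_dsp_block(void)
{
    /* one block of audio, due every ~1.45 ms */
}

static double run_pending_messages(int block)
{
    /* pretend block 2 triggers something heavy (a big GEM pix, say) */
    return (block == 2) ? 5.0 : 0.2;   /* milliseconds spent */
}

int main(void)
{
    int block;
    for (block = 0; block < 4; block++) {
        double spent;
        compute_dsp_block();
        spent = run_pending_messages(block);   /* clocks, GUI updates, ... */
        if (spent > BLOCK_PERIOD_MS)
            printf("block %d: %.2f ms of messages > %.2f ms budget -> dropout\n",
                block, spent, BLOCK_PERIOD_MS);
        else
            printf("block %d: %.2f ms of slack left\n",
                block, BLOCK_PERIOD_MS - spent);
    }
    return 0;
}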
But yes, it would be a good idea to allow threading of the audio... in a way possibly similar to what MAX does, but I don't really know what MAX does with that. Maybe it's already being done (I don't know what can be done with the -schedlib option).
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC