We're trying to implement various sync options in an iOS libPd-based app, and have run across a noticeable drift in the timing. The app uses 8 'ticks' of 64 samples for faster devices, and 16 ticks for slower ones. Basically, what this means is that messages are only being processed on these larger multi-block boundaries of 512 or 1024 samples. And that's not good enough for keeping timing tight.
Of course, the first question is: Has anyone already made a workaround to this issue???
Or does anyone know of a way to make sure libPd processes messages with proper timing, even with the 'ticks' value set quite high?
I have an idea for adding time delay messages to clock signals...so that they can be sent through a [pipe] in pd to arrive at the correct time during the block processing. But it looks quite tricky to implement, and i'll wait to see if there is maybe a better solution first.
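For reference, the processing loop in question boils down to roughly this (a minimal sketch; the setup and callback names are made up, only the libpd calls are the real C API):

#include "z_libpd.h"

#define TICKS 8   /* 8 ticks * 64 samples = 512 frames per call (~11.6 ms at 44.1 kHz) */

/* hypothetical setup, names invented */
void audio_setup(void)
{
    libpd_init();
    libpd_init_audio(2, 2, 44100);   /* stereo in/out at 44.1 kHz */
}

/* hypothetical render callback with interleaved float buffers;
   frames is assumed to equal TICKS * libpd_blocksize() = 512    */
void render(const float *inBuf, float *outBuf, int frames)
{
    (void)frames;
    /* every message sent to Pd between two of these calls is picked up
       at the same logical time, so message timing is effectively
       quantised to these 512-sample (or 1024-sample) boundaries       */
    libpd_process_float(TICKS, inBuf, outBuf);
}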
cheers, Matt
Hello!
On 11/11/15 09:18, i go bananas wrote:
We're trying to implement various sync options in an iOS libPd-based app, and have run across a noticeable drift in the timing.
From Oliver's previous email I guess you're also trying to integrate Ableton's "link sync" with your libpd-based apps.
https://www.ableton.com/en/link/
The app uses 8 'ticks' of 64 samples for faster devices, and 16 ticks for slower ones. Basically, what this means is that messages are only being processed on these larger multi-block boundaries of 512 or 1024 samples. And that's not good enough for keeping timing tight.
Of course, the first question is: Has anyone already made a workaround to this issue???
You can't really work around physics.
Or does anyone know of a way to make sure libPd processes messages with proper timing, even with the 'ticks' value set quite high?
I have an idea for adding time delay messages to clock signals...so that they can be sent through a [pipe] in pd to arrive at the correct time during the block processing. But it looks quite tricky to implement, and i'll wait to see if there is maybe a better solution first.
I don't think that will work.
From your and Oliver's description (multiple blocks processed at once) you seem to be using libpd's PROCESS macro under the hood.
https://github.com/libpd/libpd/blob/master/libpd_wrapper/z_libpd.c#L164
Here is a diagram of what is happening (time goes from left to right). Hopefully I have this right:
a: >XX|XX|X_|__|*_|__|__|__>
b: >XX|XX|XX|XX|*X|XX|X_|__>
- The time between the pipe symbols is the real time at which each block boundary would occur.
- The time between the greater-than signs is the real time that your 8 ticks of one block each would take.
- The X's are Pd crunching the numbers to produce output audio for all 8 ticks in the next set.
- The asterisk symbol is a MIDI or other network-based event that might come in.
- You don't hear the audio produced by the X's until the final greater-than sign.
As you can see in (a), sometimes the number crunching for the entire 8 ticks is done well before the final greater-than sign. Other times, as in (b), it only completes just in time for the audio to come out of the speaker, which is presumably why you need to process 8 blocks at once. If a sync event arrives, what are libpd and Pd supposed to do? They have to wait until the next block of 8 ticks to process the event.
What you propose (using a pipe internally) only delays it within the following block so the sync would be even worse.
Your only option is to use fewer ticks, if you can afford the constraints on CPU. In general it's a good strategy to be conservative with regard to CPU when doing audio on mobile devices. That might let you get away with fewer ticks.
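To put rough numbers on that trade-off, here's a throwaway C snippet (assuming 44.1 kHz; adjust for the actual sample rate) showing the worst-case message-timing error each ticks setting adds:

#include <stdio.h>

/* worst-case timing quantisation added by processing `ticks` Pd blocks
   per libpd call: messages sent between calls snap to the next chunk
   boundary, so they can land up to ticks * 64 samples away from where
   they "should" have been                                              */
int main(void)
{
    const double sr = 44100.0;   /* assumed sample rate          */
    const int blocksize = 64;    /* Pd's fixed block (tick) size */
    const int ticks[] = {1, 2, 4, 8, 16};
    for (int i = 0; i < (int)(sizeof(ticks) / sizeof(ticks[0])); i++) {
        double ms = ticks[i] * blocksize * 1000.0 / sr;
        printf("%2d ticks -> %4d-sample chunks -> up to %.1f ms of jitter\n",
               ticks[i], ticks[i] * blocksize, ms);
    }
    return 0;
}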
Hopefully I have this right. If I don't I am sure one of the many people on this list smarter than me will correct this. :)
This is all a bit confusing to me because my experience with iOS is that one is able to do 20ms block delay/latency quite comfortably for anything I have thrown at it.
By the way if anybody is interested in a local-area-network based music syncing solution over UDP that is Free Software and unencumbered by proprietary licenses, closed SDK, etc. here is a link to the SyncJams repository. :)
https://github.com/chr15m/SyncJams
One thing I am super excited about is I heard a rumor that Katja is working on a Raspberry Pi based alternative to running Pd on iOS touch devices which I imagine should have even lower latency - great for building effects units and the like with Pd. I for one welcome our new apple crushing raspberries.
Cheers,
Chris.
Hi Chris, and thanks for your input. Yeah this is for that Link stuff...
Thanks for the explanations and diagrams, and it's pretty much as i guessed.
But i don't think you properly see what my workaround does.
See, the problem is not with latency, per se...because the Link framework has that covered...so we can have quite a bit of latency, and as far as i know, all devices just sync to the slowest one.
The problem is, that messages coming into libPd seem to only be processed on these multi-block boundaries. Even with just 10ms or 20ms of samples per process loop, that's still a lot of drift for something like a drum machine sending sync every 120ms or whatever.
So to go back to your diagram:
a: >XX|XX|X_|__|*_|__|__|__>
b: >XX|XX|XX|XX|*X|XX|X_|__>
in this case, i would assume that the * message sent to (a) is processed at the beginning of the next multi-block.
BUT, i would also assume that the * message sent to (b) is also processed at the beginning of the next multi-block - because messages are only converted to audio on block boundaries. In this case, because you are processing 8 blocks at once, messages will only be processed at the beginning of the next 8 blocks, regardless of where they appear or how far the processing has proceeded. I just can't see that it could be any different.
So, what you get is this system where messages are only processed in the audio thread every time that multi-block boundary is reached....just as messages in pd itself are just added to the audio thread on block boundaries.
BUT....there is a workaround in pd. You can use the scheduler, via [delay], [pipe], [metro] to make sure that messages are triggered in the audio thread between blocks. It's that type of behaviour that i want to leverage to make sure that clock messages are delayed the correct amount in the next block so that we don't get this wonky sync. Yes, i understand that means that we have to have added latency, but as you said Chris, "You can't really work around physics"
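To make that concrete, here is a rough host-side sketch of what i mean (everything except libpd_float() is a made-up name; keeping the host-time-to-Pd-time mapping accurate, and locking between this thread and the audio thread, is left out):

#include "z_libpd.h"

/* hypothetical: host time (seconds) corresponding to the start of the
   next libpd_process_float() call, i.e. Pd's logical "now" for messages
   sent between calls; the app has to maintain this around its callback  */
extern double nextChunkHostTime;

/* called from the clock/Link thread when a beat should sound at
   beatHostTime (already latency-compensated upstream)                   */
void schedule_clock_tick(double beatHostTime)
{
    /* how far into the next multi-block chunk the beat should land */
    double delayMs = (beatHostTime - nextChunkHostTime) * 1000.0;
    if (delayMs < 0) delayMs = 0;   /* too late: fire at the boundary */

    /* patch side: [r sync-tick] -> [delay] -> whatever needs the clock.
       a float to [delay]'s left inlet sets the time and starts it, and
       [delay] runs on Pd's logical clock, so the bang lands on the right
       64-sample block inside the chunk instead of snapping to the chunk
       boundary                                                           */
    libpd_float("sync-tick", (float)delayMs);
}

With something like that in place, the leftover error should be at most one 64-sample block (about 1.5 ms at 44.1 kHz), unless you go further and use sample-accurate objects like [vline~] on the receiving end.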
On 11/11/15 14:45, i go bananas wrote:
But i don't think you properly see what my workaround does.
See, the problem is not with latency, per se...because the Link framework has that covered...so we can have quite a bit of latency,
Yes you're right, if they are doing latency compensation then the most important thing is consistent latency. In that case, doing a [pipe] or [delay] hack should hopefully work.
So, what you get is this system where messages are only processed in the audio thread every time that multi-block boundary is reached....just as messages in pd itself are just added to the audio thread on block boundaries.
Yep, I think that's right.
SyncJams is implemented in Pd itself so I don't have this issue - everything happens in the correct logical time without [pipe] hacks etc.
and as far as i know, all devices just sync to the slowest one.
The algorithm that SyncJams uses is quite simple:
- All nodes keep their own internal metronome (integer counter).
- All nodes broadcast their metronome counter value every tick.
- If a node ever receives a metronome value that is *higher* and *earlier* than its own metronome, it immediately jumps to the incoming metronome's phase + value. Higher & earlier is treated as always "more correct".
The effect of this is that all nodes sync to the fastest network ping time that is experienced during a session. So if the network has a general latency of 10ms but some packets sneak through in 3ms then the sync difference will be 3ms. The overall effect is that sync gets tighter over time monte-carlo-asymptotically towards the optimal possible latency given the hardware. It also means there is no master/slave setup and that the metronome that every node is syncing to is the virtual "consensus" metronome that emerges from the network rather than one specific node having the best or most correct clock.
Each node assumes it is not the best node, and thereby the group benefits during consensus.
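Purely to illustrate the rule (this is not the actual implementation, which is a Pd patch in the repository, and all the names here are invented), it boils down to something like this in C:

/* illustrative only: invented C version of the rule above */
typedef struct {
    long   tick;        /* integer metronome counter                     */
    double tick_time;   /* local clock time (seconds) at which it began  */
} node_t;

/* every tick interval the node advances its counter and broadcasts it */
void local_tick(node_t *n, double now)
{
    n->tick += 1;
    n->tick_time = now;
    /* broadcast n->tick to the LAN here (UDP, omitted) */
}

/* a broadcast carrying remote_tick arrives at local time `now`.
   if the sender has already reached a counter value we haven't, it got
   there earlier than we will ("higher and earlier"), so jump: adopt its
   value and treat `now` as the start of that tick                       */
void on_broadcast(node_t *n, long remote_tick, double now)
{
    if (remote_tick > n->tick) {
        n->tick      = remote_tick;
        n->tick_time = now;
    }
}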
Cheers,
Chris.
On Thu, 2015-11-12 at 09:56 +0800, Chris McCormick wrote:
The algorithm that SyncJams uses is quite simple:
- All nodes keep their own internal metronome (integer counter).
- All nodes broadcast their metronome counter value every tick.
- If a node ever receives a metronome value that is *higher* and
*earlier* than its own metronome, it immediately jumps to the incoming metronome's phase + value. Higher & earlier is treated as always "more correct".
The effect of this is that all nodes sync to the fastest network ping time that is experienced during a session. So if the network has a general latency of 10ms but some packets sneak through in 3ms then the sync difference will be 3ms. The overall effect is that sync gets tighter over time monte-carlo-asymptotically towards the optimal possible latency given the hardware. It also means there is no master/slave setup and that the metronome that every node is syncing to is the virtual "consensus" metronome that emerges from the network rather than one specific node having the best or most correct clock.
Each node assumes it is not the best node, and thereby the group benefits during consensus.
Cool design! Thanks for sharing the ideas behind it.
Roman
On 2015-11-11 02:18, i go bananas wrote:
We're trying to implement various sync options in an iOS libPd-based app, and have run across a noticeable drift in the timing. The app uses 8 'ticks' of 64 samples for faster devices, and 16 ticks for slower ones. Basically, what this means is that messages are only being processed on these larger multi-block boundaries of 512 or 1024 samples. And that's not good enough for keeping timing tight.
just a very dumb question: why don't you use a different *tick size* (512, 1024) instead of accumulating ticks?
dfmasdr IOhannes