Hi,
I am trying to record audio with [writesf~] on two computers such that the beginnings of the two recordings are in sync. I thought of using zexy's [time] object to access the computers' clocks, which are synced to network time. [time] on my Linux OS gives me milliseconds with three decimal places, which should mean microsecond resolution.
I tried polling [time] with a high-speed [metro 0.001] and comparing that time to a threshold value, but that maxes out CPU usage at 100%.
As a workaround I could somehow encode the start time of [writesf~] into its first sample and use a Python script to synchronise the recordings later, comparable to broadcast wave timestamps, which I think have hms:frame timing resolution.
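For the later alignment step, a rough (untested) Python sketch could look like the following, assuming the wall-clock start time of each recording is logged separately (e.g. to a sidecar text file when [writesf~] is started); the file names and times below are made up:

    import wave

    def trim_to_common_start(path_early, t_early, path_late, t_late, path_out):
        """Drop the head of the earlier file so both begin at the later start time."""
        with wave.open(path_early, "rb") as w:
            params = w.getparams()
            offset = round((t_late - t_early) * params.framerate)  # frames to skip
            w.readframes(offset)                                   # discard the head
            rest = w.readframes(params.nframes - offset)
        with wave.open(path_out, "wb") as out:
            out.setparams(params)            # header gets patched on close
            out.writeframes(rest)

    # start times in seconds since the epoch, as logged on each machine:
    trim_to_common_start("laptopA.wav", 1760600000.000,
                         "laptopB.wav", 1760600000.137,
                         "laptopA_aligned.wav")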
Is there a more elegant way of achieving this?
Thank you for all ideas! Peter
Hello,
On 16 Oct 2025, at 08:54, Peter P. peterparker@fastmail.com wrote:
Hi,
I am trying to record audio with [writesf~] on two computers such that the beginnings of the two recordings are in sync. I thought of using zexy's [time] object to access the computers' clocks, which are synced to network time.
is it through wifi or ethernet?
Pat'
Hello,
On 16 Oct 2025, at 08:54, Peter P. peterparker@fastmail.com wrote:
Hi,
I am trying to record audio with [writesf~] on two computers such that the beginnings of the two recordings are in sync. I thought of using zexy's [time] object to access the computers' clocks, which are synced to network time.
is it through wifi or ethernet?
Largely through wireless networks, Pat. I think laptop clocks do not need constant network access to NTP, but they will obviously start to drift.
best, P
On 10/16/25 08:54, Peter P. wrote:
As a workaround I could somehow encode the start time of [writesf~] into its first sample and use a Python script to synchronise the recordings later, comparable to broadcast wave timestamps, which I think have hms:frame timing resolution.
Is there a more elegant way of achieving this?
if you want to have sample-accurate resolution, this is a non-trivial task.
zexy's [time] will give you the current system time, but Pd uses an audio buffer, so the actual system time will typically not correspond (nor have a constant offset) to an imaginary timestamp attached to a "sample" as it leaves your soundcard.
we did something like synchronous recording on multiple devices in the wilma project a couple of years ago (https://wilma.kug.ac.at).
iirc, it involved special hardware that was synched via radio (not NTP over WiFi) and encoded the wall clock timestamps within a dedicated audio channel.
gfmasdr IOhannes
[...]
if you want to have sample-accurate resolution, this is a non-trivial task.
zexy's [time] will give you the current system time, but Pd uses an audio buffer, so the actual system time will typically not correspond (nor have a constant offset) to an imaginary timestamp attached to a "sample" as it leaves your soundcard.
Do I understand correctly that one sample from my ADCs will arrive at a random moment within the length of one buffer in Pd?
we did something like synchronous recording on multiple devices in the wilma project a couple of years ago (https://wilma.kug.ac.at).
iirc, it involved special hardware that was synched via radio (not NTP over WiFi) and encoded the wall clock timestamps within a dedicated audio channel.
Ah yes, WILMA!
In my case I want to go for the best possible resolution without a dedicated radio clock and with standard laptop hardware. Is banging [time] every microsecond still the best way, even if it maxes out my CPU?
Thanks a lot! Peter
Hello,
The easiest low-tech solution would be to have a dedicated loudspeaker in front of every microphone and send an audible "bang" simultaneously. cheers c
On 16/10/2025 at 10:18, Peter P. wrote:
- IOhannes m zmoelnig via Pd-list pd-list@lists.iem.at [2025-10-16 10:06]:
[...]
if you want to have sample-accurate resolution, this is a non-trivial task.
zexy's [time] will give you the current system time, but Pd uses an audio buffer, so the actual system time will typically not correspond (nor have a constant offset) to an imaginary timestamp attached to a "sample" as it leaves your soundcard.
Do I understand correctly that one sample from my ADCs will arrive at a random moment within the length of one buffer in Pd?
we did something like synchronous recording on multiple devices in the wilma project a couple of years ago (https://wilma.kug.ac.at).
iirc, it involved special hardware that was synched via radio (not NTP over WiFi) and encoded the wall clock timestamps within a dedicated audio channel.
Ah yes, WILMA!
In my case I want to go for the best possible resolution without a dedicated radio clock and with standard laptop hardware. Is banging [time] every microsecond still the best way, even if it maxes out my CPU?
Thanks a lot! Peter
Hello,
The easiest low-tech solution would be to have a dedicated loudspeaker in front of every microphone and send an audible "bang" simultaneously.
Thanks Cyrille! Yes, indeed. I am however trying to record at two rather distant locations, which would make it difficult to sync the two signals with an audible clap or "bang". Another option I have found is to record a terrestrial radio station, picked up on two analogue AM/FM radios, alongside both signals. However, I am wondering whether, with all the digital technology we have, there is a (Pd) solution. :)
best, P
On 10/16/25 10:18, Peter P. wrote:
Do I understand correctly that one sample from my ADCs will arrive at a random moment within the length of one buffer in Pd?
depending on the API you are using, more or less: yes. (with callbacks, you might get a less arbitrary time - but i haven't actually verified this).
of course the logical time (within Pd) takes care of this fluctuation. it's just that the system time (as reported by [time]) might have different ideas about "now".
In my case I want to go for the best possible resolution without a dedicated radio clock and with standard laptop hardware. Is banging [time] at every microsecond still the best way maxing my cpu?
there's really no point in querying the time every microsecond. you can only start recording with [writesf~] on block boundaries, so you might as well use [bang~].
a somewhat better approach might be to use OSC timestamps to start the recording synchronously.
recent versions of [packOSC] and [unpackOSC] (v0.3, available on deken) allow you¹ to use Pd's internal notion of time (rather than the system time). the time is synched to the system time (which should be synchronized with NTP) at startup (or manually via the "usepdtime" message).
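outside of Pd, the sending side of that "schedule a start via an OSC bundle time tag" idea could be sketched roughly like this in Python (untested; it uses the third-party python-osc package rather than [packOSC], and the address, port and host IPs are made up; the receiving patch still has to honour the bundle's time tag, e.g. by delaying until it is due):

    import time
    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.osc_bundle_builder import OscBundleBuilder
    from pythonosc.osc_message_builder import OscMessageBuilder

    start_at = time.time() + 2.0              # agreed wall-clock start, 2 s from now

    msg = OscMessageBuilder(address="/record/start")
    msg.add_arg(1)
    bundle = OscBundleBuilder(start_at)       # bundle time tag = scheduled start
    bundle.add_content(msg.build())

    for host in ("192.168.1.10", "192.168.1.11"):   # the two laptops
        SimpleUDPClient(host, 9000).send(bundle.build())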
gfmasdr IOhannes
¹ it's actually the default, but you can turn it off and use the system time.
[...]
there's really no point in querying the time every microsecond. you can only start recording with [writesf~] on block boundaries, so you might as well use [bang~].
Indeed, thanks for pointing this out!
I made a first abstraction using this approach (attached).
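The same "fire at an agreed wall-clock time" trigger could also come from outside Pd; here is an untested Python sketch, assuming the patch contains a [netreceive 3000] (port made up) whose output starts [writesf~], and that both machines run the script within the same minute:

    import math, socket, time

    start_at = math.ceil(time.time() / 60) * 60    # next full minute, same on both machines
    time.sleep(max(0.0, start_at - time.time()))

    # send a FUDI "start" message to [netreceive 3000] in the local patch
    with socket.create_connection(("127.0.0.1", 3000)) as s:
        s.sendall(b"start;\n")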
a somewhat better approach might be to use OSC timestamps to start the recording synchronously.
recent versions of [packOSC] and [unpackOSC] (v0.3, available on deken) allow you¹ to use Pd's internal notion of time (rather than the system time). the time is synched to the system time (which should be synchronized with NTP) at startup (or manually via the "usepdtime" message).
Just looked into its docs; it relies on an external as well, and I didn't find a straightforward way to make it work for my current application. But it is very good to know. :)
best, Peter
On 16 Oct 2025, at 08:54, Peter P. peterparker@fastmail.com wrote:
I tried polling [time] with a high-speed [metro 0.001] and comparing that time to a threshold value, but that maxes out CPU usage at 100%.
a workaround would be to create an external combining writesf~ and microsecond polling
A millisecond is likely too long. I think you'd probably want an OS-level high-frequency timer, think sending MIDI Clock sync. Those are likely to take more CPU, but should not hit 100%. Without knowing the details of the project, if it were me, I might try writing a simple C/C++ utility that runs a high-frequency timer and sends out timing beats with which you can periodically synchronize the whole system of clients over OSC... but then that starts to fall into work already done by others... what is the timing accuracy of Ableton Link, for example? There is an external for it.
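Something like this very rough Python stand-in (the real thing would be the C/C++ utility described above; hosts, port, and interval are made up) shows the shape of it: a master process broadcasts a beat number plus its wall-clock time, and each client measures its offset against that:

    import socket, struct, time

    CLIENTS = [("192.168.1.10", 9001), ("192.168.1.11", 9001)]   # made-up hosts
    INTERVAL = 0.1                                               # 10 beats per second

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    beat = 0
    next_tick = time.monotonic()
    while True:
        payload = struct.pack("!Qd", beat, time.time())   # beat count + sender wall clock
        for addr in CLIENTS:
            sock.sendto(payload, addr)
        beat += 1
        next_tick += INTERVAL
        time.sleep(max(0.0, next_tick - time.monotonic()))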
On Oct 16, 2025, at 10:47 AM, Patko nytkophilus colet.patrice@gmail.com wrote:
On 16 Oct 2025, at 08:54, Peter P. peterparker@fastmail.com wrote:
I tried polling [time] with a high-speed [metro 0.001] and comparing that time to a threshold value, but that maxes out CPU usage at 100%.
a workaround would be to create an external combining writesf~ and microsecond polling
Dan Wilcox danomatika.com robotcowboy.com
Hi Peter, I just wanted to share my project here, in case it helps: https://github.com/samesimilar/m5_soundfile/
This is an attempt to implement versions of readsf~ and writesf~ that can start/stop with sample-accurate (internal-to-Pd) timing, including in the 'past' via a buffer. This means that you should be able to schedule recording or playback at specific samples within blocks.
(I wrote these externals to enable more predictably synchronized looping on the Critter & Guitari '5 Moons' looper device.)