On 10/16/25 10:18, Peter P. wrote:
Do I understand correctly that one sample from my ADCs will arrive at a random moment within the length of one buffer in Pd?
depending on the API you are using, more or less: yes. (with callbacks, you might get a less arbitrary time - but i haven't actually verified this).
of course the logical time (within Pd) takes care of this fluctuation. it's just that the system time (as reported by [time]) might have different ideas about "now".
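you can watch this yourself by driving vanilla's [timer] (logical time) and [realtime] (system time) from the same kind of metro, something like:

  [metro 1000]            [metro 1000]
  |                       |
  [t b b]                 [t b b]
  |   |                   |   |
  [timer]                 [realtime]
  |                       |
  [print logical]         [print system]

(the right outlet of [t b b] fires first, querying the elapsed time; the left one then resets.) [timer] will print exactly 1000 every time, while [realtime] jitters around 1000 - that's the system clock's "different idea" about now.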
In my case I want to go for the best possible resolution without a dedicated radio clock and with standard laptop hardware. Is banging [time] every microsecond, maxing out my CPU, still the best way?
there's really no point in querying the time every microsecond. you can only start recording with [writesf~] on block boundaries, so you might as well use [bang~].
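a rough sketch ([time] here is zexy's, which reports the system clock when banged; /tmp/rec.wav is just an example filename):

  [bang~]                      <- one bang per dsp block (64 samples by default)
  |
  [time]                       <- reads the system clock once per block
  |
  [print blocktime]

  [adc~ 1]
  |  [open /tmp/rec.wav, start(
  |  |
  [writesf~]                   <- starts on the next block boundary anyway,
                                  no matter how often you polled the clock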
a somewhat better approach might be to use OSC timestamps to start the recording synchronously.
recent versions of [packOSC] and [unpackOSC] (v0.3, available on deken) allow you¹ to use Pd's internal notion of time (rather than the system time). this time is synced to the system time (which should be synchronized via NTP) at startup (or manually via the "usepdtime" message).
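roughly like this (untested, from memory of the mrpeach help-patches - check those for the exact message names, and for how the receiving end honours the timetag):

  [usepdtime 1(            <- the v0.3 default: timetags from Pd's logical time
  |
  |  [[(                   <- open a bundle
  |  |  [/record start(    <- add a message
  |  |  |  [](             <- close it; the bundle carries the timetag
  |  |  |  |
  [packOSC]
  |
  [udpsend]                <- after a [connect <host> <port>( message

  [udpreceive <port>]
  |
  [unpackOSC]
  |
  [routeOSC /record]
  |
  ...use this to [start( your [writesf~]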
gfmasdr IOhannes
¹ it's actually the default, but you can turn it off and use the system time.