Is there a way (or could there be made a way) to time how long it takes for PD to do its message processing when an event arrives?
In my case, I'm interested in measuring how long my Scheme externs take to run (which I fear is longer than is good), but of course, such a feature might be useful for anyone who runs a complex message process in an extern and wants to know whether they risk missing the next DSP deadline.
Just to be clear, I'm not talking about DSP processing time, but strictly the message passing time (which is intermittent, but might take a long time when it happens).
Regards
Larry Troxler
hi larry,
a quick and simple way is to create a "time" object that gets the current time (sec and msec) via gettimeofday(). it should be an "empty" object with just one inlet and one outlet that passes through everything that comes in. at the moment something comes in, get the time, and at the end of the chain place the same object to get the time when the processed data comes out. the difference of both times is a good approximation of the execution time needed (a minimal sketch of such an object is below). it's not that accurate, as it depends on system functions to get the actual time, but it works; i use the same approach in jMax to measure some things .... _THIS IS NOT USABLE FOR REAL PROFILING !!_ to make that clear. but it's enough to get a good idea of how much time is spent .....
greets,
chris
On Sunday, 2 December 2001 at 02:24, Larry Troxler wrote:
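A minimal sketch of the kind of pass-through object chris describes, assuming the standard Pd external API declared in m_pd.h; the class name "walltime" and the choice to report the timestamp with post() are inventions for this illustration:

/* walltime.c -- sketch of a pass-through object that prints the wall-clock
 * time (via gettimeofday()) whenever a message arrives.  Put one copy at the
 * start of a message chain and another at the end; the difference between
 * the two printed times approximates the chain's execution time. */

#include <sys/time.h>
#include "m_pd.h"

static t_class *walltime_class;

typedef struct _walltime
{
    t_object x_obj;
    t_outlet *x_out;
} t_walltime;

/* current wall-clock time in milliseconds */
static double walltime_now(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* catch any message, print the time, and pass the message through unchanged */
static void walltime_anything(t_walltime *x, t_symbol *s, int argc, t_atom *argv)
{
    post("walltime: %.3f ms", walltime_now());
    outlet_anything(x->x_out, s, argc, argv);
}

static void *walltime_new(void)
{
    t_walltime *x = (t_walltime *)pd_new(walltime_class);
    x->x_out = outlet_new(&x->x_obj, 0);
    return (void *)x;
}

void walltime_setup(void)
{
    walltime_class = class_new(gensym("walltime"), (t_newmethod)walltime_new,
        0, sizeof(t_walltime), CLASS_DEFAULT, 0);
    class_addanything(walltime_class, walltime_anything);
}

As the later replies point out, [realtime] and [cputime] already give much the same information without writing an external.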
hi, i once tried to figure out how much cpu load one single tilde-object uses. i put 100 (or 1000?) copies in a patch to be able to measure the value... maybe the only way to measure the time of a message process is to put 100 of them in a chain (which hopefully takes longer than the DSP deadline), and then you could use a realtime-object to measure it. but maybe it would be useful for you to start each message process with a new DSP block, e.g. to put a [delay 0] object at the beginning of the message line, which keeps an incoming event from cutting in too late? marius.
----- Original Message ----- From: "Larry Troxler" lt@westnet.com To: pd-list@iem.kug.ac.at Sent: Sunday, December 02, 2001 2:24 AM Subject: [PD] Timing the message processing?
hi,
Christian Klippel wrote: ...
a quick and simple way is to create a "time" object that gets the current time (sec and msec) via gettimeofday().
This is more or less what [realtime] does. There is also [cputime], which measures the Pd process's execution time plus the system time it uses.
marius schebella wrote: ...
i once tried to figure out how much cpu load one single tilde-object uses. i put 100 (or 1000?) copies in a patch to be able to measure the value...
Perhaps a more handy way is to use only one copy of an object to be tested, and use [metro] to measure how [realtime] and [timer] outputs drift apart ([timer] objects measure logical time, i.e. they give the number of dsp ticks multiplied by the logical number of msecs per dsp tick). The value to watch is then the properly scaled ratio (realtime - timer) / timer.
Krzysztof
sorry... forgot to add that in order to test message processing time one should use [until] to trigger multiple events in one dsp cycle.
K.
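To watch the same (realtime - timer) / timer ratio from inside an extern rather than in a patch, something like the following rough sketch could be banged from a [metro]; clock_getlogicaltime() and clock_gettimesince() are the m_pd.h calls for logical time, the wall-clock side again uses gettimeofday(), and the object name "loadratio" is made up for the example:

/* loadratio.c -- sketch: on each bang, report how far real (wall-clock) time
 * has drifted from Pd's logical time since the previous bang, i.e. the
 * (realtime - timer) / timer ratio described above, computed in an extern. */

#include <sys/time.h>
#include "m_pd.h"

static t_class *loadratio_class;

typedef struct _loadratio
{
    t_object x_obj;
    double x_lastlogical;   /* logical timestamp at the previous bang */
    double x_lastreal;      /* wall-clock time (ms) at the previous bang */
} t_loadratio;

/* current wall-clock time in milliseconds */
static double loadratio_realms(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

static void loadratio_bang(t_loadratio *x)
{
    double logical = clock_gettimesince(x->x_lastlogical);  /* elapsed logical ms */
    double real = loadratio_realms() - x->x_lastreal;       /* elapsed real ms */
    if (logical > 0)
        post("loadratio: logical %.1f ms, real %.1f ms, ratio %.3f",
            logical, real, (real - logical) / logical);
    x->x_lastlogical = clock_getlogicaltime();
    x->x_lastreal = loadratio_realms();
}

static void *loadratio_new(void)
{
    t_loadratio *x = (t_loadratio *)pd_new(loadratio_class);
    x->x_lastlogical = clock_getlogicaltime();
    x->x_lastreal = loadratio_realms();
    return (void *)x;
}

void loadratio_setup(void)
{
    loadratio_class = class_new(gensym("loadratio"), (t_newmethod)loadratio_new,
        0, sizeof(t_loadratio), CLASS_DEFAULT, 0);
    class_addbang(loadratio_class, loadratio_bang);
}

Roughly speaking, the closer the ratio stays to zero under load, the more headroom is left.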
Krzysztof Czaja wrote: ...
Perhaps a more handy way is to use only one copy of an object to be tested, and use [metro] to measure how [realtime] and [timer] outputs
Thanks, everyone who gave suggestions for this. I was asking because I thought there might already be a mechanism built into PD that times the message-processing phase. I guess not, but it's no biggie at all, for me at least, since in practice I think I can always identify the longest message-processing chain and snapshot the time at both ends.
What I didn't see mentioned, though, is that unlike benchmarking DSP computations, benchmarking event (non-signal) processing would really require some machinery to measure the _maximum_ processing time, and not just the average or a periodically updated snapshot, which _might_ be ok for checking out DSP times. This is because, unlike DSP algorithms, which most often don't vary by orders of magnitude, event processing time varies quite a lot - sometimes there is not much to be done, but sometimes a list might need to be sorted, multiple events output, etc.
I think for now, I will try to measure the event processing time in my application and take the max over a good long experimental run. If this time plus the average DSP processing time is less than what would result in a DAC overrun, I figure I'm ok. Of course, as I typed this I realised that I don't know offhand what that overrun time is - I don't think it's related to the DSP block size, but rather it would depend on the DAC buffer size determined by the command-line parameters - correct?
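For the snapshot-at-both-ends-and-keep-the-maximum approach described above, the bookkeeping inside an extern might look something like this minimal sketch; the helper names are hypothetical and the timing again relies on gettimeofday():

/* Sketch of tracking the worst-case message-processing time inside an extern.
 * Call msgtime_begin() at the start of a message handler (or before the
 * Scheme evaluation) and msgtime_end() at the end; a new worst case is
 * reported via post() whenever it occurs. */

#include <sys/time.h>
#include "m_pd.h"

static double msgtime_start;   /* timestamp at the start of the current handler */
static double msgtime_max;     /* worst case seen so far, in milliseconds */

/* current wall-clock time in milliseconds */
static double msgtime_now(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

static void msgtime_begin(void)
{
    msgtime_start = msgtime_now();
}

static void msgtime_end(void)
{
    double elapsed = msgtime_now() - msgtime_start;
    if (elapsed > msgtime_max)
    {
        msgtime_max = elapsed;
        post("new worst-case message time: %.3f ms", msgtime_max);
    }
}

Wrapping each message method of the extern in msgtime_begin()/msgtime_end() over a long run then yields a maximum to compare against whatever buffering headroom the audio setup allows.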
Regards
Larry Troxler