Thanks to everyone who gave suggestions for this. I was asking because I thought there might already be a mechanism built into PD that times the message processing phase. I guess not, but it's no biggie at all, for me at least, since in practice I think I can always know the longest message processing chain and snapshot the time at both ends.
What I didn't see mentioned, though, is that unlike benchmarking DSP computations, when benchmarking event (non-signal) processing it would be quite important to provide some machinery for measuring the _maximum_ processing time, and not just the average or a periodically updated snapshot, which _might_ be ok for checking out DSP times. This is because, unlike DSP algorithms, which most often don't vary by orders of magnitude, event processing time varies quite a lot - sometimes there is not much to be done, but sometimes a list might need to be sorted, multiple events output, etc.
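For what it's worth, something along these lines (untested, plain C, using POSIX clock_gettime rather than anything Pd-specific, and with process_event() standing in for whatever my real message chain does) is what I have in mind - snapshot at both ends and keep a running maximum:

    #include <stdio.h>
    #include <time.h>

    /* running maximum of observed event-processing times, in ms */
    static double max_event_ms = 0.0;

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
    }

    /* hypothetical stand-in for the actual message-processing chain */
    static void process_event(void)
    {
        /* ... sort lists, output events, whatever the patch needs ... */
    }

    static void timed_process_event(void)
    {
        double t0 = now_ms();
        process_event();
        double dt = now_ms() - t0;
        if (dt > max_event_ms)
        {
            max_event_ms = dt;
            fprintf(stderr, "new max event time: %f ms\n", dt);
        }
    }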
I think for now I will try to measure the event processing time in my application and take the max over a good long experimental run. If this time plus the average DSP processing time is less than what would cause a DAC overrun, I figure I'm ok. Of course, as I typed this I realised that I don't know offhand what that overrun time is - I don't think it's related to the DSP block size, but rather that it would depend on the DAC buffer size determined by the command line parameters - correct?
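Just to put rough numbers on it, assuming my guess about the buffer is right: at 44100 Hz a 64-sample DSP block is only about 1.45 ms, which would be a very tight deadline on its own, whereas a buffer of, say, 25 ms (just an example -audiobuf figure, not necessarily the default) would leave room for an occasional event-processing burst of 10-20 ms, provided the average DSP load stays low.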
Regards
Larry Troxler