does something like this exist? afaik not, but i think it would be useful to have a more or less objective and comparable method to measure how well a system is suited for running pd. there was a test patch for rjdj on the ipod/iphone which consisted simply of as many osc~ objects as the device could handle. that worked quite well for checking if a patch would run on the device or not, but i think it might not cover all possible properties of a system. i wonder what such a benchmark should include: a mixture of floating-point and integer computation, audio and event processing, filters, accessing tables, something else?
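for a start, something like the rjdj test could be generated with a small throwaway program. here is an untested sketch in c (the oscillator count and frequencies are just placeholders); it prints a patch with n [osc~] objects all summed into [dac~]:

    /* sketch: emit a pd patch with n [osc~] objects summed into [dac~],
       in the spirit of the rjdj stress test. the summed signal will clip,
       but that does not matter for measuring cpu load. */
    #include <stdio.h>

    int main(void)
    {
        int n = 100;                    /* placeholder: raise until dropouts */
        printf("#N canvas 0 0 450 300 12;\n");
        printf("#X obj 10 10 dac~;\n"); /* object index 0 */
        for (int i = 0; i < n; i++)
        {
            /* each osc~ gets object index i+1 */
            printf("#X obj 10 %d osc~ %d;\n", 40 + 25 * i, 100 + i);
            printf("#X connect %d 0 0 0;\n", i + 1); /* left dac~ inlet */
            printf("#X connect %d 0 0 1;\n", i + 1); /* right dac~ inlet */
        }
        return 0;
    }

redirect the output to a .pd file, open it with dsp on, and raise n until the audio starts to break up.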
see you then! martin
Hi Martin,
As it happens, I often use your patch chaosmonster1 as a Pure Data benchmark. Here's why:
- it runs with pd vanilla or extended
- it has a realistic mixture of dsp objects
Among other things, I used chaosmonster1 to benchmark pd in double precision, as shown in the table halfway down this page:
http://www.katjaas.nl/doubleprecision/doubleprecision.html
Another benchmark, done last February when the Raspberry Pi 2 had just come out: a Raspberry Pi B+ can run no more than one chaosmonster1 instance at the default samplerate, while a Raspberry Pi 2 can run five (!) instances, and my 1 GHz Core2Duo laptop can run eight.
Thanks! Katja
On 05/05/15 20:48, katja wrote:
- it runs with pd vanilla or extended
- it has a realistic mixture of dsp objects
but i think it lacks some message processing, and maybe memory access.
i have just tested 10 instances of chaosmonster on my desktop (core2 duo e8400, 3 ghz) and got about 33 percent cpu usage for the pd process (with gui, though).
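(for reference: on linux the cpu time of the pd process can be sampled from /proc. an untested sketch, assuming pd's pid is known; sample it twice and divide the difference by the elapsed wall time to get a percentage:)

    /* sketch: accumulated cpu time (user + system) of a process, read
       from /proc/<pid>/stat on linux. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    double cpu_seconds(int pid)
    {
        char path[64], buf[1024];
        snprintf(path, sizeof path, "/proc/%d/stat", pid);
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        if (!fgets(buf, sizeof buf, f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        char *p = strrchr(buf, ')');   /* skip the "(comm)" field */
        if (!p)
            return -1;
        long utime = 0, stime = 0;
        /* fields after ')': state ppid pgrp session tty tpgid flags
           minflt cminflt majflt cmajflt utime stime */
        sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %ld %ld",
               &utime, &stime);
        return (double)(utime + stime) / sysconf(_SC_CLK_TCK);
    }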
a Raspberry Pi B+ can run no more than one chaosmonster1 instance at the default samplerate, while a Raspberry Pi 2 can run five (!) instances, and my 1 GHz Core2Duo laptop can run eight.
i have not tested the pi2 myself yet, but that sounds very good. it might be a good platform for building some standalone pd-instruments...
see you then! martin
On Wed, May 6, 2015 at 11:03 AM, martin brinkmann mnb@martin-brinkmann.de wrote:
but i think it lacks some message processing, and maybe memory access.
Actually chaosmonster1 is heavy on memory access because of the feedback delay lines. But I just noticed [block~ 1] in the delay line subpatches, meaning the patch spends an unusual proportion of its time on function call overhead. Maybe the patch is not the most representative use case of pd for that reason.
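To illustrate the point with a toy model (this is not Pd's actual code): Pd calls each object's perform routine once per block, so under [block~ 1] that call overhead is paid per sample instead of per 64 samples:

    /* toy model of block-based dsp scheduling. with blocksize 64 the
       perform routine is called total/64 times; with blocksize 1, as
       under [block~ 1], it is called 'total' times, so the per-call
       overhead grows 64-fold while the dsp work stays the same. */
    static void perform(const float *in, float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * 0.5f;    /* stand-in for the real dsp */
    }

    void run_dsp(const float *in, float *out, int total, int blocksize)
    {
        for (int offset = 0; offset < total; offset += blocksize)
            perform(in + offset, out + offset, blocksize);
    }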
i have not tested the pi2 myself yet, but that sounds very good. it might be a good platform for building some standalone pd-instruments...
That is probably true. Standalone, and even portable or wearable. The RPi2 still has modest current consumption (below 300 mA), so it can run on a battery / powerbank.
Katja
On 06/05/15 11:37, katja wrote:
Actually chaosmonster1 is heavy on memory access because of the feedback delay lines.
yes, but maybe the delays are small enough to fit in the cache (if the cache is big enough), and defeating the cpu cache would make systems with small or big caches more comparable (i believe...)
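(i guess a pseudo-random walk through a table much bigger than any cache would do it. an untested sketch, the sizes are arbitrary guesses:)

    /* sketch: defeat the cpu cache by reading a large table at
       pseudo-random positions, so neither the cache nor the hardware
       prefetcher can help. */
    #include <stdlib.h>

    #define TABSIZE (1 << 24)         /* 16M floats = 64 MB */

    float cache_buster(void)
    {
        float *tab = malloc(TABSIZE * sizeof(float));
        if (!tab)
            return 0;
        for (long i = 0; i < TABSIZE; i++)
            tab[i] = (float)i;        /* touch every page for real */
        float acc = 0;
        unsigned idx = 12345;
        for (long i = 0; i < 10000000; i++) {
            idx = idx * 1664525u + 1013904223u;   /* simple lcg */
            acc += tab[idx & (TABSIZE - 1)];
        }
        free(tab);
        return acc;   /* return it so the loop isn't optimized away */
    }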
But I just noticed [block~ 1] in the delay line subpatches, meaning the patch spends an unusual proportion of its time on function call overhead. Maybe the patch is not the most representative use case of pd for that reason.
yes, throwing out the [block~ 1] makes some difference in cpu usage: 10 instances use about 24 instead of 33 percent here (and it is not that important for the sound). on the other hand, i think reblocking is quite common, so the 'pd benchmark patch' should contain a flanger or the like...
see you then! martin
Hi,
On 05/05/2015 18:12, martin brinkmann wrote:
does something like this exist? afaik not, but i think it would be useful to have a more or less objective and comparable method to measure how well a system is suited for running pd. there was a test patch for rjdj on the ipod/iphone which consisted simply of as many osc~ objects as the device could handle. that worked quite well for checking if a patch would run on the device or not, but i think it might not cover all possible properties of a system.
One problem with (totally un-scientific) benchmarking I've seen on Linux (on laptops and with Jack Audio) is that there are a few factors such as cpu scaling, wifi on/off, swappiness... and of course the type of soundcard used, i.e. all the 'audio on linux' stuff, which can influence performance. I'm talking here mostly about 'audio benchmarking' more than CPU etc., which means for instance how low a latency you can get with a rather CPU-intensive patch without (too many) xruns etc.
With heavy patches I have also noticed dramatic performance differences with different gui activity: e.g. the more number boxes, sliders etc. are 'continuously' updated (on the order of milliseconds), the worse the performance gets. Very hard to benchmark though, because there are many factors.
Add GEM (and video cards, drivers.. ) and 'benchmarking' probably becomes a sort of black magic.
This doesn't really answer the question, but I thought it would be useful to throw in some additional complexity :)
Lorenzo.
On 05/11/2015 10:48 AM, Lorenzo Sutton wrote:
One problem with (totally un-scientific) benchmarking I've seen on Linux (on laptops and with Jack Audio) is that there are a few factors such as cpu scaling, wifi on/off, swappiness...
i'm wondering about swappiness... if your system does start to swap during a performance, then you are f*ed anyhow.
but yes, there are some easy-to-fix (as in "fixate") parameters that should be mentioned when doing anything "benchmark"-like.
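e.g. pinning the cpufreq governor before a run. a linux-only sketch (needs root, and the sysfs path may differ between kernels/distros; repeat for every cpu):

    /* sketch: pin cpu0's frequency governor to "performance" via sysfs
       so cpu scaling does not distort the measurement. */
    #include <stdio.h>

    int set_governor(const char *gov)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%s\n", gov);
        fclose(f);
        return 0;
    }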
Add GEM (and video cards, drivers.. ) and 'benchmarking' probably becomes a sort of black magic.
no, it's not black magic; it simply does not make much sense.
it's plain impossible to design a benchmark that yields a single comparable number that can be applied to all use cases.
if we want to do proper benchmarking, then we need a set of patches that tests for different aspects of your system.
it's also hard to design a benchmark that tests (say) multichannel audio I/O (i'm imagining something like >64 channels) and that should still provide meaningful results on a stereo system.
gfmards IOhannes