I have a patch of medium complexity, with a handful of instruments~ and a bunch of sequencing and arranging-type message handling. On my speedy Intel laptop it has no problem and barely notches the CPU usage. However, when I run this patch on my teeny Geode-based UMPC it pegs CPU at 100%.
I'm pretty sure this is a denormal issue. There are a grand total of maybe 5 noise~, 5 osc~, 10 vline~, 5 lop~, and 1 delay line in the whole patch and not much else besides message processing... I wouldn't guess this to run me out of compute power.
Any hints on how to isolate where the denormals might be popping up? I have looked for signal processing loops, and the only ones I create are around the delay line (feedback) and, I suppose, in the IIR implementation of lop~.
Any help appreciated, Bill Gribble
A useful debug trick, to make sure denormals are the problem, is to inject a wee bit of noise into the path and see if it speeds up. My experience of them in the past is that they take a while to show up, maybe many seconds or even minutes after the patch seems silent (rather than being present under quiescent conditions).
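To make the trick concrete, here is a rough C analogue (an illustrative sketch, not anything taken from Pd's sources): a feedback path decaying toward silence eventually lands in the denormal range, and mixing in even a vanishingly small constant or noise floor keeps it out.

#include <stdio.h>
#include <math.h>

int main(void)
{
    float fb = 1.0f;       /* state of a feedback path, e.g. a recirculating delay */
    float leak = 0.0f;     /* change to 1e-20f to apply the noise/offset trick */
    long denormals = 0;

    for (long i = 0; i < 100000; i++)
    {
        fb = fb * 0.995f + leak;               /* decay toward silence */
        if (fpclassify(fb) == FP_SUBNORMAL)    /* i.e. a denormal */
            denormals++;
    }
    printf("denormal samples: %ld\n", denormals);
    return 0;
}

With leak left at 0.0f the state spends a few thousand samples in the subnormal range before underflowing to zero; with leak set to 1e-20f it settles well above FLT_MIN and the count drops to zero.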
i wouldn't expect a lot from a geode machine ;))
pardon, but what does the word 'denormals' mean? ...never heard it
On Sun, 2008-11-02 at 22:07 +0000, errordeveloper@gmail.com wrote:
pardon, but what does the word 'denormals' mean? ...never heard it
http://en.wikipedia.org/wiki/Denormal
roman
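To put the Wikipedia definition in concrete terms (a minimal standalone example, not something from the thread): a denormal, or subnormal, is a float smaller in magnitude than FLT_MIN that the hardware still represents rather than rounding to zero, and many x86 CPUs handle these on a much slower path, which is why a patch that has gone silent can suddenly eat the CPU.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    float normal = FLT_MIN;          /* smallest normal float, about 1.18e-38 */
    float tiny   = FLT_MIN / 8.0f;   /* below FLT_MIN, stored as a subnormal */

    printf("%g -> %s\n", normal, fpclassify(normal) == FP_SUBNORMAL ? "subnormal" : "normal");
    printf("%g -> %s\n", tiny,   fpclassify(tiny)   == FP_SUBNORMAL ? "subnormal" : "normal");
    return 0;
}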
Hi all,
Looking into this once again (I've had this problem for 10 years or more now) I just found out that gcc has a -ffast-math flag that prevents denormals from slowing the code down, as long as the CPU has SSE instructions. I don't know if the geode does or not, though!
On Linux, at any rate, you can type CFLAGS="-ffast-math -O6" ./configure at the appropriate moment when compiling Pd. I'm not sure how this will spin out in Windows, though.
Pd's code is shot through with special tests that try to catch floating point operations before they produce denormals, but apparently I haven't found every possible way they can come up.
cheers Miller
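The kind of test Miller describes above usually looks something like the following. This is a from-scratch sketch of the general idea (inspect the exponent bits of a float and flush suspiciously small values before they re-enter a feedback path), not Pd's actual macro or function names; feedback_tick is a hypothetical caller.

#include <stdint.h>
#include <string.h>

/* flush zero-exponent floats (zero or subnormal) back to exactly zero */
static inline float flush_if_denormal(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);        /* portable type pun */
    if ((bits & 0x7f800000u) == 0)         /* exponent field all zero */
        return 0.0f;
    return f;
}

/* hypothetical use in a recirculating delay's inner loop */
static float feedback_tick(float *state, float in, float fbgain)
{
    *state = flush_if_denormal(in + fbgain * *state);
    return *state;
}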
according to wikipedia not all geode processors support sse instructions ... the sse unit can be configured to handle denormals as zero, by setting the MXCSR control register, no need to rely on specific compiler flags ... iirc, i added the specific code to the devel_0_37 a few years ago ...
tim
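Setting the MXCSR bits Tim mentions can look like this. It is a sketch assuming an SSE-capable CPU, not the code from the devel_0_37 branch; note that MXCSR is per-thread, so it has to be set from the thread that actually runs the DSP, and it only affects SSE arithmetic, not the x87 unit.

#include <xmmintrin.h>   /* _mm_getcsr / _mm_setcsr */

/* turn on flush-to-zero and denormals-are-zero in the SSE control register */
static void enable_ftz_daz(void)
{
    unsigned int csr = _mm_getcsr();
    csr |= 0x8000;   /* FTZ: denormal results are replaced with zero */
    csr |= 0x0040;   /* DAZ: denormal inputs treated as zero (absent on the earliest SSE chips) */
    _mm_setcsr(csr);
}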
Yep, I remember trying the MXCSR thing once and it not working, but I forget what processor it was on. If indeed it's equivalent to using this gcc flag, I'm happier using the compiler flag because it keeps the code cleaner.
OTOH, it sounds like non-SSE processors will always need code to check things manually.
cheers Miller