Hi all
Lately I have been asking myself whether some of my own patching practices regarding performance optimization are actually justified or based on wrong beliefs.
I often use [*~ ] as an on/off signal gate, and I started to wonder about using an object that performs a relatively complex task (multiplying two floating-point numbers) for such a simple job. I imagined that an object which either outputs a copy of the input or outputs zeros would be a cheaper on/off signal gate than [*~ ]. So I created an abstraction containing this:
[inlet~]  [inlet]
 |         |
 |        [switch~ ]
 |
[outlet~]
Let's call this abstraction [gate~ ]. It turned out to work as expected. But is [gate~] really cheaper than [*~ ]? I tested this by connecting lots of [gate~]s into a chain and measuring the CPU usage. For simplicity, let's use an arbitrary made-up unit for the CPU time (ct) consumed by one object. It turned out that [gate~] uses 0.52ct when it is on and 0.4ct when it is off. But how much does [*~ ] use? No matter whether it is turned on or off, [*~ ] uses a stable 0.39ct.
The relatively complex multiplication is _not_ more expensive than the on/off implementation with [switch~ ]. Even when turned off, the [switch~] approach is still more expensive.
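To get a feeling for why the multiplication isn't the expensive part, here is a rough standalone C sketch of the per-block work the two approaches have to do (this has nothing to do with Pd's actual source, the names are made up). Both versions touch every sample of a 64-sample block once, so there is no reason to expect the copy/zero version to be cheaper:

#include <stdio.h>
#include <string.h>

#define BLOCK 64

/* roughly what a signal multiplication does per block */
static void multiply_block(const float *in1, const float *in2, float *out) {
    for (int i = 0; i < BLOCK; i++)
        out[i] = in1[i] * in2[i];    /* one load pair, one multiply, one store per sample */
}

/* roughly what the [switch~]-based gate does: copy when on, zeros when off */
static void gate_block(const float *in, float *out, int on) {
    if (on)
        memcpy(out, in, BLOCK * sizeof(float));    /* still one load + one store per sample */
    else
        memset(out, 0, BLOCK * sizeof(float));
}

int main(void) {
    float a[BLOCK], b[BLOCK], out[BLOCK];
    for (int i = 0; i < BLOCK; i++) { a[i] = 0.5f; b[i] = 0.25f; }
    multiply_block(a, b, out);
    printf("multiply: %f\n", out[0]);
    gate_block(a, out, 0);
    printf("gate off: %f\n", out[0]);
    return 0;
}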
But the really interesting finding comes now. [*~ 0] uses only 0.2ct, which is almost half the ct value of a plain [*~ ]! It also doesn't matter what value the argument has; the mere fact of specifying an argument makes [*~ ] a lot cheaper. I also tested [/~ ], [+~ ] and [-~ ], and the same applies to those: they all use 0.39ct without an argument and only 0.2ct with an argument specified. Depending on the kind of patch, this allows for quite a significant performance improvement.
I also measured the ct of a [*~ ] with a signal wire connected to the right inlet. It costs exactly as much (0.39ct) as sending messages to the right inlet of a [*~ ] without an argument.
My interpretation is that [*~ 0] and [*~ ] are two different objects. The latter always performs a calculation with two signals and implicitly converts a message on the right inlet to a signal, whereas the former really only deals with messages on the right inlet (and is thus cheaper).
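If that interpretation is right, the difference would look roughly like this in C (again just a sketch of my mental model, not Pd's real code; I believe the actual perform routines live in d_arithmetic.c):

#include <stdio.h>

/* [*~ ] without argument: the right inlet is a full signal (messages get
   converted to a constant signal), so two input blocks are read per tick */
static void times_signal_perform(const float *in1, const float *in2,
                                 float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in1[i] * in2[i];
}

/* [*~ 0]: the right operand is a single stored float that messages simply
   overwrite, so only one input block is read and nothing gets converted */
static void times_scalar_perform(const float *in, float scalar,
                                 float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i] * scalar;
}

int main(void)
{
    float in1[4] = {1, 2, 3, 4}, in2[4] = {0.5f, 0.5f, 0.5f, 0.5f}, out[4];
    times_signal_perform(in1, in2, out, 4);
    printf("signal*signal: %g\n", out[0]);
    times_scalar_perform(in1, 0.5f, out, 4);
    printf("signal*scalar: %g\n", out[0]);
    return 0;
}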
On a completely different note, I wanted to know whether it costs anything to have signals entering and leaving subpatches and abstractions a lot, in other words whether [inlet~] and [outlet~] add any CPU overhead. I chained tens of thousands of subpatches together, and it seems that this does not consume any additional CPU time at all; the values for [inlet~ ] and [outlet~] are far below 0.01ct. I haven't tested the cost when more than one signal wire goes into an [inlet~], though.
Roman