One other question: would you accept patches for Pd Vanilla that make it _possible_ to compile with t_float at double precision (something Pd Vanilla cannot currently do)?  That would give Pd Vanilla users the option to build a double-precision version if they wish, which IIUC is the whole point of having t_float in the first place.  (Plus Vanilla users would get the small performance increase in the relevant tilde classes.)

You'd still compile, distribute, and support Vanilla with t_float at single precision, and the same would go for external developers.
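
For concreteness, here's a minimal sketch of what such a patch might look like in m_pd.h.  The PD_FLOATSIZE macro name is just a placeholder for illustration -- Vanilla doesn't currently define anything like it:

    /* hypothetical compile-time precision switch; the macro name
       PD_FLOATSIZE is illustrative, not existing Vanilla API */
    #ifndef PD_FLOATSIZE
    #define PD_FLOATSIZE 32          /* default: single precision, as today */
    #endif

    #if PD_FLOATSIZE == 32
    typedef float t_float;           /* message/parameter float type */
    typedef float t_sample;          /* audio sample type */
    #elif PD_FLOATSIZE == 64
    typedef double t_float;
    typedef double t_sample;
    #else
    #error "PD_FLOATSIZE must be 32 or 64"
    #endif

A stock build would then stay single-precision, and anyone who wants the double-precision variant would just pass -DPD_FLOATSIZE=64 when compiling Pd and their externals.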

-Jonathan
On Monday, February 2, 2015 11:49 AM, Miller Puckette <msp@ucsd.edu> wrote:


What I've heard is that the 64-bit instruction set has wider bit fields
for specifying registers, so that you can have many more of them.  (The
386 had eight general-purpose registers, only a handful of them freely
usable; the 64-bit machines have sixteen, plus sixteen more vector
registers, depending how you count.)  So one saves steps reading from
and writing to memory.
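
An easy way to see the register effect is to compile a small inner loop for both targets and compare how the compiler allocates registers.  The function below is just a toy example, nothing from Pd:

    /* toy inner loop: compile with "gcc -O2 -S dot.c" and with
       "gcc -O2 -m32 -S dot.c" and compare the register usage in
       the two assembly listings */
    float dot(const float *a, const float *b, int n)
    {
        float acc = 0;
        int i;
        for (i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }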

OTOH, since all pointers have to be 64 bits, one uses more memory overall,
perhaps by a factor of 1.5 or so.  Given that memory is "the main bottleneck"
most of the time, I don't see how that could possibly be consistent with
64-bit architectures being faster.  So basically I don't understand what's
really going on.
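
One way to see where that factor comes from: anything built mostly out of pointers roughly doubles in size, while structures that mix pointers with ints or floats grow less, so the overall footprint lands somewhere in between.  A made-up example (not Pd's actual data structures):

    /* made-up list node, roughly the shape of many interpreter objects:
       12 bytes on a 32-bit build (4+4+4), 24 bytes on a 64-bit build
       (8+8+4, padded to a multiple of 8 for pointer alignment) */
    #include <stdio.h>

    struct node {
        struct node *next;
        void *payload;
        int tag;
    };

    int main(void)
    {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }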

cheers
Miller

On Mon, Feb 02, 2015 at 04:25:18PM +0000, Jonathan Wilkes wrote:
> Hi Miller,
> What do you think is causing that performance increase on the version of Pd that is compiled for the 64-bit architecture?
> -Jonathan
>