Years ago I found that 4096 was the right size for the cosine tables with
linear interpolation in my externals, if I didn't want to hear any audible
artifacts (usually). It was a while ago and I don't exactly remember what
audio tests I did, though.
Anyway, +1 for 4096.
-seb
Date: Thu, 6 Jun 2024 10:25:50 -0500
From: Miller Puckette <mpuckette@cloud.ucsd.edu>
To: Matt Barber <brbrofsvl@gmail.com>, Alexandre Torres Porres
<porres@gmail.com>
Cc: Christof Ressi <info@christofressi.com>, Miller Puckette
<msp@ucsd.edu>, pd-dev@lists.iem.at
Subject: Re: [PD-dev] 64-bit (and/or ARM) Pd change default cosine
table?
Message-ID: <44fd2d44-3601-4c1a-8e88-44516df22797@cloud.ucsd.edu>
Content-Type: text/plain; charset=UTF-8; format=flowed
Well, as far as I can tell making the table "symmetric" won't matter at
all since, for instance, 0.1 and 0.9 won't give the same lookup values
anyway because they can't themselves be represented exactly and will be
truncated differently (0.1 will be more accurately represented than
0.9). On the other hand, values like 0.25 or -0.5 can be represented
exactly, so it might be worthwhile to bash true 1s, -1s, and 0s in where
they belong in the table.
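A minimal standalone sketch of that idea (assuming a 2048-point table
with one guard point for interpolation; an illustration, not Pd's
actual table code):

#include <math.h>

#define COSTABSIZE 2048  /* assumed size; Pd's current default */

/* Fill in double precision, then force exact values at the quadrant
   points, where the true cosine is exactly 1, 0, -1, or 0. */
static void fill_costable(float *tab)  /* tab has COSTABSIZE + 1 points */
{
    int i;
    for (i = 0; i <= COSTABSIZE; i++)
        tab[i] = (float)cos((2.0 * M_PI * i) / COSTABSIZE);
    tab[0] = 1.0f;                    /* cos(0) */
    tab[COSTABSIZE / 4] = 0.0f;       /* cos(pi/2) */
    tab[COSTABSIZE / 2] = -1.0f;      /* cos(pi) */
    tab[3 * COSTABSIZE / 4] = 0.0f;   /* cos(3pi/2) */
    tab[COSTABSIZE] = 1.0f;           /* cos(2pi), guard point */
}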
Hearing that Max defaults to a ridiculously big table makes me wonder
though... first, is 2048 really enough (and at what point is there a
real performance penalty for bigger tables). And: not for this release
but later perhaps, should 64-bit Pd use a bigger table?
As I figure it, the 2048-point table differs from the true cosine,
absolute worst case, by (2pi/2048)^2 / 8, or about 2^(-19.7) - i.e., 19.7
bit accuracy. But the error is dominated by an amplitude change (the
best-matching cosine to the line-segment approximation has amplitude
less than 1). Accounting for that and taking RMS error instead of
worst-case gives an error estimate 2.7 bits more optimistic: 22.4 bits,
which is close to the accuracy of a 32-bit number.
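A quick standalone check of that arithmetic (for 2048 points it prints
about 1.18e-06, i.e. 2^-19.7; 4096 gives 2^-21.7):

#include <math.h>
#include <stdio.h>

int main(void)
{
    int n;
    for (n = 2048; n <= 16384; n *= 2)
    {
        double h = 2.0 * M_PI / n;  /* table spacing in radians */
        double err = h * h / 8.0;   /* worst-case linear-interpolation
                                       error, since |cos''| <= 1 */
        printf("%6d points: worst case %.3g = 2^%.1f\n",
            n, err, log2(err));
    }
    return 0;
}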
I don't have my RPI3 handy (I'm on the road) but I'm now wondering if
the default shouldn't be 4096, which would give us an additional 2 bits
of goodness. Any opinions?
cheers
M
On 6/5/24 9:35 PM, Matt Barber wrote:
A couple of things:
- I'm pretty sure any error in cos at pi and 2pi will be smaller in
double precision than float's epsilon, so I don't think there's any
need to set -1.0 and 1.0 explicitly after all, except to be extra
safe. However, at pi/2 and 3pi/2 the error is still larger than the
minimum normal number, so it is worth setting the zero crossings to
0.0 (quick check below).
- For garray_dofo() there isn't a great way of using an explicit 0.0
at the zero crossings without incurring an extra check, like not
adding to the sum if the absolute value is less than e.g. 1.0e-10.
For this, probably just using M_PI and incrementing an integer phase
as for the cosine table is enough (see the sketch after the check
below).
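A quick way to check those magnitudes (a standalone illustration, not
Pd code):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* cos(M_PI) rounds to exactly -1.0 in double, so its error is far
       below FLT_EPSILON; cos(M_PI/2) is about 6.1e-17: tiny, but still
       well above the minimum normal float FLT_MIN. */
    printf("cos(M_PI)   = %.17g\n", cos(M_PI));
    printf("cos(M_PI/2) = %.17g\n", cos(M_PI / 2));
    printf("FLT_EPSILON = %g, FLT_MIN = %g\n",
        (double)FLT_EPSILON, (double)FLT_MIN);
    return 0;
}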
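And a minimal sketch of the integer-phase idea for a sinesum-style
fill (the function name, signature, and arguments here are assumptions
for illustration, not Pd's actual garray_dofo()):

#include <math.h>

/* Derive the phase from the integer sample index each time instead of
   accumulating a floating-point increment, so the phase carries no
   accumulated rounding error. */
static void sinesum_fill(float *vec, int npoints,
    const double *amps, int namps)
{
    int i, j;
    for (i = 0; i < npoints; i++)
    {
        double sum = 0, phase = (2.0 * M_PI * i) / npoints;
        for (j = 0; j < namps; j++)
            sum += amps[j] * sin((j + 1) * phase);
        vec[i] = (float)sum;
    }
}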
MB
On Wed, Jun 5, 2024 at 2:20 PM Alexandre Torres Porres
<porres@gmail.com> wrote:
On Wed, Jun 5, 2024 at 2:31 PM Matt Barber
<brbrofsvl@gmail.com> wrote:
While we're at it, I think it would be worth tuning
garray_dofo() to use the same so that sinesum and
cosinesum have the same level of accuracy, guarantees of
symmetry, etc.
MB
Good catch! In fact, I think this is a great opportunity to also
fix this bug: https://github.com/pure-data/pure-data/issues/371
which is totally related. I just reopened
https://github.com/pure-data/pure-data/issues/105
as well, since I'm still considering whether the table could/should
be made "perfectly symmetric" with respect to the zero crossings and
the start/end points.
On Wed, Jun 5, 2024 at 12:52 PM Alexandre Torres Porres
<porres@gmail.com> wrote:
For the record and sake of comparison, Cyclone uses
a 16384-point table with linear interpolation, calculated
in double precision. We did this because Max documents
that it uses such a table, and we made it (well, Matt did)
symmetric.
I see Pd is doing kind of the same, huh? Linear
interpolation on a table calculated in double precision.
I see SuperCollider mentions it uses 8192 points and
linear interpolation in its oscillator.
I guess Max is exaggerating its table size a bit :) but I
wonder why Pd still uses a relatively smaller table size.
I'm curious how much better resolution a bigger table
actually offers and how much it hurts performance. For
instance, I'm using the same size as Cyclone in the ELSE
oscillators; could I reduce it to 8192 points, or even
fewer, down to Pd's 2048, worry free?
Thanks
On Wed, Jun 5, 2024 at 1:28 PM Alexandre Torres
Porres <porres@gmail.com> wrote:
Nice one Matt!
On Wed, Jun 5, 2024 at 8:13 AM Christof Ressi
<info@christofressi.com> wrote:
@Miller: what do you think? IMO we should
make the cos table as good as we can, so we
won't have any regrets :)
+1000!!!
------------------------------
Message: 2
Date: Thu, 6 Jun 2024 12:51:45 -0400
From: Matt Barber <brbrofsvl@gmail.com>
To: Miller Puckette <mpuckette@cloud.ucsd.edu>
Cc: Alexandre Torres Porres <porres@gmail.com>, Christof Ressi
<info@christofressi.com>, Miller Puckette <msp@ucsd.edu>,
pd-dev@lists.iem.at
Subject: Re: [PD-dev] 64-bit (and/or ARM) Pd change default cosine
table?
Message-ID:
<CAOrke7GRQ+j7_HV5mSz1hXFqh+BpHaAOZtppFaXM2gN7j8mcyA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Since cos~ wraps, one could theoretically take advantage of the equal
distribution of float values between 1.0 and 2.0.
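For instance (a sketch of that trick with 32-bit floats; I believe Pd's
actual cos~ does something similar in double precision, and TABLEBITS
here is an assumption):

#include <stdint.h>
#include <string.h>

#define TABLEBITS 11   /* 2^11 = 2048-point table, assumed */

/* A phase in [0, 1) plus 1.0 lands in [1, 2), where all floats share
   one exponent and the 23 mantissa bits are uniformly spaced: the top
   TABLEBITS of them are the table index, the rest the interpolation
   fraction. */
static void split_phase(float phase01, int *index, float *frac)
{
    float f = phase01 + 1.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    *index = (bits >> (23 - TABLEBITS)) & ((1 << TABLEBITS) - 1);
    *frac = (bits & ((1u << (23 - TABLEBITS)) - 1))
        / (float)(1 << (23 - TABLEBITS));
}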
Profiling a larger table would be useful; I prefer accuracy over
performance in general, but I wonder where the performance hit would
come from, outside of unpredictable cache misses.
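One crude way to probe that (a standalone timing sketch, nothing
rigorous; sizes and iteration counts are arbitrary):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Time linear-interpolated lookups at several table sizes to see where
   cache misses start to bite. 'sink' keeps the loop from being
   optimized away. */
static volatile float sink;

static double bench(int n, long iters)
{
    float *tab = malloc((n + 1) * sizeof *tab);
    float acc = 0;
    double phase = 0, inc = 0.1234567;
    long i;
    clock_t t0;
    for (i = 0; i <= n; i++)
        tab[i] = (float)cos(2.0 * M_PI * i / n);
    t0 = clock();
    for (i = 0; i < iters; i++)
    {
        double fp = phase * n;
        int idx = (int)fp;
        float frac = (float)(fp - idx);
        acc += tab[idx] + frac * (tab[idx + 1] - tab[idx]);
        phase += inc;
        if (phase >= 1) phase -= 1;
    }
    sink = acc;
    free(tab);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int n;
    for (n = 2048; n <= 16384; n *= 2)
        printf("%6d points: %.3f s\n", n, bench(n, 50000000L));
    return 0;
}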