OK, so in a few years every new PC will probably have an ARM or other RISC architecture.
I just made the interesting discovery that, on Mac ARMs, there is a difference, probably a slight decrease, in numerical accuracy in Pd's DSP objects. So we've lost exact compatibility and only have pretty-good-approximate compatibility. On one piece I tested, the divergence is about -100 dB relative to maximum amplitude; in another, which I think has unstable feedback paths, the results are further off.
SO... I can now relax my insistence on exact back-compatibility for osc~ and cos~ and ... make them more accurate! I think I should do this for 0.55.
Now for the questions:
1. Unfortunately COSTABSIZE (512) is declared in m_pd.h. Can I change this value (conditionally, increasing it for 64-bit Pds and for some or all ARM architectures) without breaking externals that might use the built-in cosine table? (A sketch of what this could look like follows the list.)
2. Should I make the table size variable, either by a new [declare] flag or by passing a flag to osc~ and cos~? This could affect run time - I'd want to investigate that.
3. Alternatively, should I just leave COSTABSIZE at 512 for specific architectures in non-double builds (Intel for compatibility, and possibly RPi if there turns out to be a bad performance hit)? I'd choose a new size after doing some profiling, because at some point increasing the table size will lead to bad cache behavior. (Historical note: the number 512 gives a 4096-point table, which is the memory page size of the Intel i860, beyond which performance dropped by a factor of 10 or more. Recently I tested a 2048-point table, which is 36 dB lower-noise, on a bog-standard Intel Linux machine and... saw no penalty at all.)
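For illustration, here is a minimal sketch of what question 1 envisions in m_pd.h. The specific conditions and the value 2048 are placeholders; PD_FLOATSIZE is the existing macro giving the float width:

```
/* hypothetical sketch only: enlarge the table for double-precision Pd
   and 64-bit ARM, keep the historical 512 elsewhere */
#if PD_FLOATSIZE == 64 || defined(__aarch64__) || defined(_M_ARM64)
#define COSTABSIZE 2048
#else
#define COSTABSIZE 512    /* historical size, kept for compatibility */
#endif
```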
I'm afraid it will take me (and whoever else is interested in this) some time to figure everything out. But I think at 0.55 we're still at a point where we don't have to be extremely strict about numerical reproducibility on ARM/Macintosh or on Pd-double, and so this seems a good time to attack this. It's been a long-standing problem.
cheers
Miller
Maybe you should also make sqrt~ call the true function while you're at it?
References: https://github.com/pure-data/pure-data/issues/1906 and https://github.com/pure-data/pddp/issues/125#issuecomment-1353459554
cheers
I'd agree that the costable size could be increased. I think we can leave it constant, possibly configurable at compile time. At most, I would make it configurable via a command line argument. But this can always be done as a later step.
If we want to keep exposing the cos table to externals, we should export a dedicated function to get the table size at runtime, e.g. cos_table_size(), so we are free to change the size without breaking existing externals. If we decide to keep a fixed table size (for now), COSTABSIZE should be reserved for internal use (as an optimization).
Are there any existing externals that do use cos_table? When in doubt, we could keep the old cos_table around, but deprecate it. In the future we can remove the cos_table symbol, so old externals simply won't load.
For example:
```
<m_pd.h>

#ifdef PD_INTERNAL
#define LOGCOSTABSIZE 9
#define COSTABSIZE (1<<LOGCOSTABSIZE)
#endif

/* the cos table */
EXTERN float *pd_cos_table;

/* get the cos table size at runtime; always a power of two! */
EXTERN int cos_table_size(void);

/* old cos table for backwards compatibility with old externals; do not use! */
PD_DEPRECATED EXTERN float *cos_table;
```
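To illustrate the intent, a hypothetical external could then do its table lookups like this (pd_cos_table and cos_table_size() are the names proposed above, not an existing API):

```
/* hypothetical external code using the proposed accessor
   instead of a hard-coded COSTABSIZE */
static t_sample lookup_cos(double phase)    /* phase in [0, 1) */
{
    const float *tab = pd_cos_table;
    int n = cos_table_size();        /* always a power of two */
    double findex = phase * n;
    int idx = (int)findex;
    double frac = findex - idx;
    float f1 = tab[idx], f2 = tab[(idx + 1) & (n - 1)];
    return f1 + frac * (f2 - f1);    /* linear interpolation */
}
```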
Alternatively, we can break source compatibility and remove cos_table altogether, but keep exporting it "secretly", like we do with the error() function.
Cheers,
Christof
On May 26, 2024 at 16:00:58 CEST, Christof Ressi <info@christofressi.com> wrote:
> I'd agree that the costable size could be increased. [...]
>
> /* the cos table */ EXTERN float *pd_cos_table;
I would prefer if we made this a read-only getter function:
```
EXTERN const float *pd_cos_table(void);
```
> Alternatively, we can break source compatibility and remove cos_table altogether, but keep exporting it "secretly", like we do with the error() function.
+1
mfg.sfg.jfd IOhannes
On 26.05.2024 17:17, IOhannes m zmölnig wrote:
> I would prefer if we made this a read-only getter function:
> EXTERN const float *pd_cos_table(void);
+1
On Sun, May 26, 2024 at 11:02, Christof Ressi <info@christofressi.com> wrote:
> Are there any existing externals that do use cos_table? When in doubt, we could keep the old cos_table around, but deprecate it. In the future we can remove the cos_table symbol, so old externals simply won't load.
Cyclone used to have it, but we removed it long ago when updating [cycle~]. Old versions still have it, of course, and 'nilwind' (which is basically an old version of Cyclone) still has it: https://github.com/electrickery/pd-nilwind/blob/cf4f468f5585608c997c501cf3057d45fc2402b9/shared/sickle/sic.c#L69 - but it can be updated, if the idea is to maintain an old version of cyclone...
So far I found code in cyclone (sic.c) and in "fofsynth~" in ggee (under "experimental"). Both of these seem to check whether cos_table is nonzero. So I suggest that I simply not export cos_table and remove both cos_table and COSTABSIZE from m_pd.h. Attempts to compile them for newer architectures will then fail, but they both seem to contain their own table-making code and can be fixed simply by removing the reference to an external cos_table.
I'm mostly concerned with keeping binary compatibility, not source compatibility, with old externs - and this probably only matters on old architectures (PowerPC, 32-bit Intel). For those I can keep generating cos_table but not make it available in m_pd.h, and for newer architectures (including anything ARM, I think) I can simply not supply cos_table at all.
I really don't know why I ever thought it a good idea to export these :)
M
Well, I made the table size a variable and... oscillators slowed down by 10%. So I have to rethink my brilliant plan, perhaps include two baked-in versions of cos~, osc~, and vcf~, one at 512 and one at 2048.... rats.
Miller
On 5/27/24 7:39 PM, Alexandre Torres Porres wrote:
> On Mon, May 27, 2024 at 06:59, Miller Puckette <mpuckette@cloud.ucsd.edu> wrote:
>> So far I found code in cyclone (sic.c)
> But that's an old cyclone, like I said :) I think we don't use it anymore since 0.3
Hi all,
I've attached a patch that should illustrate the difference between 0.54 and the new 0.55 (as of test3) cos~ behavior.
A couple of thoughts regarding the code I submitted some time ago to address the cosine symmetry, linked in the thread above.
For those who don't want to dig: it constructs the first quarter of the cosine table, then copies those samples to the corresponding places in the rest of the table purely by symmetry and negation. It also makes sure that the zero crossings are exactly zero and the peaks exactly 1 and -1. IIRC I was doing this on an old Mac laptop, one of the first Intel models, and I couldn't get a symmetric table even using double precision and Christof's method of incrementing the index and dividing by the table size, rather than adding a constant phase increment. I think something was weird in clang's cos function, or a compiler flag wasn't working, or something -- it was so long ago that I gave up and just did the whole thing manually -- and that's what we ended up using in cyclone's [cycle~] object.
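In outline, the construction is something like the following sketch (the function name and the table size N are illustrative; the actual cyclone code differs in detail):

```
/* sketch of the quarter-table construction; N is the table size,
   assumed divisible by 4 */
#include <math.h>
#define N 2048

static void make_symmetric_costable(float *tab)
{
    int i;
    /* pin the points that should be exact */
    tab[0]     =  1.0f;   /* cos(0) */
    tab[N/4]   =  0.0f;   /* cos(pi/2) */
    tab[N/2]   = -1.0f;   /* cos(pi) */
    tab[3*N/4] =  0.0f;   /* cos(3*pi/2) */
    /* first quarter from the library cos(), computed in double */
    for (i = 1; i < N/4; i++)
        tab[i] = (float)cos(2.0 * M_PI * (double)i / (double)N);
    /* remaining quarters purely by symmetry and negation */
    for (i = 1; i < N/4; i++)
    {
        tab[N/2 - i] = -tab[i];  /* cos(pi - x) = -cos(x) */
        tab[N/2 + i] = -tab[i];  /* cos(pi + x) = -cos(x) */
        tab[N - i]   =  tab[i];  /* cos(2*pi - x) = cos(x) */
    }
    /* if the table carries a guard point, also set tab[N] = tab[0] */
}
```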
Miller:
> I don't very much like that code
I didn't think you did; but I hope you understand the motivation. :)
Christof:
> This should ensure that the table is symmetric, unless the underlying cos() function is broken :)
I think the underlying cos() function is dependent on architecture and compiler? Even with the new cosine table, on my machine the zero crossings have a (very tiny) residual, so it's sitting at a very small DC offset. The rest of the function looks symmetric, though, and certainly much, much better than the previous one.
Matt
Hi Matt,
thanks for chiming in!
> Christof:
>> This should ensure that the table is symmetric, unless the underlying cos() function is broken :)
> I think the underlying cos() function is dependent on architecture and compiler?
Yes. cos() is a library function, so the output depends on the particular C library implementation, the compiler, and also the architecture. I would still expect it to be (reasonably) symmetric, but I haven't really checked.
> Even with the new cosine table, on my machine the zero crossings have a (very tiny) residual, so it's sitting at a very small DC offset.
You're right, I didn't consider the peaks and zero crossings! The values of 1/2 pi, pi, and 3/2 pi cannot be exactly represented as floating point numbers, so the result of cos() may be a bit off.
I think we should explicitly set these points:
```
cos_newtable[COSTABLESIZE / 4] = 0.0;     /* 1/2 pi */
cos_newtable[COSTABLESIZE / 2] = -1.0;    /* pi */
cos_newtable[COSTABLESIZE / 4 * 3] = 0.0; /* 3/2 pi */
```
@Miller: what do you think? IMO we should make the cos table as good as we can, so we won't have any regrets :)
> The rest of the function looks symmetric, though, and certainly much, much better than the previous one.
Thanks for checking!
Christof
Hi Christof,
Thanks!
Don't forget the guard point, which could also be set explicitly to 1.0
Matt
On 05.06.2024 13:09, Matt Barber wrote:
> Don't forget the guard point, which could also be set explicitly to 1.0
True! Thanks!
christof
Nice one Matt!
On Wed, Jun 5, 2024 at 08:13, Christof Ressi <info@christofressi.com> wrote:
> @Miller: what do you think? IMO we should make the cos table as good as we can, so we won't have any regrets :)
+1000!!!
For the record and sake of comparison: Cyclone uses a 16384-point table with linear interpolation, calculated in double precision. We did this because Max documents that it uses such a table, and we made it (well, Matt did) symmetric.
I see Pd is doing kind of the same, huh? Linear interpolation on a table calculated in double precision.
I see SuperCollider mentions it uses 8192 points and linear interpolation in its oscillator.
I guess Max is exaggerating its table size a bit :) but I wonder why Pd still goes with a relatively small table size. I'm curious how much an increase in table size actually offers better resolution and how much it hurts performance. For instance, I'm using the same size as Cyclone in ELSE's oscillators; could I just reduce it to 8192 points, or even less, down to Pd's 2048, worry-free?
Thanks
While we're at it, I think it would be worth tuning garray_dofo() to use the same approach, so that sinesum and cosinesum have the same level of accuracy, guarantees of symmetry, etc.
MB
On Wed, Jun 5, 2024 at 14:31, Matt Barber <brbrofsvl@gmail.com> wrote:
> While we're at it, I think it would be worth tuning garray_dofo() to use the same approach, so that sinesum and cosinesum have the same level of accuracy, guarantees of symmetry, etc.
Good catch! In fact, I think this is a great opportunity to also fix this bug, https://github.com/pure-data/pure-data/issues/371, which is totally related. I just reopened https://github.com/pure-data/pure-data/issues/105 as well, as I'm still of the opinion that the table could/should be "perfectly symmetric" with respect to the zero crossings and the start/end points.
A couple of things:
1. I'm pretty sure any error in cos at pi and 2pi will be smaller in double precision than float's epsilon, so I don't think there's any need to set -1.0 and 1.0 explicitly after all, except to be extra safe. However, at pi/2 and 3pi/2 the error is still larger than the minimum normal number, so it is worth setting the zero crossings to 0.0.
2. For garray_dofo() there isn't a great way of using an explicit 0.0 at zero crossings without incurring an extra check, like not adding to the sum if the absolute value is less than e.g. 1.0e-10. For this, probably just using M_PI and incrementing an integer phase, like for the cosine table, is enough (see the sketch below).
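As a sketch of that integer-phase idea (the names are illustrative, not garray_dofo()'s real signature): each output point derives its phase from the integer sample index rather than from a running phase accumulator, so no error accumulates across the table.

```
/* sketch only: integer-phase partial summation for sinesum */
#include <math.h>

static void sinesum_sketch(float *vec, int npoints,
    const float *amps, int nsines)
{
    int i, k;
    for (i = 0; i < npoints; i++)
    {
        double sum = 0;
        for (k = 0; k < nsines; k++)    /* phase from the index i itself */
            sum += amps[k] *
                sin(2.0 * M_PI * (double)((k + 1) * i) / (double)npoints);
        vec[i] = (float)sum;
    }
}
```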
MB
Well, as far as I can tell, making the table "symmetric" won't matter at all, since, for instance, 0.1 and 0.9 won't give the same lookup values anyway: they can't themselves be represented exactly and will be truncated differently (0.1 will be more accurately represented than 0.9). On the other hand, values like 0.25 or -0.5 can be represented exactly, so it might be worthwhile to bash true 1s, -1s, and 0s where they belong in the table.
Hearing that Max defaults to a ridiculously big table makes me wonder though... first, is 2048 really enough (and at what point is there a real performance penalty for bigger tables)? And, not for this release but later perhaps: should 64-bit Pd use a bigger table?
As I figure it, the 2048-point table differs from the true cosine, absolute worst case, by (2pi/2048)^2 / 8, or about 2^(-19.7) - i.e., 19.7-bit accuracy. But the error is dominated by an amplitude change (the best-matching cosine to the line-segment approximation has amplitude less than 1). Accounting for that and taking RMS error instead of worst-case gives an error estimate 2.7 bits more optimistic: 22.4 bits, which is close to the accuracy of a 32-bit number.
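For reference, that worst-case figure is the standard error bound for linear interpolation at spacing $h = 2\pi/N$ of a function with $\max|f''| = 1$:

$$ |e|_{\max} \le \frac{h^2}{8}\max|f''| = \frac{(2\pi/N)^2}{8}, \qquad N = 2048 \;\Rightarrow\; |e|_{\max} \approx 1.18\times 10^{-6} \approx 2^{-19.7}. $$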
I don't have my RPI3 handy (I'm on the road) but I'm now wondering if the default shouldn't be 4096, which would give us an additional 2 bits of goodness. Any opinions?
cheers
M
Since cos~ wraps, one could theoretically take advantage of the equal distribution of float values between 1.0 and 2.0.
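One way to read that idea (a sketch only, not necessarily what's meant here): in IEEE-754 single precision, every float in [1.0, 2.0) shares one exponent, so its 23 mantissa bits form a uniform fixed-point fraction of the phase, which bit operations can split directly into a table index and an interpolation fraction:

```
/* sketch: split a wrapped phase into table index and fraction using
   the uniform mantissa spacing of floats in [1.0, 2.0);
   assumes phase is already wrapped into [0, 1) */
#include <stdint.h>
#include <string.h>

#define LOGTABSIZE 11    /* e.g. a 2048-point table */

static void split_phase(float phase, int *index, float *frac)
{
    float z = phase + 1.0f;    /* now in [1.0, 2.0) */
    float f;
    uint32_t bits;
    memcpy(&bits, &z, sizeof bits);             /* reinterpret safely */
    bits &= 0x007FFFFFu;                        /* 23-bit phase fraction */
    *index = bits >> (23 - LOGTABSIZE);         /* top bits: table index */
    bits = (bits << LOGTABSIZE) & 0x007FFFFFu;  /* low bits: remainder */
    bits |= 0x3F800000u;                        /* rebuild a float in [1,2) */
    memcpy(&f, &bits, sizeof f);
    *frac = f - 1.0f;                           /* fraction in [0, 1) */
}
```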
Profiling a larger table would be useful – I prefer accuracy over performance in general, but I wonder where the performance hit would come from, outside of unpredictable cache misses.
Precisely that: cache pollution in general. At some point the overall speed of the program will suffer, depending on CPU design, cache size, and probably other factors.
If the input to a cos~ object (for example) is between 1 and 2, you'll get the same loss of accuracy, and there will still be rounding behavior that will (probably) be unsymmetric.
Anyway, I don't remember hearing any reason why symmetry should be important in itself.
cheers
M
The main reason for symmetry was stable FM synthesis – when you modulate frequency, any overall differences in the shape of the cosine wave accumulate quickly as an error in the osc~'s phase increment, causing significant drift in the spectrum. It's not a problem when you modulate phase directly, since the modulator is decoupled from the phasor.
MB
Here's the demonstration. While the symmetric table will eventually drift a little, it stays stable for far longer than osc~ or cosinesum. Although, to be fair, the real test of building a cosine table from scratch in Pd would be to fill the table using [cos], walking through the indices and dividing by the table size to get the phase.
Matt
Yep, this one, with a one-shot [until] through the indices into $0-cos-BAD, is better than cosinesum in the previous patch, but it still drifts visibly/audibly after a couple of minutes; obviously it wouldn't be as much of a problem in double precision, though.
On Thu, Jun 6, 2024 at 1:53 PM Matt Barber brbrofsvl@gmail.com wrote:
Here's the demonstration. While the symmetric table will eventually drift a little, it stays stable for far longer than osc~ or cosinesum. Although, to be fair, the real test in building a cosine table from scratch in Pd would be to fill the table using [cos], walking through indices and dividing by table size to get phase.
Matt
On Thu, Jun 6, 2024 at 1:39 PM Matt Barber brbrofsvl@gmail.com wrote:
The main reason for symmetry was stable FM synthesis – when you modulate frequency, any overall differences in the shape of the cosine wave shape accumulate quickly as an error in the osc~'s phase increment, causing significant drift in the spectrum. It's not a problem when you modulate phase directly since the modulator is decoupled from the phasor.
MB
On Thu, Jun 6, 2024, 1:24 PM Miller Puckette mpuckette@cloud.ucsd.edu wrote:
Precisely that: cache pollution in general. At some point the overall speed of the program will suffer, depending on CPU design, cache size, and probable other factors.
If the input to a cos~ object (for example) is between 1 and 2 you'll get the same loss of accuracy but still there will be rounding behavior that will (probably) give unsymmetric behavior.
Anyway, I don't remember hearing any reason why symmetry should be important in itself.
cheers
M
On 6/6/24 6:51 PM, Matt Barber wrote:
Since cos~ wraps, one could theoretically take advantage of the equal distribution of float values between 1.0 and 2.0.
Profiling a larger table would be useful – I prefer accuracy over performance in general, but I wonder where the performance hit would come from, outside of unpredictable cache misses.
On Thu, Jun 6, 2024, 11:25 AM Miller Puckette mpuckette@cloud.ucsd.edu wrote:
Well, as far as I can tell making the table "symmetric" won't matter at all since, for instance, 0.1 and 0.9 won't give the same lookup values anyway because they can't themselves be represented exactly and will be truncated differently (0.1 will be more accurately represented than 0.9). On the other hand, values like 0.25 or -0.5 can be represented exactly, so it might be worthwhile to bash true 1s, -1s, and 0s where they belong in the table.

Hearing that Max defaults to a ridiculously big table makes me wonder though... first, is 2048 really enough (and at what point is there a real performance penalty for bigger tables)? And: not for this release but later perhaps, should 64-bit Pd use a bigger table?

As I figure it, the 2048-point table differs from the true cosine, absolute worst case, by (2pi/2048)^2 / 8, or about 2^(-19.7) - i.e., 19.7-bit accuracy. But the error is dominated by an amplitude change (the best-matching cosine to the line-segment approximation has amplitude less than 1). Accounting for that and taking RMS error instead of worst-case gives an error estimate 2.7 bits more optimistic: 22.4 bits, which is close to the accuracy of a 32-bit number.

I don't have my RPI3 handy (I'm on the road) but I'm now wondering if the default shouldn't be 4096, which would give us an additional 2 bits of goodness. Any opinions?

cheers
M

On 6/5/24 9:35 PM, Matt Barber wrote:
> A couple of things:
>
> 1. I'm pretty sure any error in cos at pi and 2pi will be smaller in double precision than float's epsilon, so I don't think that there's any need to set -1.0 and 1.0 explicitly after all, except to be extra safe. However, at pi/2 and 3pi/2 the error is still larger than the minimum normal number, so it is worth setting the zero crossings to 0.0.
>
> 2. For garray_dofo() there isn't a great way of using explicit 0.0 at zero crossings without incurring an extra check, like don't add to the sum if the absolute value is less than e.g. 1.0e-10. For this, probably just using M_PI and incrementing integer phase like for the cosine table is enough.
>
> MB
>
> On Wed, Jun 5, 2024 at 2:20 PM Alexandre Torres Porres <porres@gmail.com> wrote:
>
> On Wed., Jun 5, 2024 at 2:31 PM, Matt Barber <brbrofsvl@gmail.com> wrote:
>
> While we're at it, I think it would be worth tuning garray_dofo() to use the same so that sinesum and cosinesum have the same level of accuracy, guarantees of symmetry, etc.
>
> MB
>
> Good catch! In fact, I think this is a great opportunity to also fix this bug, https://github.com/pure-data/pure-data/issues/371, which is totally related. I just reopened https://github.com/pure-data/pure-data/issues/105 as well, as I'm still considering that the table could/should still be "perfectly symmetric" considering 0 crossings and the start/end points.
>
> On Wed, Jun 5, 2024 at 12:52 PM Alexandre Torres Porres <porres@gmail.com> wrote:
>
> For the record and sake of comparison, Cyclone uses a 16384-point table and linear interpolation, calculated with double precision. We did this because MAX documents that it uses such a table, and we made it (well, Matt did) symmetric.
>
> I see Pd is doing kind of the same, huh? Linear interpolation on a table calculated with double precision.
>
> I see SuperCollider mentions it uses 8192 points and linear interpolation on its oscillator.
>
> I guess MAX is exaggerating its table size a bit :) but I wonder why Pd is still going to use a relatively smaller table size. I'm curious to know how much an increase in table size actually offers better resolution and how much it hurts performance. For instance, I'm using the same as Cyclone in ELSE oscillators; could I just reduce it to 8192 points, or even less, down to Pd's 2048 size, worry free?
>
> Thanks
>
> On Wed., Jun 5, 2024 at 1:28 PM, Alexandre Torres Porres <porres@gmail.com> wrote:
>
> Nice one Matt!
>
> On Wed., Jun 5, 2024 at 8:13 AM, Christof Ressi <info@christofressi.com> wrote:
>
>> @Miller: what do you think? IMO we should make the cos table as good as we can, so we won't have any regrets :)
>
> +1000!!!
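Going back to the worst-case figure in Miller's message above: it is easy to check numerically. A throwaway sketch (mine, assuming linear interpolation of a 2048-point table, where the standard bound for interpolating cos is h^2/8 with h the point spacing):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double h = 2.0 * M_PI / 2048.0;  /* spacing between table points, radians */
    double worst = h * h / 8.0;      /* max error of linear interpolation of cos */
    printf("worst case %.3g = 2^%.1f\n", worst, log2(worst));
    /* prints roughly 1.18e-06 = 2^-19.7, i.e. about 19.7 bits, matching the
       figure quoted above */
    return 0;
}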
Also, in case it wasn't clear, you should open those in 0.54 or with 0.54 compatibility. The new osc~ in 0.55 does much better.
MB
On Thu, Jun 6, 2024 at 2:04 PM Matt Barber brbrofsvl@gmail.com wrote:
Yep, this one, with a one-shot [until] through the indices into $0-cos-BAD is better than cosinesum in the previous patch, but it still drifts visibly/audibly after a couple minutes; obviously wouldn't be as much of a problem in double precision, though.
On Thu, Jun 6, 2024 at 1:53 PM Matt Barber brbrofsvl@gmail.com wrote:
Here's the demonstration. While the symmetric table will eventually drift a little, it stays stable for far longer than osc~ or cosinesum. Although, to be fair, the real test in building a cosine table from scratch in Pd would be to fill the table using [cos], walking through indices and dividing by table size to get phase.
Matt
On Thu, Jun 6, 2024 at 1:39 PM Matt Barber brbrofsvl@gmail.com wrote:
The main reason for symmetry was stable FM synthesis – when you modulate frequency, any overall difference in the shape of the cosine wave accumulates quickly as an error in the osc~'s phase increment, causing significant drift in the spectrum. It's not a problem when you modulate phase directly since the modulator is decoupled from the phasor.
MB
On Thu, Jun 6, 2024, 1:24 PM Miller Puckette mpuckette@cloud.ucsd.edu wrote:
Precisely that: cache pollution in general. At some point the overall speed of the program will suffer, depending on CPU design, cache size, and probably other factors.
If the input to a cos~ object (for example) is between 1 and 2 you'll get the same loss of accuracy, but there will still be rounding behavior that will (probably) be unsymmetric.
Anyway, I don't remember hearing any reason why symmetry should be important in itself.
cheers
M
Another question: why is the cos table float* and not t_float *? With Pd64 we basically throw away 29 bits of additional precision (23 bit vs. 52 bit). I assume this is done to reduce the table size for Pd64. Is 23 bit good enough for our purposes? I can imagine that the interpolation error will be much larger than the difference between 23 bit and 52 bit precision, but I didn't do the math.
Christof
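One way to get a feel for this without doing the math: fill one 2048-point table in float and one in double, interpolate both, and compare against the true cosine. A rough stand-alone sketch (mine, not Pd code); if the interpolation error really dominates, the two maxima should come out nearly identical:

#include <stdio.h>
#include <math.h>

#define N 2048

static float tabf[N + 1];
static double tabd[N + 1];

int main(void)
{
    for (int i = 0; i <= N; i++)
    {
        double v = cos(2.0 * M_PI * i / (double)N);
        tabf[i] = (float)v;  /* 23-bit mantissa */
        tabd[i] = v;         /* 52-bit mantissa */
    }
    double errf = 0, errd = 0;
    for (int k = 0; k < 1000000; k++)
    {
        double p = (double)k / 1000000.0 * N;  /* phase in table units, < N */
        int i = (int)p;
        double fr = p - i;
        double ref = cos(2.0 * M_PI * p / (double)N);
        double vf = (double)tabf[i] + fr * ((double)tabf[i + 1] - (double)tabf[i]);
        double vd = tabd[i] + fr * (tabd[i + 1] - tabd[i]);
        if (fabs(vf - ref) > errf) errf = fabs(vf - ref);
        if (fabs(vd - ref) > errd) errd = fabs(vd - ref);
    }
    printf("max error, float table:  %.3g (2^%.1f)\n", errf, log2(errf));
    printf("max error, double table: %.3g (2^%.1f)\n", errd, log2(errd));
    return 0;
}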
t_float would also avoid float to double conversion, for very slightly better performance in Pd64 :)
P.S. I just tried on a 32-bit Raspberry Pi OS, hardware Raspberry Pi 3... no appreciable performance hit if I change COSTABSIZE to 4096 (increasing by a factor of 8).
cheers
Miller
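For anyone who wants to reproduce that kind of measurement, a crude stand-alone benchmark along these lines (my sketch, not what Miller actually ran) can at least expose cache effects as the table grows:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

static double bench(int n)
{
    float *tab = malloc((n + 1) * sizeof(float));
    for (int i = 0; i <= n; i++)
        tab[i] = (float)cos(2.0 * M_PI * i / (double)n);
    volatile float sink = 0;
    double ph = 0, inc = 0.123456;  /* arbitrary phase increment */
    clock_t t0 = clock();
    for (long k = 0; k < 50000000L; k++)  /* 50M table lookups */
    {
        double p = ph * n;
        int i = (int)p;
        float fr = (float)(p - i);
        sink += tab[i] + fr * (tab[i + 1] - tab[i]);  /* linear interpolation */
        ph += inc;
        if (ph >= 1) ph -= 1;  /* wrap phase */
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    free(tab);
    return secs;
}

int main(void)
{
    int sizes[] = {512, 2048, 4096, 16384};
    for (int s = 0; s < 4; s++)
        printf("%6d points: %.2f s\n", sizes[s], bench(sizes[s]));
    return 0;
}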
On 5/26/24 6:19 PM, Alexandre Torres Porres wrote:
On Sun., May 26, 2024 at 11:02 AM, Christof Ressi info@christofressi.com wrote:
Are there any existing externals that do use cos_table? When in doubt, we could keep the old cos_table around, but deprecate it. In the future we can remove the cos_table symbol, so old externals simply won't load.
Cyclone used to have it, but we removed it long ago when updating [cycle~]. Old versions still have it of course, and 'nilwind' (which is basically an old version of Cyclone) still has it: https://github.com/electrickery/pd-nilwind/blob/cf4f468f5585608c997c501cf3057d45fc2402b9/shared/sickle/sic.c#L69 (but it can be updated if the idea is to maintain an old version of Cyclone...)
Sorry for the repeated e-mails... here's what I now think I should do:
- remove cos_table and COSTABLESIZE from m_pd.h
- if "compatibility" is <= 0.54, make the cosine table 512-point, otherwise 2048-point
I don't see any performance hit on any architecture, although I think very old machines will be slowed down by the new table size - but then setting back-compatibility is at least a workaround.
Anyway, since we're about to see changes in Pd's audio output because of differences between MAC ARM and even other ARM implementations (I found that RPI 3 acted exactly like INTEL and differently from MAC M2), I think we're now going to see different outputs on different machines, hopefully usually small ones. So there's no more reason for me to hold on to the old table size at all.
Any objections before I pull the switch?
cheers
Miller
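Purely to illustrate the switch Miller describes above, a hypothetical sketch (made-up names; Pd's real compatibility machinery may well look different):

#define COSTABSIZE_OLD 512   /* size when compatibility <= 0.54 */
#define COSTABSIZE_NEW 2048  /* proposed new default */

/* hypothetical helper: 'compat' encodes the version, e.g. 54 for 0.54 */
static int cos_table_points(int compat)
{
    return (compat <= 54 ? COSTABSIZE_OLD : COSTABSIZE_NEW);
}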
Sounds good to me!
Just to be sure, you will also try to make the new table symmetric, right? See https://github.com/pure-data/pure-data/pull/106/files.
I don't very much like that code - but OTOH I'm thinking to use double precision to compute the table this time around, so whatever imprecision there is should be minimal :)
M
On 27.05.2024 12:31, Miller Puckette wrote:
I don't very much like that code -
Me neither, just wanted to address the issue.
but OTOH I'm thinking to use double precision to compute the table this time around, so whatever imprecision there is should be minimal :)
Also, instead of accumulating the phase you should calculate it for every point:
#ifndef M_PI
#define M_PI 3.14159265358979323846264338327950288
#endif

...

for (int i = 0; i < COSTABSIZE + 1; i++)
    fp[i] = cos(2.0 * M_PI * i / (double)COSTABSIZE);
This should ensure that the table is symmetric, unless the underlying cos() function is broken :)
Christof
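To see the difference between the two fill strategies concretely, here's a throwaway comparison (my sketch, not proposed Pd code). It fills one table by accumulating a single-precision phase and one by recomputing the phase per point, then counts entries that break the table[i] == table[N-i] symmetry a cosine should have:

#include <stdio.h>
#include <math.h>

#define N 2048

static float accum[N + 1], recomp[N + 1];

int main(void)
{
    float phase = 0, inc = (float)(2.0 * M_PI / N);
    for (int i = 0; i <= N; i++)
    {
        accum[i] = (float)cos(phase);  /* accumulated phase: error grows with i */
        phase += inc;
        recomp[i] = (float)cos(2.0 * M_PI * i / (double)N);  /* recomputed phase */
    }
    int bad_accum = 0, bad_recomp = 0;
    for (int i = 0; i <= N; i++)
    {
        if (accum[i] != accum[N - i]) bad_accum++;
        if (recomp[i] != recomp[N - i]) bad_recomp++;
    }
    printf("asymmetric entries: accumulated %d, recomputed %d\n",
        bad_accum, bad_recomp);
    return 0;
}

On my assumptions, the recomputed fill should come out symmetric everywhere except possibly right at the zero crossings, where the stored values are tiny (which is Matt's reason for bashing exact 0.0 in there), while the accumulated fill drifts across the whole cycle.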
Hi,
On 26/05/2024 10:16, Miller Puckette wrote:
I debugged something similar: https://post.lurk.org/@mathr/110760066818211577
TL;DR compile with -ffp-contract=off
Claude
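For anyone curious what -ffp-contract actually changes: with contraction enabled, the compiler may turn a*b + c into a fused multiply-add, which rounds once instead of twice, so the low-order bits can differ between builds and platforms. A tiny stand-alone illustration (mine, not from Claude's post):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 1.0 + ldexp(1.0, -27);  /* 1 + 2^-27 */
    double b = 1.0 - ldexp(1.0, -27);  /* 1 - 2^-27 */
    double c = -1.0;
    /* exact a*b = 1 - 2^-54, which rounds to exactly 1.0 as a double, so the
       separately rounded a*b + c is 0.0, while a fused multiply-add keeps
       the -2^-54 */
    printf("a*b + c    = %g\n", a * b + c);     /* 0 or -5.55e-17, depending */
    printf("fma(a,b,c) = %g\n", fma(a, b, c));  /* on contraction settings */
    return 0;
}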