Greetings,
I have read various mentions of using SIMD instructions to speed up externals, and I have a few questions about this... 1. Where can I find the header files for this? 2. Are the instructions specific to Intel hardware, and if so, is there an equivalent for Mac OS X? (I read something about AltiVec instructions ... ?)
Best, Ed
Hallo!
I have read various mentions of using SIMD instructions to speed up externals, and I have a few questions about this...
- Where can I find the header files for this?
They are in the pd_devel branch ...
- Are the instructions specific to Intel hardware, and if so is there an equivalent for Mac OS X? (I read something about AltiVec instructions ... ?)
I think Thomas implemented most of them also for osx ...
LG Georg
Hello,
Thanks for this! Are there any externs available that use these instructions?
Best, Ed
my volctl~ external is using simd instructions, and the latest cvs of zexy ...
tim
Thanks again. The reason I am looking at this is that I'm doing a lot of work with autocorrelation functions and would really like to find ways of speeding them up. I'm using nested for loops, but maybe some form of matrix math is more appropriate. I've never studied maths in so much depth, but if anyone can point me to a way of making fast realtime versions of ACFs I would be extremely grateful! (and we will have a new external).
Best, Ed
PS this is why my voicing_detector~ is so hungry!!!
On Tue, 17 Jan 2006, Ed Kelly wrote:
Thanks again. The reason I am looking at this is that I'm doing a lot of work with autocorrelation functions and would really like to find ways of speeding them up. I'm using nested for loops, but maybe some form of matrix math is more appropriate. I've never studied maths in so much depth, but if anyone can point me to a way of making fast realtime versions of ACFs I would be extremely grateful! (and we will have a new external).
You need to use some kind of FFT in order to benefit from the fantastic speed increase possible with the Convolution Theorem.
the trick is: fourier(x convol y) = fourier(x) * fourier(y)
and this formula wouldn't get things any faster if it weren't for the FFT. a Fourier transform itself is a kind of convolution, so naïvely it takes n*n multiplications; but it's very special because it's optimisable, and so the Fast (FFT) version takes only n*log2(n) multiplications.
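To make the trick concrete, here is an illustrative sketch (not from any external discussed here) of fast circular convolution of two real blocks, using libfftw, which comes up later in this thread. For linear convolution both blocks would first have to be zero-padded to length at least 2n-1.

/* sketch: compile with  gcc fastconv.c -lfftw3f -lm */
#include <fftw3.h>

/* circular convolution of two real blocks of length n, via
   fourier(x conv y) = fourier(x) * fourier(y) */
void fastconv(const float *x, const float *y, float *out, int n)
{
    int nc = n/2 + 1, i;  /* a real fft of size n has n/2+1 complex bins */
    float *tx = fftwf_malloc(sizeof(float) * n);
    float *ty = fftwf_malloc(sizeof(float) * n);
    fftwf_complex *fx = fftwf_malloc(sizeof(fftwf_complex) * nc);
    fftwf_complex *fy = fftwf_malloc(sizeof(fftwf_complex) * nc);

    fftwf_plan px = fftwf_plan_dft_r2c_1d(n, tx, fx, FFTW_ESTIMATE);
    fftwf_plan py = fftwf_plan_dft_r2c_1d(n, ty, fy, FFTW_ESTIMATE);
    fftwf_plan pb = fftwf_plan_dft_c2r_1d(n, fx, out, FFTW_ESTIMATE);

    for (i = 0; i < n; i++) { tx[i] = x[i]; ty[i] = y[i]; }
    fftwf_execute(px);
    fftwf_execute(py);

    /* pointwise complex multiplication in the frequency domain */
    for (i = 0; i < nc; i++)
    {
        float re = fx[i][0]*fy[i][0] - fx[i][1]*fy[i][1];
        float im = fx[i][0]*fy[i][1] + fx[i][1]*fy[i][0];
        fx[i][0] = re / n;  /* fftw is unnormalized, so divide by n once */
        fx[i][1] = im / n;
    }
    fftwf_execute(pb);

    fftwf_destroy_plan(px); fftwf_destroy_plan(py); fftwf_destroy_plan(pb);
    fftwf_free(tx); fftwf_free(ty); fftwf_free(fx); fftwf_free(fy);
}

This costs three transforms of n*log2(n) each plus an O(n) multiply, instead of the n*n multiplications of direct convolution.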
You need to use some kind of FFT in order to benefit from the fantastic speed increase possible with the Convolution Theorem.

Right. A*B = fourier AxB, or cepstral A+B. I know a bit about Fourier theory (real and imaginary parts, radix-2 optimisations and the like), although I have never implemented one myself (no need). Some form of radix optimization is exactly what I suggested to Nicolas (Chetry), with whom I wrote the amdf-based voicing_detector~, but he's writing his PhD now and has no time for this. What I am wondering is how to turn any form of autocorrelation matrix into a radix-type algorithm, so that code such as the two examples enclosed can be optimized.
...maybe I should take another degree? if only!
Ed
for (i=1; i<l; i++) /* the Average Magnitude Difference Function */
{
    for (j=start; j<=end; j++)
    {
        temp[j] = i + j < l ? in[i+j] : 0.0;
        temp0 = atom_getfloatarg(i, 4096, ctl->otemp);
        temp0 += i == 0 ? 0.0 : fabs(in[j] - temp[j]);
    }
    temp0 += ((float)i / (float)l) * ctl->f_sum_abs;
    SETFLOAT(&ctl->otemp[i], temp0);
}
/* compute autocorrelations */
for (i=0; i<=order; i++)
{
    sum = 0;
    for (k=0; k<(n-i); k++)
        sum += x[k] * x[k+i];
    r[i] = sum;
}
/* compute predictor coefficients (the Levinson-Durbin recursion) */
if (r[0] == 0)  /* no signal ! */
    retcode = 1;
else
{
    *pe = r[0];
    pc[0] = 1.0;
    for (k=1; k<=order; k++)
    {
        sum = 0;
        for (i=1; i<=k; i++)
            sum -= pc[k-i] * r[i];
        akk = sum / (*pe);
        /* new predictor coefficients */
        pc[k] = akk;
        for (i=1; i<=(k/2); i++)
        {
            ai = pc[i];
            aj = pc[k-i];
            pc[i] = ai + akk * aj;
            pc[k-i] = aj + akk * ai;
        }
        /* new prediction error */
        *pe = *pe * (1.0 - akk*akk);
        if (*pe <= 0)  /* negative/zero error ! */
            retcode = 2;
    }
}
On Tue, 17 Jan 2006, Ed Kelly wrote:
You need to use some kind of FFT in order to benefit from the fantastic speed increase possible with the Convolution Theorem.

Right. A*B = fourier AxB, or cepstral A+B. I know a bit about Fourier theory (real and imaginary parts, radix-2 optimisations and the like), although I have never implemented one myself (no need).
I'd rather not use the letter "x" to mean multiplication, nor "*" to mean convolution. I'd use "*" and "conv" respectively.
The radix-2 optimisations of FFT are generalizable to any radix. When it is said that FFT runs in O(n log n) time, what's really meant by log n is more like, the sum of all prime factors of n. I'm currently thinking of non-power-of-2 FFTs because I'd like to apply them in the spatial domain, for images that don't have power-of-2 sizes, without having to crop or scale each picture. I'm not sure how much it would be worth it.
What I am wondering is how to turn any form of autocorrelation matrix into a radix-type algorithm, so that code such as the two examples enclosed can be optimized.
temp0 += i == 0 ? 0.0 : fabs(in[j] - temp[j]);
Note that the value of i can't be 0 there (the outer loop starts at i=1), so it's simplifiable to:
temp0 += fabs(in[j] - temp[j]);
Then what's the purpose of the temp array? if each value is set only once and read immediately, you don't need it.
In any case, AMDF doesn't seem to be FFT-optimisable. Especially, the absolute value is difficult to get rid of. That doesn't mean that there isn't a O(n log n) trick to compute it though.
I don't understand your AMDF algorithm. Why do you overwrite temp0 in the inner loop, discarding all previous accumulations of fabs ???
In the other algorithm, the autocorrelation section is highly optimisable. If I'm not too confused, correlation is like convolution with a mirror image (in the time domain) of its right-hand function. Note that the double-fft of a signal block is the mirror image of the signal block, and that the triple-fft is the same as the inverse-fft (supposing a normalized fft: Pd's fft is not, so extra constants have to be introduced in order to compensate). Thus the autocorrelation of x(t) and y(t) is ifft(fft(x)*ifft(y)), where * is pointwise complex multiplication.
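In the usual normalized convention the same thing is written with a complex conjugate: autocorr(x) = ifft(fft(x) * conj(fft(x))), i.e. the inverse transform of the power spectrum. Here is a hedged libfftw sketch of that, standing in for the r[i] loop above; the input must be zero-padded to n >= 2*len-1 beforehand to avoid circular wrap-around.

#include <fftw3.h>

/* autocorrelation r[i] = sum_k x[k]*x[k+i], computed as the inverse
   transform of the power spectrum |fft(x)|^2 */
void fft_autocorr(const float *x, float *r, int n)
{
    int nc = n/2 + 1, i;
    float *tx = fftwf_malloc(sizeof(float) * n);
    fftwf_complex *fx = fftwf_malloc(sizeof(fftwf_complex) * nc);

    fftwf_plan pf = fftwf_plan_dft_r2c_1d(n, tx, fx, FFTW_ESTIMATE);
    fftwf_plan pb = fftwf_plan_dft_c2r_1d(n, fx, r, FFTW_ESTIMATE);

    for (i = 0; i < n; i++) tx[i] = x[i];
    fftwf_execute(pf);

    /* fx * conj(fx) = |fx|^2 : purely real */
    for (i = 0; i < nc; i++)
    {
        fx[i][0] = (fx[i][0]*fx[i][0] + fx[i][1]*fx[i][1]) / n;
        fx[i][1] = 0.0f;
    }
    fftwf_execute(pb);

    fftwf_destroy_plan(pf); fftwf_destroy_plan(pb);
    fftwf_free(tx); fftwf_free(fx);
}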
However I don't understand the "predictor" portion of the algorithm and I know that it is running in O(n*n) time. This doesn't mean that optimising the autocorrelation is useless: you could still get a (maybe) 50% speedup of the whole.
...maybe I should take another degree? if only!
A degree in what exactly???... I'd rather use math books than math profs.
On Thu, 19 Jan 2006, Mathieu Bouchard wrote:
The radix-2 optimisations of FFT are generalizable to any radix. When it is said that FFT runs in O(n log n) time, what's really meant by log n is more like, the sum of all prime factors of n.
Actually, that's not exactly true... there's a newer FFT algorithm that can do O(n log n) for prime sizes as well, but it's a slower O(n log n) than the one that involves small prime factors. libfftw supports both algorithms. (Pd's builtin FFT supports only power-of-two AFAIK).
I don't understand your AMDF algorithm. Why do you overwrite temp0 in the inner loop, discarding all previous accumulations of fabs ???
Whoops! After I posted that email I noticed the error. The bizarre thing is that the voicing_detector~ seems to work anyway! Maybe it will work better with the bugfix. A SIMD version is in development...
In any case, AMDF doesn't seem to be FFT-optimisable. Especially, the absolute value is difficult to get rid of. That doesn't mean that there isn't a O(n log n) trick to compute it though.
That's what I meant, especially since this seems to be the slowest procedure in the external's code. The formant estimator I'm porting has a similar autocorrelation algorithm, so what I'm really asking is whether it is possible to make a generalized radix autocorrelation object/code.
A degree in what exactly???... I'd rather use math books than math profs.
British self-deprecating humour for "I don't know enough about this shit, and sorry if my questions are a bit dumb" - but you're right, I need more books than just Kernighan and Ritchie for this...
Best, Ed
On Fri, 20 Jan 2006, Ed Kelly wrote:
I don't understand your AMDF algorithm. Why do you overwrite temp0 in the inner loop, discarding all previous accumulations of fabs ???
Whoops! After I posted that email I noticed the error. The bizarre thing is that the voicing_detector~ seems to work anyway! Maybe it will work better with the bugfix. A SIMD version is in development...
But can you post the correct code?
If I guess correctly it's supposed to take the average distance between each possible pair of two distinct points? Then I have a O(n log n) algorithm for that, which has nothing to do with FFT.
But can you post the correct code?
Sorry Matju, the correct code is now posted on CVS, and of course it works better than it did.

If I guess correctly it's supposed to take the average distance between each possible pair of two distinct points? Then I have a O(n log n) algorithm for that, which has nothing to do with FFT.

Yes, that's the idea. I would be very grateful to have a look at that.
Best, Ed
On Fri, 20 Jan 2006, Ed Kelly wrote:
If I guess correctly it's supposed to take the average distance between each possible pair of two distinct points? Then I have a O(n log n) algorithm for that, which has nothing to do with FFT.
Yes, that's the idea. I would be very grateful to have a look at that.
Just sort your floats. Then every number will be greater than or equal to each of the values that precede it, so you no longer need fabs(). And with fabs() gone, summing in[i]-in[k] for 0<=i<l and 0<=k<i is optimisable from O(n*n) to O(n):
float pred = 0.0;
float total = 0.0;
for (int i = 0; i < l; i++)
{
    total += in[i]*i - pred;  /* in[i] minus each of the i values before it */
    pred += in[i];            /* running sum of in[0..i] */
}
Is that right?
However I don't know how that interacts with your "start" and "end" bounds that you use for variable j in your code...
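For reference, a self-contained version of the idea might look like this (an illustrative sketch: it sums over the whole block, ignores the start/end bounds, and qsort reorders the input buffer in place):

#include <stdlib.h>

static int cmp_float(const void *a, const void *b)
{
    float fa = *(const float *)a, fb = *(const float *)b;
    return (fa > fb) - (fa < fb);
}

/* sum of |in[i] - in[k]| over all pairs 0 <= k < i < l:
   O(l log l) for the sort plus O(l) for the summation */
static float sum_pair_distances(float *in, int l)
{
    float pred = 0.0f, total = 0.0f;
    int i;
    qsort(in, l, sizeof(float), cmp_float);
    for (i = 0; i < l; i++)
    {
        total += in[i]*i - pred;  /* in[i] exceeds the i earlier values */
        pred += in[i];
    }
    return total;
}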
Hey all,
So, I am wondering how to use the SIMD instructions. I see t_int *plus_perf_simd(t_int *w); in m_simd_sse_gcc.h and I am wondering how to use it. For example, I have my signal vector in an array and I want to find the sum of absolutes of all its values. Without SIMD I would do:
for (i=0;i<n;i++) ctl->f_sum_abs += fabs(in[i]);
but looking at the headers for SIMD instructions, it seems I have to set up a dsp-perform routine (or something like it) for the plus_perf_simd instruction (with *w as the pointer). How do I do this from within the perform routine, or am I missing something here? I apologise for my lack of knowledge...
Best, Ed
Hallo!
but looking at the headers for SIMD instructions, it seems I have to set up a dsp-perform routine (or something like it) for the plus_perf_simd instruction
look at e.g. volctl~ by Tim: you have to make one perform routine without simd (volctl_perform) and one for simd (volctl_perf_simd)
Then in volctl_dsp you check (at runtime) whether the processor can do SSE:
if (SIMD_CHECK2(n, sp[0]->s_vec, sp[1]->s_vec))
    dsp_add(volctl_perf_simd, 4, x, sp[0]->s_vec, sp[1]->s_vec, n);
if so, you can use the simd perform routine ...
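Put together, the dsp method might look roughly like this (a sketch modelled on volctl~; the t_volctl type and the routine names are illustrative):

static void volctl_dsp(t_volctl *x, t_signal **sp)
{
    int n = sp[0]->s_n;
    if (SIMD_CHECK2(n, sp[0]->s_vec, sp[1]->s_vec))
        /* runtime check passed: put the SIMD perform routine on the dsp chain */
        dsp_add(volctl_perf_simd, 4, x, sp[0]->s_vec, sp[1]->s_vec, n);
    else
        /* fall back to the plain C perform routine */
        dsp_add(volctl_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, n);
}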
LG Georg
Ed Kelly wrote:
the perform routine, or am I missing something here? I apologise for my lack of knowledge...
well, you have to write simd yourself! i think this was unclear from previous posts.
in pd_devel several often used functions (like adding 2 dsp-vectors) are implemented in SIMD. if you don't want to do that (add 2 dsp-vectors) then you will have to write SIMD-code yourself (ah yes, i am repeating repeating).
however, you might get a glimpse of how it is done.
for obvious reasons i (once again) would suggest to rather have a look at zexy how you would write SSE-code in an external. (e.g. look at the implementation of abs~ in the CVS-zexy)
while the tastes differ, i have found it most convenient to write SIMD-code not in assembler but rather in intrinsics (C-like function-wrappers around assembler): apart from taste this has also the benefit of being portable across different compilers (e.g. you can use the same implementation for both gcc and icc and m$vc).
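for illustration, the sum-of-absolutes loop from earlier in the thread might look like this with SSE intrinsics (a sketch, assuming the block size is a multiple of 4 and the 16-byte alignment that Pd gives its signal vectors):

#include <xmmintrin.h>  /* SSE intrinsics */

static float sum_abs_sse(const float *in, int n)
{
    const __m128 signmask = _mm_set1_ps(-0.0f); /* only the sign bit set */
    __m128 acc = _mm_setzero_ps();
    float tmp[4];
    int i;
    for (i = 0; i < n; i += 4)
        /* andnot clears the sign bit: a 4-way fabsf() */
        acc = _mm_add_ps(acc, _mm_andnot_ps(signmask, _mm_load_ps(in + i)));
    _mm_storeu_ps(tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}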
note however, that this doesn't create code that is portable across different architectures (you need different implementations for SSE and AltiVec, even though you compile both with gcc)
if you want to write code that is both portable across compilers and architectures, you should use C ;-)
mfa.sdr IOhannes
Thanks, these are my own personal "bootstraps" for me to start learning what SSE is all about!
Most, Ed
On Wed, 18 Jan 2006 14:58:16 +0100 IOhannes m zmoelnig zmoelnig@iem.at wrote:
while the tastes differ, i have found it most convenient to write SIMD-code not in assembler but rather in intrinsics (C-like function-wrappers around assembler): apart from taste this has also the benefit of being portable across different compilers (e.g. you can use the same implementation for both gcc and icc and m$vc).
after implementing a lot of stuff both with intrinsics and with inline assembler, i have to say that inline assembler can be the faster option, and i'd try to formulate every algorithm with inline assembler first to avoid register problems; but intrinsics have the advantage of being compiler/architecture independent ... some of my assembler code doesn't work on the x86_64 architecture because of a different implementation of pointer arithmetic ...
on the other hand, intrinsics do not produce the optimal code (at least gcc and icc don't) ... still faster than plain c ...
tim