Hi all, I'd like to join in as I'm in need of an FFT algorithm with the following properties:
- in-place (or with a small workspace)
- separated vectors for real and imaginary data, with stride factors
- preferably also mixed-radix (for non-power-2 transforms)

Alternatively, as FFTW exists (but doesn't fulfill the 2nd requirement):
- is there an algorithm that transforms a vector of complex points into separated real and imaginary vectors in place? (see the sketch below)
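To make the question concrete, here's the kind of thing I mean, as a naive cycle-following permutation (my own sketch with illustrative names; it gets by with O(1) workspace, but it walks each cycle twice to find the cycle minimum, so presumably something smarter exists):

#include <stddef.h>

/* Where the element at interleaved index i belongs in the split layout:
   even slots (reals) go to the first half, odd slots (imags) to the
   second half of the 2n-element array. */
static size_t dest(size_t i, size_t n)
{
    return (i % 2 == 0) ? i / 2 : n + i / 2;
}

/* In-place deinterleave of a[0..2n-1] = {re0, im0, re1, im1, ...}
   into {re0..re(n-1), im0..im(n-1)}, rotating one permutation cycle
   at a time and starting each cycle only at its minimal index. */
void deinterleave(float *a, size_t n)
{
    size_t start, i;
    float tmp, t;
    for (start = 1; start + 1 < 2 * n; start++) {
        /* skip `start' unless it is the smallest index on its cycle */
        i = dest(start, n);
        while (i > start)
            i = dest(i, n);
        if (i < start)
            continue;
        /* rotate the values along the cycle */
        tmp = a[start];
        for (i = dest(start, n); i != start; i = dest(i, n)) {
            t = a[i];
            a[i] = tmp;
            tmp = t;
        }
        a[start] = tmp;
    }
}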
Many thanks, Thomas
-----Original Message----- From: Orm Finnendahl [mailto:finnendahl@folkwang-hochschule.de] Sent: Wednesday, April 10, 2002 10:53 To: pd-list@iem.kug.ac.at Subject: [PD] fft algorithms
Hi Miller,
I was looking around for FFT algorithms I might want to use in a project and found many different implementations with different benchmarks. Looking into the sources, I found that Pd seems to use a method implementing a network-like approach by Kevin Peterson from '86. As I didn't find that method on any of the benchmark pages, could you comment on its efficiency? From those pages it seems that considerable advances in algorithms and speed were made in the '90s. But that may not matter in Pd's case, or, if there are faster algorithms, your choice may be tied to Pd's overall architecture.
I'm just asking because my project will probably involve some heavy FFT processing, and I'm trying to find out whether it's necessary to implement other FFT routines than the existing ones to minimize CPU load.
Yours, Orm
Hi,
I noticed another FFT algorithm in the sources (Mayer-Buneman with Hartley transform), which appears in benchmark tests from 1993
(http://www.geocities.com/ResearchTriangle/8869/1993_fft_summary.html)
as the leader for vectors of size 64. I presume that is the one Pd is using for DSP computation, or am I wrong? What's the other one for? How does it compare to FFTW? In another benchmark test
(http://www.fftw.org/benchfft/results/pii-300.html)
this one seems to be faster, but those tests can be deceptive.
Orm
Pd uses the "mayer" FFT, which seems to be the easiest to maintain of any FFT code I've seen, but there are rumors of other packages running much faster...
cheers Miller
hi Thomas,
it looks like your 2nd requirement is met by rfftw:
``FFTW's one-dimensional real transforms store hermitian arrays as "halfcomplex" arrays. A halfcomplex array of size n is a one-dimensional array of n `fftw_real' numbers. A hermitian array X is stored into a halfcomplex array Y as follows. For all integers i such that 0 <= i <= n / 2, we have Y[i] = Re(X[i]). For all integers i such that 0 < i < n / 2, we have Y[n-i] = Im(X[i]).''
The actual `real2hc' codelets use separate real and imag arrays.
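For instance, a quick sketch of unpacking rfftw's halfcomplex output into separate real and imaginary vectors (FFTW 2.x API; the array names and the test signal are mine):

#include <stdio.h>
#include <fftw.h>
#include <rfftw.h>

#define N 8   /* transform size (even here, so DC and Nyquist bins are real) */

int main(void)
{
    fftw_real in[N], out[N];           /* out[] holds the halfcomplex data */
    fftw_real re[N/2 + 1], im[N/2 + 1];
    int i;
    rfftw_plan p = rfftw_create_plan(N, FFTW_REAL_TO_COMPLEX, FFTW_ESTIMATE);

    for (i = 0; i < N; i++)
        in[i] = i;                     /* arbitrary test signal */
    rfftw_one(p, in, out);

    /* unpack: Y[i] = Re(X[i]) for 0 <= i <= n/2,
               Y[n-i] = Im(X[i]) for 0 < i < n/2 */
    for (i = 0; i <= N/2; i++)
        re[i] = out[i];
    im[0] = im[N/2] = 0;               /* purely real bins for even n */
    for (i = 1; i < N/2; i++)
        im[i] = out[N - i];

    for (i = 0; i <= N/2; i++)
        printf("bin %d: %f + %f i\n", i, (double)re[i], (double)im[i]);

    rfftw_destroy_plan(p);
    return 0;
}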
Krzysztof
Thomas Grill wrote: ...
following properties:
- in-place (or with a small workspace)
- separated vectors for real and imaginary data, with stride factors
- preferably also mixed-radix (for non-power-2 transforms)
Alternatively, as FFTW exists (but doesn't fulfill the 2nd requirement):
- is there an algorithm that transforms a vector of complex points into
separated real and imaginary vectors in place?