This looks interesting: http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
-Jonathan
Considering the paper is unpublished and sparse decomposition is a pretty heavy topic, I thought that was a really nice bit of science journalism by Larry Hardesty. Since periodic music signals probably fit the bill quite well, it's good news for our kind of work.
On Fri, 20 Jan 2012 09:40:35 -0800 (PST) Jonathan Wilkes jancsika@yahoo.com wrote:
This looks interesting: http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
-Jonathan
Yeah, seems like very good news for a number of media topics. Apparently it should apply well to media compression. Anyone have any idea how long before we'll see this stuff in a usable library?
.hc
On Jan 20, 2012, at 6:55 PM, Andy Farnell wrote:
Considering the paper is unpublished and sparse decomposition is a pretty heavy topic, I thought that was a really nice bit of science journalism by Larry Hardesty. Since periodic music signals probably fit the bill quite well, it's good news for our kind of work.
On Fri, 20 Jan 2012 09:40:35 -0800 (PST) Jonathan Wilkes jancsika@yahoo.com wrote:
This looks interesting: http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
-Jonathan
-- Andy Farnell padawan12@obiwannabe.co.uk
Terrorism is not an enemy. It cannot be defeated. It's a tactic. It's about as sensible to say we declare war on night attacks and expect we're going to win that war. We're not going to win the war on terrorism. - retired U.S. Army general, William Odom
On 2012-01-20 at 23:55:00, Andy Farnell wrote:
Considering the paper is unpublished and sparse decomposition is a pretty heavy topic, I thought that was a really nice bit of science journalism by Larry Hardesty. Since periodic music signals probably fit the bill quite well, it's good news for our kind of work.
Does it have better upper bounds ?
If not, it doesn't change much for realtime.
E.g. if I feed something somewhat like white noise to this FFT, what exactly will it try to skip ? Every harmonic will be fairly nonzero.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
To quote them: "A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about."
If I didn't get it wrong, I think they mention that the "premise" is the opposite of white noise, i.e. that signals of interest do not exhibit the behavior of white noise (energy spread out across the whole spectrum). Once again, it's just a tool, and tools are useful for certain cases and not useful for other cases.
Like the hammer.
Best, pedro
On Wed, Jan 25, 2012 at 11:03 PM, Mathieu Bouchard matju@artengine.ca wrote:
On 2012-01-20 at 23:55:00, Andy Farnell wrote:
Considering the paper is unpublished and sparse decomposition is a pretty heavy topic, I thought that was a really nice bit of science journalism by Larry Hardesty. Since periodic music signals probably fit the bill quite well, it's good news for our kind of work.
Does it have better upper bounds ?
If not, it doesn't change much for realtime.
E.g. if I feed something somewhat like white noise to this FFT, what exactly will it try to skip ? Every harmonic will be fairly nonzero.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 2012-01-25 at 23:10:00, Pedro Lopes wrote:
If I didn't get it wrong, I think they mention that the "premise" is the opposite of white noise, i.e. that signals of interest do not exhibit the behavior of white noise (energy spread out across the whole spectrum).
I'm just stating a worst case. No one would bother analysing white noise. But if I analyse speech that contains shhh sounds, or if I analyse music that has a snare drum, the analyser might have trouble. The same goes if I play a chord on a fuzz guitar, as fuzz greatly increases the number of harmonics, especially if the pre-fuzz signal has low freqs. There are plenty of interesting signals that contain lots of harmonics.
Once again, it's just a tool, and tools are useful for certain cases and not useful for other cases. Like the hammer.
The hammer is a great analyser. It helps take things apart. I can use it to analyse sounds. It's easy, I just have to threaten a music student with it, and the music student will then produce the desired output.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On Wed, Jan 25, 2012 at 4:03 PM, Mathieu Bouchard matju@artengine.ca wrote:
On 2012-01-20 at 23:55:00, Andy Farnell wrote:
Considering the paper is unpublished and sparse decomposition is a pretty heavy topic, I thought that was a really nice bit of science journalism by Larry Hardesty. Since periodic music signals probably fit the bill quite well, it's good news for our kind of work.
Does it have better upper bounds ?
If not, it doesn't change much for realtime.
E.g. if I feed something somewhat like white noise to this FFT, what exactly will it try to skip ? Every harmonic will be fairly nonzero.
I've wondered about that too, from a perceptual point of view. There's not much differentiation among noise signals. They all mostly sound the same.
Even though there's a very large number of dimensions for variation, they must have a sparse representation somewhere (in the brains... brains...)
but the FFT? no way...
On 2012-01-25 at 16:16:00, Charles Henry wrote:
I've wondered about that too, from a perceptual point of view. There's not much differentiation among noise signals. They all mostly sound the same.
Well, the brain still has to extract enough info from them to have a general idea of what kind of noise it is (a rough shape of spectrum), and ESPECIALLY extract the non-noise portions of the spectrum.
Even though there's a very large number of dimensions for variation, they must have a sparse representation somewhere (in the brains... brains...)
but the FFT? no way...
Well, there are perceptual models that try to separate «noise» from «non-noise» for easier encoding and stuff...
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
I read about this in New Scientist. It seemed interesting at first, but I am fairly sure that it is quite specific in its applications. It reminds me of the MPEG-1 Layer 3 approach, which is to "psychoacoustically" encode the audio with less data by removing parts of the signal we "don't hear". I think the intended application (telecoms) is more concerned with locating free radio space than accessing partials in a better way, although I may be wrong. I'm sure it will have audio applications eventually...
Still the problem with any window-based FFT is that we have to get enough points (e.g. 512, 1024) before we can do the analysis, so there is always a delay (1024/44100 ≈ 23 ms) between live input and output (latency). Much more interesting is the sliding phase vocoder (Russell Bradford, Richard Dobson, and John ffitch, 2005) where the FFT is adjusted each sample rather than each frame, and the latency is significantly reduced. They also told me it is capable of things like FM synthesis with a live input as the carrier, although I haven't heard any examples of this (yet).
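(A quick check of that arithmetic, as a throwaway Python sketch; the window size and sample rate are just the figures from the paragraph above:)

sr, npoints = 44100, 1024        # sample rate and FFT window size
print(1000.0 * npoints / sr)     # ~23.2: milliseconds of input buffered per frame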
Ed
Reference: dream.cs.bath.ac.uk/Papers/spv-icmc2007.pdf
Gemnotes-0.1alpha: Live music notation for Pure Data http://sharktracks.co.uk/
From: Mathieu Bouchard matju@artengine.ca To: Charles Henry czhenry@gmail.com Cc: pd-list@iem.at Sent: Wednesday, 25 January 2012, 22:25 Subject: Re: [PD] OT: faster than Fourier transform
On 2012-01-25 at 16:16:00, Charles Henry wrote:
I've wondered about that too, from a perceptual point of view. There's not much differentiation among noise signals. They all mostly sound the same.
Well, the brain still has to extract enough info from them to have a general idea of what kind of noise it is (a rough shape of spectrum), and ESPECIALLY extract the non-noise portions of the spectrum.
Even though there's a very large number of dimensions for variation, they must have a sparse representation somewhere (in the brains... brains...)
but the FFT? no way...
Well, there are perceptual models that try to separate «noise» from «non-noise» for easier encoding and stuff...
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 2012-02-02 at 02:36:00, Ed Kelly wrote:
Still the problem with any window-based FFT is that we have to get enough points (e.g. 512, 1024) before we can do the analysis, so there is always a delay (1024/44100 ≈ 23 ms) between live input and output (latency).
Heisenberg's Uncertainty Principle (in its acoustic version) is a hard theoretical limit, meaning it can't possibly be violated. All you can do is separate the analysis of frequencies so that you get frequent results for treble vs less frequent results for bass. This comes at the expense of having to customise, complicate and/or slow down FFT to make it do what you want.
Much more interesting is the sliding phase vocoder (Russell Bradford, Richard Dobson, and John ffitch, 2005) where the FFT is adjusted in each sample rather than each frame, and the latency is significantly reduced.
What's the speed of the sliding DFT ?
Note that they call it «DFT», and not «FFT», and that's presumably because it doesn't have much to do with the FFT... whereas DFT is a more generic term for whatever computes harmonics on blocks of discrete signals.
Access to the sliding DFT article is reserved to IEEE members...
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On Thu, Feb 2, 2012 at 3:07 PM, Mathieu Bouchard matju@artengine.ca wrote:
On 2012-02-02 at 02:36:00, Ed Kelly wrote:
Still the problem with any window-based FFT is that we have to get enough points (e.g. 512, 1024) before we can do the analysis, so there is always a delay (1024/44100 ≈ 23 ms) between live input and output (latency).
Heisenberg's Uncertainty Principle (in its acoustic version) is a hard theoretical limit, meaning it can't possibly be violated.
As I understand it, it's just a sheer mathematical fact expressing the trade-off between the std deviation of a distribution in the time domain and the std deviation of the Fourier transform of that distribution.
But then you apply it to the Schrödinger equation (a differential equation involving position and momentum) and you get the most famous corollary in quantum mechanics.
I agree that it's a hard limit, but it's really just a math problem to begin with :)
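For reference, one standard statement of that trade-off (my paraphrase, not from the paper), assuming a unit-energy signal g(t) with zero mean time and zero mean frequency, and the convention \hat{g}(\omega) = \int g(t)\,e^{-i\omega t}\,dt:

\sigma_t^2 = \int t^2\,|g(t)|^2\,dt, \qquad
\sigma_\omega^2 = \frac{1}{2\pi}\int \omega^2\,|\hat{g}(\omega)|^2\,d\omega, \qquad
\sigma_t\,\sigma_\omega \;\ge\; \tfrac{1}{2}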
All you can do is separate the analysis of frequencies so that you get frequent results for treble vs less frequent results for bass. This comes at the expense of having to customise, complicate and/or slow down FFT to make it do what you want.
Much more interesting is the sliding phase vocoder (Russell Bradford, Richard Dobson, and John ffitch, 2005) where the FFT is adjusted in each sample rather than each frame, and the latency is significantly reduced.
What's the speed of the sliding DFT ?
I can think of an O(n) algorithm for it (per sample). I'd be surprised if there's anything better. You'd still have to do an FFT to begin with, and after shifting by m samples, you'd be in O(m*n) territory.
Assume you've got a signal s on [0,N-1] and you have already computed its DFT, plus this vector F1 = {e^(2*pi*i*k/N)}, k = 0, ..., N-1.
The next DFT to calculate is on [1,N]. First subtract out the contribution of s(0); this is easy because the DFT of {s(0),0,0,...} is {s(0),s(0),s(0),...}.
Next, the values in the resulting vector are multiplied by F1. This has the effect of shifting the time domain signal to the left by 1 sample.
Then, add in the effect of the new sample, F1*s(N).
I see some ways those steps could be made simpler. Lump the 1st and 3rd steps together and just add s(N)-s(0) to all the DFT values, then multiply once by F1 (performing a circular shift in the time domain). On the next iteration, use s(N+1)-s(1). So, the total cost is: storage for the buffer of N samples, the complex vector of N DFT values, and F1; plus, per sample, N+1 additions/subtractions and N complex multiplies (each being 4 multiplies and 2 additions/subtractions).
There might be some constants to shake out depending on which DFT formula you decide to use. The sign convention on the DFT is arbitrary, so I often get mixed up (F1 could be e^(-2*pi*i*k/N), k = 0, ..., N-1, if I'm wrong).
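A minimal numpy sketch of that per-sample update, assuming numpy's usual forward-DFT sign convention (negative exponent), in which case F1 with the positive exponent, as written above, is the right rotation; all names here are made up for illustration:

import numpy as np

def sliding_dft_step(X, buf, pos, x_new, F1):
    # X: the N current DFT bins; buf: the last N samples (circular); pos: index of the oldest
    x_old = buf[pos]
    X = (X + (x_new - x_old)) * F1    # add s(N)-s(0) to every bin, then rotate once by F1
    buf[pos] = x_new                  # the new sample replaces the oldest one
    return X, (pos + 1) % len(buf)

N = 8
sig = np.random.default_rng(0).standard_normal(N + 16)
F1 = np.exp(2j * np.pi * np.arange(N) / N)    # e^(2*pi*i*k/N), k = 0, ..., N-1
X = np.fft.fft(sig[:N])                       # an ordinary DFT of the first window
buf, pos = sig[:N].copy(), 0
for n in range(N, N + 16):
    X, pos = sliding_dft_step(X, buf, pos, sig[n], F1)
assert np.allclose(X, np.fft.fft(sig[n - N + 1:n + 1]))   # same as recomputing the full DFT

Per sample, that is exactly the N+1 additions/subtractions and N complex multiplies counted above.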
Note that they call it «DFT», and not «FFT», and that's presumably because it doesn't have much to do with the FFT... whereas DFT is a more generic term for whatever computes harmonics on blocks of discrete signals.
Access to the sliding DFT article is reserved to IEEE members...
On 2012-02-03 at 10:10:00, Charles Henry wrote:
As I understand it, it's just a sheer mathematical fact expressing the trade-off between the std deviation of a distribution in the time domain and the std deviation of the Fourier transform of that distribution.
Well, there are not-so-obvious implications, and this is why there are several different reformulations of the principle (position vs momentum ; energy vs time ; sampling signals ; entropy ; etc).
Even when just looking at the version(s) about sampling, different ideas come out of it. You can't get information about a frequency if you don't take enough time to measure it or if you don't take enough measurements. Otherwise it may have been any other frequency. Whenever you do sampling, you have to either suppose or ensure that the analog signal does not contain frequencies that will confuse you. Otherwise, you'll get aliases.
This might be expressible more precisely in terms of standard deviations, but I haven't really studied the topic of sampling to that level of depth.
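As a tiny illustration of the aliasing point (a Python/numpy sketch; the 1 kHz tone and 8 kHz sample rate are arbitrary choices): a sine at f and a sine at f plus the sample rate produce exactly the same samples, so nothing in the sampled data can tell them apart.

import numpy as np
fs = 8000.0                                      # sample rate
m = np.arange(32)                                # 32 sample instants
a = np.sin(2 * np.pi * 1000.0 * m / fs)          # 1 kHz tone
b = np.sin(2 * np.pi * (1000.0 + fs) * m / fs)   # 9 kHz tone: an alias of the first
print(np.allclose(a, b))                         # True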
I agree that it's a hard limit, but it's really just a math problem to begin with :)
Math is an excellent source of hard limits. There are not as many hard limits that require you to look at the universe to find them.
What's the speed of the sliding DFT ?
I can think of an O(n) algorithm for it (per sample). I'd be surprised if there's anything better.
Well, if you have to produce n slightly different values at the time of every sample, that implies O(n) steps. That's a hard limit unless you use lazy evaluation.
But if you have different rates for different harmonics, you might be able to skip more computations...
You'd still have to do an FFT to begin with, and after shifting by m samples, you'd be in O(m*n) territory.
Shifting what by m samples ?
A sliding DFT doesn't imply an FFT. It could mean doing the long form of the DFT but in a gradual fashion... a slow integrator, just like what you could be doing using [rpole~ 1] on the result of [*~] [osc~].
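A rough Python/numpy sketch of that "gradual long-form DFT" idea: one bin, accumulated one sample at a time by multiplying the signal with a complex oscillator and integrating (roughly what [rpole~ 1] on the product of the signal and an oscillator would do, once for the real part and once for the imaginary part). The function name is made up:

import numpy as np

def one_bin_gradual(sig, k, N):
    osc = np.exp(-2j * np.pi * k * np.arange(N) / N)   # complex oscillator at bin k
    acc = 0j
    for m in range(N):                                 # plain integrator, one sample per tick
        acc += sig[m] * osc[m]
    return acc                                         # equals np.fft.fft(sig[:N])[k]

sig = np.random.default_rng(1).standard_normal(64)
print(np.isclose(one_bin_gradual(sig, 5, 64), np.fft.fft(sig)[5]))   # True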
I see some ways those steps could be made simpler. Lump the 1st and 3rd steps together and just add s(N)-s(0) to all the DFT values, then multiply once by F1 (performing a circular shift in the time domain).
I was going to say that...
On the next iteration, use s(N+1)-s(1). So, the total cost is: storage for the buffer of N samples, the complex vector of N DFT values, and F1; plus, per sample, N+1 additions/subtractions and N complex multiplies (each being 4 multiplies and 2 additions/subtractions).
BTW, if you multiply F1 by 1+i = sqrt(2i), then you can use a partially-precomputed version of Gauss' multiplication shortcut... but it doesn't make an improvement of order.
http://en.wikipedia.org/wiki/Multiplication_algorithm#Gauss.27s_complex_mult...
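For reference, a plain-Python sketch of the basic 3-multiplication shortcut from that page, with the parts that depend only on the constant factor precomputed, as one could do once per entry of F1 (the helper names are made up):

def precompute(c, d):
    # done once per constant factor c + d*i, e.g. per twiddle factor
    return c, d - c, c + d

def gauss_cmul(a, b, pre):
    # (a + b*i) * (c + d*i) using 3 real multiplies instead of 4
    c, d_minus_c, c_plus_d = pre
    k1 = c * (a + b)
    k2 = a * d_minus_c
    k3 = b * c_plus_d
    return k1 - k3, k1 + k2       # (real part, imaginary part)

print(gauss_cmul(1.0, 2.0, precompute(3.0, 4.0)))   # (-5.0, 10.0), i.e. (1+2i)(3+4i)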
There might be some constants to shake out depending on which DFT formula you decide to use. The sign convention on the DFT is arbitrary, so I often get mixed up (F1 could be e^(-2*pi*i*k/N), k = 0, ..., N-1, if I'm wrong).
The sign used in a forward DFT depends on whether you are using cos(t+offset) vs cos(t-offset). If your phases are mirrors of everybody else's, then your forward DFT's sign will have to be the opposite of everybody else's too.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC