I apologize to chuckk for not sending these posts to the right place.
About frequency analysis... It gets hard to explain how we use complex numbers in frequency analysis without going into higher mathematics. Simply shifting the signal by a constant does not get rid of complex numbers! The complex numbers do not necessarily come from "conservation of energy": the complex numbers are the eigenvalues of your basic oscillator x'' + w^2*x = 0
Here w is the angular frequency of oscillation, and the eigenvalues of the equation are i*w and -i*w. The eigenfunctions are e^(i*w*t) and e^(-i*w*t). (There is another type, the damped harmonic oscillator, which has complex rather than purely imaginary eigenvalues.)
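As a quick numerical check (a Python/numpy sketch; w = 2 is just an arbitrary example), write the oscillator as a first-order system and ask for its eigenvalues:

  import numpy as np

  w = 2.0                        # angular frequency of the oscillator
  # x'' + w^2*x = 0 as a first-order system: [x, x']' = A [x, x']
  A = np.array([[0.0, 1.0],
                [-w**2, 0.0]])
  print(np.linalg.eigvals(A))    # [0.+2.j, 0.-2.j], i.e. i*w and -i*w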
When we perform frequency analysis, we partition the energy in the signal into waves (with very specific phases) with distinct eigenvalues (the frequencies). The energy is directly related to the square of the magnitude of each coefficient and to the square of w.
Shifting by a constant only adds more energy to the "constant" (w=0) component of the signal.
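A quick numpy sketch of that claim (the tone and the offset are arbitrary examples):

  import numpy as np

  n = np.arange(64)
  x = np.sin(2*np.pi*5*n/64)       # a pure tone in bin 5
  X  = np.fft.fft(x)
  Xc = np.fft.fft(x + 3.0)         # the same signal shifted by a constant
  print(np.abs(Xc - X)[:8])        # only bin 0 (w=0) changes, by 64*3 = 192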
The Fourier transform is at the heart of DSP for a specific reason: any signal can be *exactly* represented through a change of basis (change of basis here means a transformation from one set of numbers to another through a one-to-one function that preserves energy, the metric).
R = the set of real numbers
C = the set of complex numbers
R2 = the set of pairs of real numbers (two dimensions)
Everybody knows what R2 is, the x-y plane, so it's a good starting point. The distance between the origin and (x,y) is sqrt(x^2 + y^2). This is called a metric, a measure of distance.
Okay, so suppose we have sqrt(x) where x is R. (I'll use "is" to denote "belongs to") For each x in R, we get back a number, which means the result belongs to some set.
sqrt(x) is [0,inf) when x is [0,inf). (I'll abbreviate infinity as inf.)
The step which produces the complex numbers is analytic continuation. When x is R, the result may or may not belong to R, but it is definitely a number. When x is negative, we have the sqrt of a negative number in R, which is defined as an imaginary number (just a name; still a number).
Again, by analytic continuation, we can then take sqrt(x) when x is imaginary! And the result is a number halfway between the imaginary numbers and R. It is not one or the other! If you work this out, you can show, simply from the definitions, that it is the sum of an imaginary number and a real number. Therefore, we can define the complex numbers as the sums of real and imaginary numbers.
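You can check this numerically, for instance with Python's cmath module: sqrt(i) comes out as (1 + i)/sqrt(2), a real part plus an imaginary part:

  import cmath

  z = cmath.sqrt(1j)    # square root of an imaginary number
  print(z)              # (0.7071...+0.7071...j): real part + imaginary part
  print(z*z)            # recovers 1j (up to rounding)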
R2 is a 2-dimensional vector space, meaning that it takes two linearly independent vectors (not on the same line) to span the entire space with linear combinations a*w + b*z, where w and z are vectors. Now, to make things useful, we define an inner product on R2: <w,z> = w1*z1 + w2*z2. The metric, or norm, is sqrt(<w,w>) = sqrt(w1^2 + w2^2).
A very similar thing happens when we define an inner product and a metric on C. For A and B in C, <A,B> = A*conj(B) (conj(B) stands for the conjugate of B), and the norm, or metric, is |A| = sqrt(<A,A>) = sqrt(A*conj(A)). Writing A = A1 + i*A2, that is sqrt(A1^2 - i^2*A2^2) = sqrt(A1^2 + A2^2).
meaning the metric space defined on C is isometric to (has the same norm as) the metric space defined on R2. Notice that the two spaces have different inner products, but the same norm.
In R2, the inner product of two vectors (w,x) and (y,z) is <(w,x),(y,z)>= w*y + x*z
In C, the inner product of w+ix and y+iz is <w+ix,y+iz>= (w+ix)*(y-iz) = (w*y + x*z) + i*(x*y - w*z)
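A small numpy sketch comparing the two inner products (the values of w, x, y, z are arbitrary):

  import numpy as np

  w, x, y, z = 1.0, 2.0, 3.0, 4.0
  A, B = complex(w, x), complex(y, z)

  r2 = w*y + x*z                    # inner product in R2
  c  = A * np.conj(B)               # inner product in C
  print(r2, c)                      # c = (w*y + x*z) + i*(x*y - w*z)
  print(abs(A), np.hypot(w, x))     # the same norm in both spaces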
The difference between the two is: over R2, we can have perpendicular vectors which cover the space, but in C, none of the vectors are "orthogonal" to one another unless one of them is identically zero. (This result is not very interesting, I think, but it explains why the product of two complex numbers is a "rotation".)
The *real* result that we want comes from the analytic continuation of e^x, where x is R: e^x : x |-> (0,inf) when x is R, *but* e^x : x |-> C when x is C.
Also, many important properties are present:
  if z = e^x, then conj(z) = e^conj(x), for x is C
  cosh(x) = (e^x + e^-x)/2
  cosh(i*x) = cos(x) and cos(i*x) = cosh(x), for x is R
  e^(i*x) = cos(x) + i*sin(x), for x is R
So, any nonzero number in R can be represented by e^x when x is C, and any sinusoid with any phase shift can be represented by a sum z*e^(i*x) + conj(z)*e^(-i*x).
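For instance, a numpy sketch (the amplitude, phase, and frequency are arbitrary choices):

  import numpy as np

  t = np.linspace(0, 2*np.pi, 100)
  w = 3.0
  z = 0.5 * np.exp(1j*np.pi/4)                 # amplitude and phase in one number
  s = z*np.exp(1j*w*t) + np.conj(z)*np.exp(-1j*w*t)
  print(np.max(np.abs(s.imag)))                # ~0: the sum is purely real
  # s equals 2*|z|*cos(w*t + angle(z)), a sinusoid with phase pi/4
  print(np.allclose(s.real, 2*abs(z)*np.cos(w*t + np.angle(z))))   # True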
To show why a signal can be exactly represented by its transform is even more tricky, and requires the definition of Hilbert spaces, the set of continuous functions C0, the polynomials, trigonometric identities on cos(n*x), and the Lebesgue integral... I don't want to go through all the details right now. It really is a *long* explanation.
The result is:
The Fourier transform is an isometry (a metric-preserving transform) between C^N (vectors of N complex numbers) and C^N, where the coefficients of the transform give the amplitudes and relative phases of the e^(i*n*x) terms. Real-valued signals are a special case, where the first N/2 coefficients in C^N represent the phases and amplitudes, and the remaining ones are just complex conjugates of the first N/2.
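A numpy sketch of both properties, using the unitary ("ortho") normalization so the transform really is an isometry:

  import numpy as np

  N = 16
  x = np.random.randn(N)                   # a real-valued signal
  X = np.fft.fft(x, norm="ortho")          # unitary normalization: an isometry
  print(np.allclose(X[1:], np.conj(X[1:][::-1])))   # X[N-k] = conj(X[k])
  print(np.linalg.norm(x), np.linalg.norm(X))       # same energy in both domains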
When you analyze a (real-valued) signal, the complex numbers don't exist as parts of the signal or as placeholders for missing energy! They are only indicators of phases of sinusoids in the frequency domain. The real place where complex values live: the eigenvalues of harmonic oscillators.
-- Charles Zachary Henry
anti.dazed.med Med student who needs a Mickey's
On Sat, 12 Nov 2005, Charles Henry wrote:
The complex numbers do not necessarily come from "conservation of energy"
Indeed, there are other ways to explain conservation of energy, basically by invoking kinetic energy; and there are other ways to justify complex numbers in wave phenomena.
the complex numbers are the eigenvalues of your basic oscillator x'' + w^2*x = 0
Let me take this apart:
those complex numbers are eigenvalues because eigenvalues are obtained by factoring the characteristic polynomial of the matrix.
Factorization of elements of R[L] (polynomials with Real coefficients in a single variable called L) can yield irreducible elements of degree 2, that is, L*L + a positive constant.
The reason for introducing complex numbers is that they make factorization smoother by allowing all polynomials to be factored down to terms of degree 1. And then "L*L + positive constant = 0" means "L*L = negative constant", so the only way to find L here is to invent a number whose square is a negative constant.
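For instance, a one-line numpy check that L*L + w^2 factors over the Complexes (w = 2 is arbitrary):

  import numpy as np

  w = 2.0
  print(np.roots([1.0, 0.0, w**2]))   # roots of L*L + w^2: [0.+2.j, 0.-2.j]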
Inventing extra numbers is allowed as long as they stay consistent with the number system they are based on. So the Complex numbers are called an Extension of the Real numbers because + - * / on Complexes are intuitive extensions of those same operations on Reals.
Indeed, playing with Complexes feels like playing with a very limited version of polynomials on Reals, so you can do it with 8th grade algebra.
Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju Freelance Digital Arts Engineer, Montréal QC Canada
the complex numbers are the eigenvalues of your basic oscillator x'' + w^2*x = 0
Let me take this apart:
those complex numbers are eigenvalues because eigenvalues are obtained by factoring the characteristic polynomial of the matrix.
And in this case, we are talking about the functions in a Hilbert space, with eigenvalues of the differential equation
x'' + w^2*x = 0: the eigenfunctions are just e^(-i*w*t) and e^(i*w*t), and the eigenvalues are -i*w and i*w. Our "matrix" in this case acts on the Hilbert space L2. It's not like you can take the determinant of a set of functions, but the linear operator can be factored:
x'' + w^2*x = (D^2 + w^2)x = (D + i*w)(D - i*w)x, where D is the differential operator. Your eigenfunctions are the solutions to (D + i*w)x = 0 and (D - i*w)x = 0. We can interchange the order of (D + i*w) and (D - i*w), so the set of eigenfunctions is the set of solutions to either (D + i*w)x = 0 or (D - i*w)x = 0.
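A quick symbolic check, for instance with Python's sympy, that both exponentials solve the equation:

  from sympy import symbols, exp, I, diff, simplify

  t = symbols('t', real=True)
  w = symbols('w', positive=True)
  for f in (exp(I*w*t), exp(-I*w*t)):
      print(simplify(diff(f, t, 2) + w**2*f))   # prints 0 twice: both solve the ODE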
It's the same result you have for a matrix, except you have different linear operators, like matrix multiplication instead of the D's. Then you find the eigenvalues b of your matrix operator from (A - b*I)x = 0, where I is the identity operator.
The Fourier transform is an operation which can be performed as an NxN matrix multiplication on a sampled signal of length N, although in the Hilbert space we use the inner product on L2. It's no surprise that the matrix F for the Fourier transform, X(f) = F x(t), can be diagonalized, because it is unitary; all of its eigenvalues have magnitude 1 (in fact they are 1, -1, i, and -i).
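A numpy sketch: build the unitary NxN DFT matrix, check unitarity and the eigenvalues, and compare with numpy's own FFT (N = 8 is arbitrary):

  import numpy as np

  N = 8
  n = np.arange(N)
  F = np.exp(-2j*np.pi*np.outer(n, n)/N) / np.sqrt(N)    # unitary DFT matrix
  print(np.allclose(F @ F.conj().T, np.eye(N)))          # True: F is unitary
  ev = np.linalg.eigvals(F)
  print(np.round(ev, 6))       # eigenvalues are 1, -1, i, -i, all of magnitude 1
  x = np.random.randn(N)
  print(np.allclose(F @ x, np.fft.fft(x, norm="ortho"))) # matches the FFT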
-- Charles Zachary Henry
anti.dazed.med Med student who needs a Mickey's
On Sun, 13 Nov 2005, Charles Henry wrote:
x'' + w^2*x = 0: the eigenfunctions are just e^(-i*w*t) and e^(i*w*t), and the eigenvalues are -i*w and i*w. Our "matrix" in this case acts on the Hilbert space L2. It's not like you can take the determinant of a set of functions, but the linear operator can be factored:
Look, it's not like in the PureData world anyone would bother with infinite sample rates.
What I wanted you to understand is that around here it doesn't matter that it's possible to consider infinite-dimensional spaces. Because the sampling rate is always finite and the block size is always finite too, the inner product isn't an infinite series, so you don't need to bother with convergence; you don't have to consider function spaces as any different from classic vector spaces; all your functions are representable by finite-dimensional vectors; and you can always take the determinant of a linear transform.
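For instance (a numpy sketch): once the block size is fixed, even a one-sample circular delay is just an NxN matrix, and its determinant is perfectly well defined:

  import numpy as np

  N = 64                              # a typical block size
  D = np.roll(np.eye(N), 1, axis=0)   # a one-sample circular delay, as a matrix
  print(np.linalg.det(D))             # -1.0 (up to rounding)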
Anyway, infinite sample rates are physically impossible, let alone engineerable, let alone noticeable by the human ear.
Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju Freelance Digital Arts Engineer, Montréal QC Canada
On Sat, 12 Nov 2005, Charles Henry wrote:
The Fourier transform is at the heart of DSP for a specific reason: any signal can be *exactly* represented through a change of basis (change of basis here means a transformation from one set of numbers to another through a one-to-one function that preserves energy, the metric).
What you're describing is not just a change of basis, it's an isometric one. An ordinary change of basis could change the definition of energy in any possible way, as long as it has an inverse change of basis that can go back exactly to the original basis.
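A small numpy illustration of the difference (the 4-dimensional space and the random bases are arbitrary):

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.standard_normal(4)

  Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthonormal basis: isometric
  B = rng.standard_normal((4, 4))                    # generic invertible basis

  print(np.linalg.norm(x), np.linalg.norm(Q @ x))    # equal: energy preserved
  print(np.linalg.norm(B @ x))                       # different: energy redefined
  print(np.allclose(np.linalg.solve(B, B @ x), x))   # but still exactly invertible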
Okay, so suppose we have sqrt(x) where x is R. (I'll use "is" to denote "belongs to")
That would be confusing. It's much better to use "is in" or even "is a".
To show why a signal can be exactly represented by its transform is even more tricky, and requires the definition of Hilbert spaces, the set of continuous functions C0, the polynomials, trigonometric identities on cos(n*x), and the Lebesgue integral... I don't want to go through all the details right now. It really is a *long* explanation.
It's only so if you really want to posit the existence of infinitely detailed signals. In practice you have to do away with all kinds of infinity when dealing with actual samples and so you can safely dismiss most of the artifacts of so-called "Real" numbers.
Using [fft~] in Pd is little more than converting from a 128-dimensional space into another 128-dimensional space, which may sound scary and alien, but not as much as any flavour of infinite-dimensional spaces that we may have to introduce in the case of pretending to have infinite precision.
They are only indicators of phases of sinusoids in the frequency domain.
Right. The real vs imaginary parts correspond to cosine amplitude vs sine amplitude. A polar transform (or a complex-log transform) can separate the phase from the total amplitude.
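For instance, a numpy sketch (one tone with both a cosine and a sine part):

  import numpy as np

  n = np.arange(64)
  x = 1.5*np.cos(2*np.pi*4*n/64) + 0.5*np.sin(2*np.pi*4*n/64)
  X = np.fft.fft(x)
  k = 4
  print(2*X[k].real/64, -2*X[k].imag/64)   # 1.5 and 0.5: cosine and sine amplitudes
  print(np.abs(X[k]), np.angle(X[k]))      # polar form: total amplitude and phase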
Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju Freelance Digital Arts Engineer, Montréal QC Canada