On Sun, 23 Dec 2007, Charles Henry wrote:
To split hairs, we want to constrain the total energy in mixing signals, which means we have to expand the inner product.
I mentioned convex spaces possibly because you can deform your space so that you don't have to do it with the inner product. If each of your dimensions' values is an energy level instead of an amplitude, then instead of forcing the inner product to be 1, you can force the simple sum of all components to be 1, which is a linear equation instead of a quadratic one. It changes the nature of how you cross-fade between components, but not necessarily in a bad way.
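A minimal numpy sketch of the two constraints (not from the thread; just to make the claim concrete): an amplitude vector sits on the unit sphere, where the constraint sum(x**2) == 1 is quadratic, while the corresponding energy vector e = x**2 sits on the simplex, where the constraint sum(e) == 1 is linear.

    import numpy as np

    x = np.array([0.6, 0.8])    # amplitude vector on the unit sphere
    print(np.sum(x**2))         # quadratic constraint: 1.0

    e = x**2                    # the same point, seen as energies
    print(np.sum(e))            # linear constraint: 1.0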
This ensures that we have a solid homotopy, where we're not interpolating outside of our space (I stated this wrongly in the first place!). So, |x|^2 = |y|^2 = 1 and |a*x+b*y|^2 = a^2*|x|^2 + b^2*|y|^2 + a*b*<x,y> = 1
What makes you think that? An inner product is only guaranteed to be conjugate-symmetric (it's sesquilinear, not bilinear); in other words, it's "conjugate-commutative": some kind of hybrid between commutative and anti-commutative.
You also forgot to multiply a*b*<x,y> by two because even in the commutative case you have to count it twice.
|a*x+b*y|^2 = a^2*|x|^2 + b^2*|y|^2 + a*b*<x,y> + a*b*<y,x> = 1
If you don't use complex numbers you can say <x,y> = <y,x>, and then you can write it like:
|a*x+b*y|^2 = a^2*|x|^2 + b^2*|y|^2 + 2*a*b*<x,y> = 1
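A quick numeric check of that expansion (my own sketch, numpy with real unit vectors), showing the cross term really has to be counted twice:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8); x /= np.linalg.norm(x)   # |x| = 1
    y = rng.standard_normal(8); y /= np.linalg.norm(y)   # |y| = 1
    a, b = 0.3, 0.9

    lhs = np.linalg.norm(a*x + b*y)**2
    rhs = a**2 + b**2 + 2*a*b*np.dot(x, y)               # 2*a*b*<x,y>, not a*b*<x,y>
    print(lhs, rhs)                                      # equal up to rounding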
And then I don't know where you are getting to with your simplifications, but really, if you flatten it into a convex space, it looks a lot friendlier for interpolation.
With a |x|=1 kind of space, the only nice thing you can do on it is act on it by orthogonal matrices. I wouldn't enjoy having to mess with square roots on that. OTOH it could be that the convex space thingie is unusable in practice because one would want to work with amplitudes instead, but I haven't really tried... It's not like I plan to do anything with those structures any time soon.
I've always been fascinated (obsessed maybe? meh) with convolution operators. I have often said some wrong things about these, but later worked out proofs of general properties that are essential. The L1 and L2 norms are the most important. Convolution preserves L1 norms (proof on request) in the following way (here |.| represents the L1 norm, |f| = integral(-inf, inf, |f| dt), and * is convolution): |x*y| = |x| |y|
In that case it might be easier to write slightly more verbose formulas than having to explain the formula... e.g. L1(conv(x,y)) = L1(x)*L1(y), where * is the ordinary product.
and in the L2 norm, shown here with the same notation |f| = sqrt( integral(-inf, inf, f^2 dt) ): |x*y| <= sqrt( |x| |y| )
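A discrete sketch of the norm behaviour (my own check, numpy; not from the thread). For nonnegative signals the L1 relation comes out as an exact equality; with sign changes it relaxes to L1(conv(x,y)) <= L1(x)*L1(y), and the standard L2-side bound from Young's inequality is L2(conv(x,y)) <= L1(x)*L2(y).

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.random(64)                      # nonnegative signals
    y = rng.random(64)
    c = np.convolve(x, y)                   # linear (full) convolution

    L1 = lambda f: np.sum(np.abs(f))
    L2 = lambda f: np.sqrt(np.sum(f**2))

    print(L1(c), L1(x) * L1(y))             # equal for nonnegative x, y
    print(L2(c), L1(x) * L2(y))             # Young's inequality: left <= right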
BTW, note that the L2 norm in the spherical space is (isomorphic to) the L1 norm in the convex space. (BTW, from now on, I will only use x,y to talk about vectors in the spherical space, and will use different symbols to talk about the convex space, e.g. convex(x) and convex(y))
To me, convolution makes a good operator for consideration in this type of space. Maybe there's a modification to the definition we can make to be sure that |x*y|^2=1 ?
Well, you could define the normalised convolution product as being conv(x,y)/L2(conv(x,y)) ?
Let's say F(x), F(y) are the Fourier transforms of the x, y vectors. Then, according to the convolution theorem, F(conv(x,y)) is the componentwise product of F(x) and F(y) (representable by diagonal matrices if you prefer that, but I'll call it cp), and F is energy-preserving, according to Parseval's theorem. So F(conv(x,y)/L2(conv(x,y))) = cp(F(x),F(y))/L2(cp(F(x),F(y))). Does this get you further in any way?
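A quick sketch of that (my own, numpy; it uses FFT-based circular convolution rather than linear convolution, to keep the vectors the same length): the DFT of the circular convolution is the componentwise product of the DFTs, the DFT preserves energy once you include the 1/sqrt(n) scaling that makes it unitary, and dividing by the L2 norm puts the result back on the unit sphere.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 64
    x = rng.standard_normal(n); x /= np.linalg.norm(x)
    y = rng.standard_normal(n); y /= np.linalg.norm(y)

    Fx, Fy = np.fft.fft(x), np.fft.fft(y)
    c = np.fft.ifft(Fx * Fy).real                 # circular conv(x,y), via the convolution theorem
    print(np.allclose(np.fft.fft(c), Fx * Fy))    # F(conv(x,y)) == cp(F(x),F(y))

    # Parseval, with the unitary scaling:
    print(np.linalg.norm(x), np.linalg.norm(Fx) / np.sqrt(n))

    c_unit = c / np.linalg.norm(c)                # normalised convolution product
    print(np.linalg.norm(c_unit))                 # back on the unit sphere: 1.0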
Actually, note the difference with convex space: in an affine space, you are not restricted to a>=0 and b>=0. I can only call the latter a convex sum because energy is nonnegative. (Btw, are the values in the vector supposed to be energy values or amplitude values?)
The values in the vector should be amplitudes of orthogonal components, right?
In the convex space, no, you deal directly with energy... but I suspect that if you want to interpolate between timbres, it's better to linearly interpolate energies instead of amplitudes, as it keeps total energy constant.
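A small sketch of that point (my own, numpy): linearly interpolating energies keeps the total energy at 1 for every crossfade position, while linearly interpolating amplitudes generally does not (the total dips in the middle unless the two vectors coincide). To get amplitudes back for playback you take the square root of the interpolated energies.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.random(8); x /= np.linalg.norm(x)     # two unit-energy "timbres" (amplitudes)
    y = rng.random(8); y /= np.linalg.norm(y)
    ex, ey = x**2, y**2                           # their energy vectors (each sums to 1)

    for t in np.linspace(0, 1, 5):
        e_mix = (1-t)*ex + t*ey                   # convex sum of energies
        a_mix = (1-t)*x  + t*y                    # linear blend of amplitudes
        print(t, np.sum(e_mix), np.sum(a_mix**2)) # 1.0 always vs. a dip below 1.0

    # amplitudes for playback: np.sqrt(e_mix)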
Then, dissonance arises between pairs of frequencies by a nonlinear function N(X), which takes the dissonance between each pair and creates a vector of all possibilities:

diss(X) = N(X)*A*N(X)/2

where

A = [ 0            sqrt(a1*a2)  sqrt(a1*a3)  sqrt(a1*a4) ...
      sqrt(a1*a2)  0            sqrt(a2*a3)  sqrt(a2*a4) ...
      sqrt(a1*a3)  sqrt(a2*a3)  0            sqrt(a3*a4) ...
      ...                                                   ]

The elements on the diagonal are zero because a single frequency makes no dissonance with itself.
I don't believe this function. I'd expect the diagonal elements to follow the same pattern as everything else. Then I'd expect the amplitudes to be the elements of X and I'd expect the frequencies to be the indices of X. I'm completely lost, but something like sqrt(a1*a2) definitely looks wrong. It needs to be a formula such that when you combine a1 with itself, a2 with itself, etc. it will give zero naturally, without having to make an exception.
Plus, there's the added complication of non-linear critical band rate. So, the dissonance function is different for different registers (like the difference between a major third harmony in bass as opposed to treble).
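For reference, here's a sketch (mine, not anything from the thread) of the pairwise dissonance sum used in the usual dissonance-curve literature (Plomp-Levelt as parameterised by Sethares), which is not the same as the N(X)/A formulation above: each pair of partials contributes an amplitude weight times a curve of the frequency difference scaled by roughly the critical bandwidth of the lower partial, so the self-terms vanish naturally and the bass register comes out touchier. The constants are Sethares' published fit quoted from memory; treat the whole thing as an assumption about what the patch computes.

    import numpy as np

    def pair_dissonance(f1, f2, a1, a2,
                        b1=3.5, b2=5.75, dstar=0.24, s1=0.021, s2=19.0):
        # Roughly the Sethares/Plomp-Levelt pair dissonance; constants from memory.
        fmin, fmax = min(f1, f2), max(f1, f2)
        s = dstar / (s1 * fmin + s2)        # scales with the critical band of the lower partial
        d = s * (fmax - fmin)
        return min(a1, a2) * (np.exp(-b1 * d) - np.exp(-b2 * d))

    def dissonance(freqs, amps):
        # Sum over all pairs i<j; the i==j terms are zero naturally (d=0 at zero distance).
        total = 0.0
        for i in range(len(freqs)):
            for j in range(i + 1, len(freqs)):
                total += pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
        return total

    # a major third (4 semitones) in two registers: harsher in the bass
    for f0 in (110.0, 880.0):
        partials = [f0 * k for k in range(1, 7)] + [f0 * 2**(4/12) * k for k in range(1, 7)]
        amps = [0.88**k for k in range(1, 7)] * 2
        print(f0, dissonance(partials, amps))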
It's not "Plus"... I already mentioned this. I called it the window size. You adjust it to match the boundary between tone and rhythm, and it is this boundary that causes bass harmony to be touchier.
Attached is my resurrected/re-designed dissonance curves patch. The last thing I was doing with it was to look at dissonance relations with odd-harmonic series. Yes, it's very crudely coded--see for yourself. If anyone wants to collaborate, I'm up for a fundamental overhaul.
Hmm... I haven't looked at it yet.