Sorry, didn't reply to the list before.
I'm going to leave it as is but I have stuff to add...
Pd and its community are bloody marvellous. A few hours ago I knew nothing about vbap or ambisonics apart from going to some concerts/installations where they have been used and occasional list-chatter. I now have a basic working knowledge, with some additional questions that I think I can locate answers to fairly soon (well not today, I've pretty much had enough now).
Hopefully I can do some coding and get myself in a studio and hear what's going on and code some more. No doubt there will be more questions to come but I at least feel I can get started.
Many thanks for holding my hand (so to speak) with this Matt and big props to all the IEM bods, especially Georg Holzmann for his spatialisation tutorials.
(Big aside here) Speaking of IEM, I have an ongoing discussion/argument with a MaxMSP/IRCAM advocate who is most disparaging about the whole FLOSS thing. I think IEM set an excellent example for what I like to think of as the right side to be on, a real 'rebel alliance' if you will. Like IRCAM are going to make their no doubt millions of Euros worth of yearly funding back by selling a few Max externals.
Rant over. Calm is restored.
Cheers,
Julian
Right then Matt,
I'm going to pester you (chisel, as we say in Northern England). I have attached my current stereo square root panner (I can make it equal power if that's the best way to go) -- am I right in thinking I need to add and subtract 90 degrees to this signal to get the basic quad working? If so, how and where?
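For reference, the panner is roughly the square-root law from Andy's book -- sketched here in Python rather than Pd, just to show the shape I mean (the exact patch may differ a bit):

import numpy as np

def sqrt_pan_stereo(p):
    # Square-root stereo pan law (the Farnell-style panner I'm assuming):
    # p in [0, 1], 0 = hard left, 1 = hard right.
    # Returns (left_gain, right_gain).
    p = np.clip(p, 0.0, 1.0)
    return np.sqrt(1.0 - p), np.sqrt(p)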
I'm sorry, I have looked at the SuperCollider code, and although it seems quite straightforward, I have no idea how to implement that in Pd.
I am in the process of ripping my patch apart and using 2 instances of Pd to help with the cpu load, but I must admit to being wary of doing too much on that front until I know what's happening with everything in the patch; then I can split the load when things are a little clearer.
Going through the Holzmann tutorials as we speak. Very good too.
On 14 March 2011 21:15, Matt Barber brbrofsvl@gmail.com wrote:
Probably the best bet would be to make an abstraction for spatializing an individual partial, with the panner built in at the tail, which would throw~ the 4 channels to catch~es elsewhere in the patch. Then it wouldn't take much work at all to swap it out later for an ambisonic panner that took the same input sound, X, and Y inputs. With that deadline, maybe I would then make some kind of equal-power quad panner (e.g. the product of a front-back and a left-right panner, like SuperCollider's Pan4) or use an external.
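In Python rather than Pd (just a sketch of the structure I mean -- the function names are made up), the per-partial abstraction would amount to something like:

import numpy as np

def equal_power_quad(x, y):
    # Quad gains as the product of a left-right and a front-back
    # equal-power curve (Pan4-style).  x, y in [0, 1].
    lr = 0.5 * np.pi * np.clip(x, 0.0, 1.0)   # 0 = left, 1 = right
    fb = 0.5 * np.pi * np.clip(y, 0.0, 1.0)   # 0 = front, 1 = back
    left, right = np.cos(lr), np.sin(lr)
    front, back = np.cos(fb), np.sin(fb)
    # speaker order: front-left, front-right, back-left, back-right
    return np.array([left * front, right * front, left * back, right * back])

def spatialize_partial(partial, x, y, panner=equal_power_quad):
    # One 'abstraction' per partial: mono signal in, 4 channels out
    # (the 4 rows are what would go to the four throw~/catch~ pairs).
    # Swapping in an ambisonic panner later just means replacing `panner`.
    return np.outer(panner(x, y), partial)   # shape (4, len(partial))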
There's still the CPU question, which might be rather severe with 48 of these going on at once, plus whatever else is going on. I have once in a while needed to sync up two computers via lightpipe and netsend, doing all the control and initial processing on one end and then final processing on the other end... and it's good if timing doesn't have to be sample-perfect. Of course you can't send 48 channels across lightpipe, so maybe you could have one computer calculate all the swarm stuff, do anything with the accelerometer, and run Gem, and just send info via netsend/netreceive to the other which would be running your sigmund~ setup and the spatialization. Note that you're going to have to use line~ for the X and Y panning coordinates so that you don't get discrete jumps in the panning location.
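To spell out the line~ point: instead of jumping the X and Y values once per control message, ramp them over a short window, e.g. (Python sketch, the ramp time is just a plausible number):

import numpy as np

def ramp(current, target, n_samples):
    # What [line~] does, roughly: a linear ramp from the current value
    # to the target over n_samples, instead of a discrete jump
    # (a jump in the pan coordinates means a click in the gains).
    return np.linspace(current, target, n_samples, endpoint=False)

# e.g. move the X coordinate from 0.25 to 0.75 over 20 ms at 44.1 kHz
x_ramp = ramp(0.25, 0.75, int(0.020 * 44100))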
Too many options!
Matt
On Mon, Mar 14, 2011 at 4:40 PM, J bz jbeezez@gmail.com wrote:
Hey Matt,
Well, good to hear the zeitgeist hasn't completely deserted me then:)
Yes I'm sorry, I wasn't very clear on what it is I'm working with. That thing of: I've been working for ages on this and can't quite understand why it's not bleedin' obvious to everyone else...
So, ambisonics then - eek!
Well I have to say my trigonometry is dreadful, not something I'm proud of, just wish I had paid a little more attention in maths classes as a teenager - too busy dreaming of pop stardom and how I would never need any of this stuff. Many is the time when I have heard the ghost of my maths teacher snickering over my shoulder since I got into Pd.
I'm gonna do some experimenting and reading up before I dive into anything.
There seems to be the Holzmann tutorial, plus all the iem stuff to wade through. The Cubemixer looks interesting but also hefty (back to the possible overkill), and also, as msd doesn't work on my usual Puredyne OS, I'm moonlighting in W7, so I'm super unsure about compiling stuff in W7; plus the performer runs on a Mac, so then there's setting it up for his machine as well. Aaargh. Plus the 1st performance is now only 2 weeks away!
I think the simpler the better basically.
Cheers for weighing in though Matt, hopefully speak soon.
All the best,
Julian
On 14 March 2011 17:12, Matt Barber brbrofsvl@gmail.com wrote:
Swarms are in! A pal of mine is doing something very similar: http://www.youtube.com/watch?v=Ao258ciSMSg
I misunderstood your space before -- you have 48 things that you want to pan around a 2d space, but I thought you meant you wanted to pan stuff around a 2d "grid" which itself had 48 points. If it were me I'd almost certainly use ambisonics with the projection I mentioned before, but you'd have to do some trigonometry to map x-y to the surface of a half-sphere above the space, and I think you'd have to figure out a way to scale to keep the power the same. The reason I'd use ambisonics is that I would not want to have to redo the engine for a different speaker setup -- I could just throw the encoded stream to another decoder.
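One way that mapping could go (a sketch under my own assumptions about the coordinate ranges -- there are other reasonable choices): let the distance from the centre of the floor set the elevation, so the centre maps to straight up and the edge maps to the horizon:

import numpy as np

def xy_to_half_sphere(x, y):
    # Map (x, y) in [0, 1] x [0, 1] onto the half-sphere above the space.
    # Centre of the floor -> straight up (elevation pi/2); edge of the
    # inscribed circle -> horizon (elevation 0).  Returns (azimuth,
    # elevation) in radians for a 3D encoder that gets decoded to 2D.
    dx, dy = x - 0.5, y - 0.5
    azimuth = np.arctan2(dy, dx)
    r = np.clip(np.hypot(dx, dy) / 0.5, 0.0, 1.0)   # 0 at centre, 1 at edge
    elevation = (1.0 - r) * (np.pi / 2.0)
    return azimuth, elevation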
On the other hand, there are other, simpler ways of doing 4-channel panning if you're committed to a 4-channel setup. There are probably externals I don't know about, or you could model it after something like the Pan4 UGen in SuperCollider, which uses a simple product of two equal-power curves (using trig functions), one for left-right and one for front-back, such that the front-left speaker gets left*front*input, the back right gets right*back*input, and so forth.
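A quick way to see that the product form really is equal-power: the four squared gains always sum to 1, whatever the two angles are (Python check):

import numpy as np

# Pan4-style gains: cos/sin pairs for left-right (lr) and front-back (fb),
# multiplied together.  Their squares sum to
# (cos^2 lr + sin^2 lr)(cos^2 fb + sin^2 fb) = 1 for any position.
for lr in np.linspace(0.0, np.pi / 2.0, 5):
    for fb in np.linspace(0.0, np.pi / 2.0, 5):
        g = np.array([np.cos(lr) * np.cos(fb), np.sin(lr) * np.cos(fb),
                      np.cos(lr) * np.sin(fb), np.sin(lr) * np.sin(fb)])
        assert np.isclose(np.sum(g ** 2), 1.0)
print("squared gains sum to 1 at every grid point")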
The thing about moving stuff around in space like this is that there are some situations (probably not this one) in which you'd want to also simulate doppler shifts. Panning doesn't do this, but you can simulate all that stuff with delay lines (and again, if you really wanted to do it, you'd need a variable delwrite~ rather than a variable delread~ == vd~).
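For what it's worth, the delay-line idea in Python terms (a sketch of what a variable-delay read like [vd~] does -- and, as I said before, a simple Pd version of this models a moving microphone rather than a moving source):

import numpy as np

def variable_delay(signal, delay_samples):
    # Read a delay line with a time-varying delay, interpolating between
    # samples (roughly what [vd~] does).  A delay that shrinks over time
    # raises the pitch -- a doppler shift; a growing delay lowers it.
    # delay_samples holds one delay value (in samples) per output sample.
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        pos = n - delay_samples[n]
        if pos < 0.0:
            continue                      # nothing written that far back yet
        i = int(np.floor(pos))
        frac = pos - i
        nxt = signal[i + 1] if i + 1 < len(signal) else 0.0
        out[n] = (1.0 - frac) * signal[i] + frac * nxt
    return out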
On Mon, Mar 14, 2011 at 11:41 AM, J bz jbeezez@gmail.com wrote:
I probably haven't described the space particularly well; I've attached a picture to hopefully explain it a little more clearly.
It's a 'swarm' of [msd] masses and links, with only the masses visible. The swarm is in a zero gravity space so they just float around. The space receives bangs/force at various points on the x-y grid triggered by an accelerometer attached to the performer's instrument. So the swarm is in constant motion - sorry if that wasn't clear.
At the moment I'm just using the coordinates on the x plane of each mass, which msd handily spits out. I have patched in the option of using the y coordinates but I haven't used them as of yet. Each mass controls the pan position of a partial (of which there are 48) taken from the instrumentalist's signal fed into [sigmund~]. So the pan position of each mass/partial is slightly different in the (currently) stereo field across 0-1.
Hope that helps,
Jb
On 14 March 2011 14:55, Matt Barber brbrofsvl@gmail.com wrote:
Can you describe your 2d space a little? Is there a reason for wanting 48 discrete spots rather than one continuous space? I actually think the 48 spots could work, but I'm curious how it is supposed to sound when something "moves" through the space (or do sounds just pop up periodically at those discrete spots)?
There is another ambisonic trick I have heard of but haven't yet tried, which is to add a 3rd dimension in the encoding but still decode to a 2d space. The space then becomes a kind of projection of the 3d space onto a 2d space, so you can "move closer to the center" by increasing the "elevation" of the direction. "Directly upwards" in the encoding sends the same signal to all 4 speakers on decode, so that it sounds "in the middle" of the space.
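To make the "directly upwards" point concrete (a sketch with first-order ambisonics and a basic projection decode -- the normalisation conventions here are my own assumption and vary between implementations):

import numpy as np

def encode_foa(azimuth, elevation):
    # Traditional first-order B-format encode, channels W X Y Z,
    # with the usual 1/sqrt(2) weight on W.
    return np.array([1.0 / np.sqrt(2.0),
                     np.cos(azimuth) * np.cos(elevation),
                     np.sin(azimuth) * np.cos(elevation),
                     np.sin(elevation)])

def decode_square(b):
    # Basic projection decode to 4 horizontal speakers at 45, 135, 225,
    # 315 degrees.  Z is simply dropped: 3D encode, 2D decode.
    spk_az = np.radians([45.0, 135.0, 225.0, 315.0])
    w, x, y, _ = b
    return np.sqrt(2.0) * w + x * np.cos(spk_az) + y * np.sin(spk_az)

# A source encoded straight up decodes to the same gain on all 4 speakers:
print(decode_square(encode_foa(0.0, np.pi / 2.0)))   # -> [1. 1. 1. 1.]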
Matt
On Mon, Mar 14, 2011 at 10:22 AM, J bz jbeezez@gmail.com wrote:
Hey Matt,
Thanks for pushing my understanding along...
I should have said that the pan positions are constantly shifting for 48 separate points within the x-y grid, so it would be, I presume, heavily cpu intensive with some of the solutions you propose. The patch is running off the performer's lappy and is already doing a lot within Pd.
My supervisor, the composer Aaron Cassidy, thought that as the GEM window is projected as part of the performance with the msd swarm, it is somewhat 'dishonest' to only have a stereo field when the visuals are obviously moving on the y plane as well. And annoyingly I have to agree with him. You know when someone has said something and one feels (me in this case) that the genie is out of the bag and there is no going back.
Cheers,
Julian
On 14 March 2011 13:48, Matt Barber brbrofsvl@gmail.com wrote:
Ambisonics isn't necessarily overkill, but it only gets you direction, not distance -- it's only a "1-dimensional" solution, in the sense that you'd be panning around the outside of a circle but not to locations within that circle. It's not terribly CPU expensive.
If you do want distance as well you can use some combination of delay, low-pass filtering, and wet-dry mix of any reverb you happen to be using (and if you want to get really ambitious, you can also simulate individual room reflections). This starts to get CPU intensive especially if you're going to be moving sounds around. If they just stay in place, it's not as bad (and the [delwrite~] [vd~] model doesn't actually model moving sources a certain distance from the virtual "microphone" -- it models a moving microphone, so a simple Pd solution isn't quite available).
Matt
Hey all,
So I'm still scratching my head with controlling audio panning in an x,y grid using 4 speakers.
What, at first, seemed like a somewhat trivial problem, upon closer inspection ain't necessarily so.
Does anyone have examples of panning with 4 speakers? About the only things I have found so far are:
Building upon Hans' [pan_core~]
[pan_quad~] from 'nSLAM', which seems to be unavailable for d/l
The 'pd-tutorial' patch 3-9-3-1-spatial-quadro.pd <http://pd-tutorial.com/english/patches/3-9-3-1-spatial-quadro.pd>
Anyone know about any others?
My stereo panning is built upon the square root example in Andy Farnell's book, which I'm really happy with. Ideally I would like to expand upon that.
Then of course there is the whole vbap/ambisonics/cubemixer (possible overkill) route too. My concerns before diving into this are threefold.
1. If it's only going to work for the 2 people in the 'sweet-spot' then what's the point? If it makes any difference I'm using some Bose 180-degree radiating speakers.
2. The piece is for a performer with electronics, with only the electronics coming out of the speakers. So I'm also wondering whether a 5.1 type setup with the performer as the front centre speaker (as such) may be preferable?
3. The piece requires 96 separate pan positions - how cpu intensive is that going to be? It currently works fine in stereo, which is why 2 lots of stereo in an x,y fashion seems initially preferable.
I'm aware that it's pretty much impossible with this speaker setup to make everyone in the room have a similar audio experience, and also to map my visual masses from an msd swarm in the GEM window as audible points, but surely it is possible to do this as an x,y grid.
I also admit that I don't understand the dsp theory necessary to make the x,y grid idea happen so if anyone has some pointers regarding that I would be delighted.
Cheers,
Jb
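(On the distance point in Matt's reply above -- delay plus low-pass plus wet/dry -- per source that amounts to roughly the following; the constants are made-up placeholders, not from any particular external.)

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SR = 44100               # sample rate

def distance_cues(distance_m):
    # Rough per-source distance cues of the delay + low-pass + wet/dry kind:
    # propagation delay, 1/distance amplitude, a cutoff that falls with
    # distance, and a reverb mix that rises with it.  All placeholder curves.
    delay_samples = distance_m / SPEED_OF_SOUND * SR
    gain = 1.0 / max(distance_m, 1.0)
    lowpass_hz = 20000.0 / max(distance_m, 1.0)
    wet = min(distance_m / 20.0, 1.0)        # fully reverberant by ~20 m
    return delay_samples, gain, lowpass_hz, wet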