Maurizio Umberto Puxeddu wrote in "Re: [PD] Re: [OT] How do your performance environments looks like?" [2003/04/16 18:08]:
For an interesting example of a haptic musical interface, check out the University of York's Cymatic: http://www-users.york.ac.uk/~smr12/main.htm
This gives me a better idea. It is not by chance that they focus on physical modeling. There are many ways to make sounds with a computer where haptic interfaces make much less sense.
Of course. Although I think that there are many "pure" synthetic synthesis methods where haptic feedback can be useful as well. For example: a MIDI fader also gives some form of haptic feedback, you know its position by touch without having to look at the screen. What I am thinking of are situations where a single controller axis controls multiple interconnected parameters. One parameter can obviously be sensed from the position of the controller. Pitch, for instance. But an external sound source might determine the portamento or some kind of modulation effect on that sound. Using force feedback I can make the response of the joystick reflect the synthesis better. For me it is all about designing a performance interface that matches my musical concepts.
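To make that mapping a bit more concrete, here is a minimal Python sketch of the idea: the axis drives the pitch directly, an external signal decides the portamento, and the force-feedback resistance mirrors that external influence so you can feel it in the stick. The function name, the parameter ranges and the idea of an envelope follower on the external source are just illustrative assumptions for the sketch; the actual joystick and force-feedback driver code is left out.

# Minimal sketch of the one-axis-to-several-parameters mapping described above.
# The hardware side (reading the joystick, driving the force-feedback motor,
# analysing the external sound source) is deliberately left out; only the
# mapping itself is shown, with made-up ranges.

def map_axis(axis, external_level,
             pitch_low=48.0, pitch_high=84.0,
             max_portamento=0.5):
    """Map one joystick axis plus an external control signal to
    several interconnected synthesis parameters.

    axis           -- joystick position, 0.0 .. 1.0
    external_level -- level of the external sound source, 0.0 .. 1.0
                      (e.g. from an envelope follower, an assumption here)
    Returns (pitch, portamento_time, ff_resistance).
    """
    # Pitch follows the axis directly: the player can feel it from the
    # stick position alone.
    pitch = pitch_low + axis * (pitch_high - pitch_low)

    # The portamento time is decided by the external source, not the player.
    portamento_time = external_level * max_portamento

    # Force-feedback resistance mirrors the portamento, so the stick itself
    # feels "heavier" when the synthesis responds more sluggishly.
    ff_resistance = external_level

    return pitch, portamento_time, ff_resistance


if __name__ == "__main__":
    # Quick check: same stick position, different external levels.
    for level in (0.0, 0.5, 1.0):
        print(map_axis(axis=0.5, external_level=level))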
During a recent improvisation workshop, some people complained that I was totally inexpressive while playing, even though I was able to make "gestural" sounds and use a dynamic range more extreme than that of the rest of the (instrumental) players.
I noticed that my teacher can (on occasion) use an artificial gestuality (that is, not justified by strict interaction with the device). For me this is not necessarily a problem, this split being a part of the nature of playing electronic devices.
This split is also part of acoustic instruments, I think. It is an old discussion in classical music as well. With a piano, the only factors determining the sound are the speed at which the keys are depressed and the position of the pedals. All movements before and after are "artificial". There are a lot of bad pianists with fake expressionist movements, there are also very good pianists who move a lot and very good ones who sit like a statue. And then there are conductors and solo guitarists in rock bands, singers... :)
But I think movement does have an effect, even if only indirectly. It is often easier to get into the rhythm if your body moves with it, for example. I like to move when I perform, and I always stand when I play electronics.
My dislike of laptop performances is more complex than this. I think that a mouse is too simple and too one-dimensional a controller for serious music performance and improvisation. It is for me, anyway. And I think that I can hear this in performances, and see it reflected in the posture of sitting at a desk behind a laptop. The attention of the performer seems focused on moving the cursor to place X on the screen, and not on making sound Y in the room.
What I am more interested in is finding ways of making a computer a musical instrument without losing its possibilities. The biggest difference between a computer and a more traditional instrument lies in the past and future time of the performance. A traditional instrument acts in the now. A computer has access to what has gone before, and you can project into the future. What I am trying to do is make an interface that gives me the flexibility of a traditional instrument (instant change and reaction) without losing the extended time-scale of the computer program. To do this I want the interface to reflect the state of the machine back to the player in an intuitive, or at least learnable, way. One way is graphical, another is haptic. In the example above, the external sound source could also be what I played a minute ago. Or something chosen from what I played by pattern matching with neural nets.
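As a rough illustration of that last point, here is a small Python sketch of a rolling history buffer: it keeps the last couple of minutes of control data so that what was played a minute ago can be read back and used as the external source of the earlier example. The class name, control rate and buffer length are assumptions I made up for the sketch, not a description of an existing patch.

# Sketch of the "access to the past" idea: a rolling buffer of recent
# control values that can be read back with a fixed lookback time.

from collections import deque

class PerformanceHistory:
    def __init__(self, rate_hz=100, seconds=120):
        # Store the last `seconds` of control data at `rate_hz` samples/sec.
        self.rate_hz = rate_hz
        self.buffer = deque(maxlen=rate_hz * seconds)

    def record(self, value):
        """Append the current control value (called once per control tick)."""
        self.buffer.append(value)

    def lookback(self, seconds_ago):
        """Return the value played `seconds_ago`, or None if not yet available."""
        index = len(self.buffer) - 1 - int(seconds_ago * self.rate_hz)
        if index < 0:
            return None
        return self.buffer[index]


if __name__ == "__main__":
    history = PerformanceHistory(rate_hz=10, seconds=120)
    for tick in range(1200):            # two minutes of fake control data
        history.record(tick / 1200.0)
    print(history.lookback(60))         # what was "played" one minute ago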
So, that was a long mail. I hope you find it interesting and not too rambling.
Gerard