On 2/25/12 2:49 PM, Hans-Christoph Steiner wrote:
On Feb 25, 2012, at 4:14 PM, Phil Stone wrote:
On 2/25/12 11:43 AM, Mathieu Bouchard wrote:
On 2012-02-25, at 12:32:00, patrick wrote:
I wish I could code an external like he's coding: http://vimeo.com/36579366
You don't mention which part of this extremely long video you're referring to, and the player doesn't allow skipping ahead, which means I can't fast-forward faster than the whole thing downloads. That's why I won't try to figure out what you mean.
It's well worth watching, all the way through. It was a
"eureka" moment for me -- I now see the potential of
"live-coding."
I agree that instant feedback is very important; that's a big reason why I use Pd. I wonder if he's ever used Pd. Pd has been providing a lot of that experience for almost two decades now. The one thing in his demo that Pd does not provide is the ability to click on the generated image to see which code is generating that part of the image. That would be a nice feature to have.
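For what it's worth, here's a minimal sketch of that missing piece in plain C (this is not Pd's drawing API; every name here is made up for illustration): each drawing call registers the region it painted together with a source location, and a click scans that table back-to-front for the topmost hit.

#include <stdio.h>

typedef struct { int x, y, w, h; const char *src; } region_t;

static region_t regions[64];
static int nregions = 0;

/* hypothetical wrapper around whatever really draws the rectangle */
static void draw_rect(int x, int y, int w, int h, const char *src)
{
    /* ... the actual drawing would happen here ... */
    regions[nregions++] = (region_t){ x, y, w, h, src };
}

/* on click, scan back-to-front so the topmost drawing wins */
static const char *code_at(int px, int py)
{
    for (int i = nregions - 1; i >= 0; i--) {
        region_t *r = &regions[i];
        if (px >= r->x && px < r->x + r->w &&
            py >= r->y && py < r->y + r->h)
            return r->src;
    }
    return "nothing drew here";
}

int main(void)
{
    draw_rect(0, 0, 100, 100, "background, sketch.c:12");
    draw_rect(20, 20, 40, 40, "spinner, sketch.c:57");
    printf("%s\n", code_at(30, 30)); /* prints: spinner, sketch.c:57 */
    return 0;
}

Scanning back-to-front means overlapping draws resolve to whatever painted last, which matches what you actually see on screen.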
But I can't see how you would generalize that beyond drawing pictures. Drawing with code is basically the easiest realm in which to solve that particular problem, IMHO. How would you click on a sound to see the code that is generating it? How would you click on a mail program, a network service, or file encryption?
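One speculative answer for the sound case, as a toy C sketch (none of these names exist in Pd): alongside each signal block, record which object contributed the most energy, so clicking a moment in a waveform view can at least name the loudest source at that time.

#include <stdio.h>
#include <math.h>

#define NBLOCKS 16
#define BLKSIZE 64

static const char *block_owner[NBLOCKS]; /* provenance, one entry per block */
static float       block_peak[NBLOCKS];

/* each generator calls this as it writes a block of samples */
static void tag_block(int blk, const float *buf, const char *owner)
{
    float peak = 0;
    for (int i = 0; i < BLKSIZE; i++)
        peak = fmaxf(peak, fabsf(buf[i]));
    if (peak > block_peak[blk]) {  /* loudest source wins */
        block_peak[blk] = peak;
        block_owner[blk] = owner;
    }
}

int main(void)
{
    float quiet[BLKSIZE] = { 0 }, loud[BLKSIZE] = { 0 };
    quiet[0] = 0.1f;
    loud[0]  = 0.9f;
    tag_block(2, quiet, "[noise~] in sketch.pd");
    tag_block(2, loud,  "[osc~ 440] in sketch.pd");
    /* a click at block 2 of a waveform display would report: */
    printf("%s\n", block_owner[2]); /* prints: [osc~ 440] in sketch.pd */
    return 0;
}

Obviously "loudest source wins" is crude; real provenance would have to track how signals mix, but it shows the shape of the idea.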
Visual representations of audio/music parameters are quite common
(spectral plots, traditional and non-traditional notation, spatial
controls, etc.). Having a two-way connection between graphical
representations of audio/music and the underlying code would be
quite useful and conducive to empirical experimentation.
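As a rough sketch of that two-way connection, again in plain C with purely illustrative names: route every edit, whether it comes from dragging the plot or from editing the code, through one setter that notifies all views, so neither side can drift out of sync.

#include <stdio.h>

typedef struct param {
    const char *name;
    float value;
    void (*on_change)(struct param *p, const char *origin);
} param_t;

/* stand-in for redrawing the plot and updating the code text */
static void redraw_views(param_t *p, const char *origin)
{
    printf("%s = %g (changed via %s)\n", p->name, p->value, origin);
}

/* every edit, from either side, goes through this one path */
static void param_set(param_t *p, float v, const char *origin)
{
    p->value = v;
    p->on_change(p, origin);
}

int main(void)
{
    param_t cutoff = { "cutoff", 1000, redraw_views };
    param_set(&cutoff, 2500, "dragging a point on the plot");
    param_set(&cutoff, 440,  "editing the code");
    return 0;
}

The point is simply that the plot and the code become two views of one parameter, rather than two parameters kept in sync by hand.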