Mainly I want to use the computer (a PC running
Linux) as a musical instrument -- i.e. construct
ways to input notes, change volumes, timbres, etc.
in real time, and eventually have a system set up
to play a sequence back with variations, different
instruments, and so on, while I play along with
some other voice...
SoundFonts & FluidSynth work very well for
producing basically ear-friendly sounds... Pd
looks ideal for handling HID input, keeping track
of incoming notes, and doing interesting things
with them.
But to connect these two things I've been using
csoundapi~ and the fluidsynth opcodes. As I
understand it, that means Pd is hosting its own
copy of Csound internally (via the Csound API),
in a sort of virtual box -- is that right?
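
For reference, the .csd I load into csoundapi~ looks
roughly like the sketch below (trimmed down -- the
SoundFont path and the bank/program choices are just
placeholders), and from Pd I send it score events
with messages along the lines of
[event i 1 0 1 60 100(, if I have the syntax right:

  <CsoundSynthesizer>
  <CsInstruments>
  ; sr/ksmps have to line up with Pd's settings,
  ; as far as I can tell
  sr     = 44100
  ksmps  = 64
  nchnls = 2
  0dbfs  = 1

  ; one engine, one SoundFont (path is a placeholder)
  giengine fluidEngine
  gisfnum  fluidLoad "mysoundfont.sf2", giengine, 1
           fluidProgramSelect giengine, 1, gisfnum, 0, 0

  ; instr 1: one note per score event (key = p4,
  ; velocity = p5); the note-off goes out when this
  ; instance ends
  instr 1
           fluidNote giengine, 1, p4, p5
  endin

  ; instr 99: always on, pulls the mixed audio out
  ; of the engine (may need some gain)
  instr 99
    aL, aR fluidOut giengine
           outs aL, aR
  endin
  </CsInstruments>
  <CsScore>
  i 99 0 360000  ; keep the output instrument running
  </CsScore>
  </CsoundSynthesizer>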
Anyway, it gets tricky to hit the same note on the
same channel in close succession, because the
repeated note going through the fluidEngine cuts
the first one off -- and in any case that first
note isn't available for separate processing until
it comes out of the fluidOut opcode, mushed together
with everything else sent to that engine.
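
Concretely, with instr 1 from the sketch above,
this is the kind of thing that trips it up (score
times just for illustration):

  ; same key (60), same channel, overlapping in time:
  ; the second event retriggers key 60 and the first
  ; gets cut off
  i 1 0.0 2 60 100
  i 1 0.5 2 60 100
  ; ...and both only ever come back as one mixed
  ; stereo pair from fluidOut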
Running multiple fluidEngines in Csound is quite
doable, but it starts slowing the system down after
the first two or three...
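
In case it matters, this is the pattern I mean
(just the orchestra part, dropped into the same
skeleton as above): one engine per voice I want to
treat separately, with the same SoundFont loaded
into each, which I assume is part of what eats the
CPU and memory:

  ; one engine per voice that needs its own audio path
  giengA fluidEngine
  giengB fluidEngine
  gisfA  fluidLoad "mysoundfont.sf2", giengA, 1
  gisfB  fluidLoad "mysoundfont.sf2", giengB, 1
         fluidProgramSelect giengA, 1, gisfA, 0, 0
         fluidProgramSelect giengB, 1, gisfB, 0, 0

  ; p6 picks the engine (1 = A, 2 = B),
  ; p4/p5 are key/velocity as before
  instr 1
    ieng = (p6 == 1 ? giengA : giengB)
           fluidNote ieng, 1, p4, p5
  endin

  ; each engine gets its own output instrument, so
  ; each voice can be processed on its own before
  ; going to the dac
  instr 98
    aL, aR fluidOut giengA
           outs aL, aR
  endin

  instr 99
    aL, aR fluidOut giengB
           outs aL, aR
  endin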