> You accidentally had html-mail switched on. :)
Argh. Stupid Thunderbird.
> In this context qlist and textfile are equivalent: qlist is just a textfile with a metro/delay and a sender already built in. Both objects do things inside of Pd, and the metros work with subsample accuracy. I don't think you can take them as a model for communication with external processes: once the file is loaded into a textfile's or qlist's memory, no communication with the outside world (network, hard disk, other processes) happens anymore.
Ah, okay, that's good to know.
I guess my main question was, what's the best way to send event lists from the language to either of these objects?
> I think a better model for realtime interaction might be OSC with timetags.
Right, but in an embedded context (i.e., pd embedded in another language), OSC seems to me to add unnecessary overhead. See above question.
> qlist can send lists just fine. Both qlist and a textfile with sender/outlet and metro-timing simply send Pd messages like floats, symbols, lists, etc. It's up to the composer to define what describes an event and what describes a sequence.
Right, uh, but that's the *output* from those messages. (I don't think either can receive as flexibly as that.)
The notion is that the language side of things (Java, C++, Objective-C, Python, whatever) will have the logic that determines how events are scheduled, and would handle user input that might alter the sequence of those events. The question is how best to have the *language* communicate with Pd.
So, the structure would be: external logic > message to Pd > qlist/textfile scheduling inside Pd
sound source in Pd > audio callback in the embedded instance
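To make that concrete, here's roughly the shape I have in mind with the plain libpd C API. This is an untested sketch; the patch name, path, and the "event" receiver are placeholders I made up:

#include "z_libpd.h"

// One-time setup: init libpd, open the patch, switch DSP on.
void setup(void) {
    libpd_init();
    libpd_init_audio(0, 2, 44100);           // no inputs, stereo out, 44.1 kHz
    libpd_openfile("synth.pd", "./patches"); // placeholder patch/path

    libpd_start_message(1);                  // "pd dsp 1" turns audio on
    libpd_add_float(1);
    libpd_finish_message("pd", "dsp");
}

// External logic > message to Pd: send the list "60 0.8" to [r event].
void send_event(float pitch, float velocity) {
    libpd_start_message(2);
    libpd_add_float(pitch);
    libpd_add_float(velocity);
    libpd_finish_list("event");
}

// Sound source in Pd > audio callback: pull rendered blocks out of Pd.
void render(float *out, int ticks) {
    libpd_process_float(ticks, NULL, out);   // NULL input is fine with 0 in-channels
}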
We'd continue to use libpd, but rather than assuming the Pd patch contains the logic for how events are scheduled, that scheduling would be integrated with the logic and interface contained in the code. (So, on iOS, for instance, in Objective-C or something.)
Am I making a bit more sense? I was just unsure how best to handle (and recommend to others how to handle) the interface between Pd and the outside world, if Pd is acting in real-time but responding to changing sequences from code written elsewhere, outside the logic of the patch.
Peter
On Tue, Feb 15, 2011 at 05:56:36PM -0500, Peter Kirn wrote:
> The notion is that the language side of things (Java, C++, Objective-C, Python, whatever) will have the logic that determines how events are scheduled, and would handle user input that might alter the sequence of those events. The question is how best to have the *language* communicate with Pd.
> So, the structure would be: external logic > message to Pd > qlist/textfile scheduling inside Pd
> sound source in Pd > audio callback in the embedded instance
As you probably know, in RjDj/libpd the usual way to communicate between the app and the Pd instance is to send messages between both via some kind of socket (network or RPC or so). The main issue is one of synchronizing different "clocks": Pd has a clock inside, usually synchronized with the soundcard (which may be a virtual "dummy" card in libpd), but events from the app may not be synchronized to that, but instead to some GUI loop, network polling mechanism, etc.
A GUI event may happen at a certain time relative to the GUI loop's clock, but at what time should it happen in Pd's "soundcard clock"? You have to find some mechanism to reliably compare the different clocks; then it's easy to translate times between both time scales.
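One simple mechanism would be this (just a sketch, all names made up): note the host time at every audio callback together with how many samples Pd has processed so far, then map an event's host timestamp onto Pd's sample timeline:

// Samples processed so far = Pd's "soundcard clock".
static double samples_elapsed = 0;
static double host_at_callback = 0;          // host clock at the last callback

void audio_callback(const float *in, float *out, int ticks) {
    host_at_callback = host_now_seconds();   // placeholder, e.g. CACurrentMediaTime() on iOS
    libpd_process_float(ticks, in, out);
    samples_elapsed += ticks * 64;           // one Pd tick = 64 sample frames
}

// Translate an event stamped with host time into Pd's sample timeline.
double event_sample_time(double host_event_time, double sample_rate) {
    return samples_elapsed + (host_event_time - host_at_callback) * sample_rate;
}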
OSC with timetags would provide a way out, but even if the OSC overhead seems too much (which I doubt), you still need some kind of timetag for messages.
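A homegrown timetag can be as simple as prepending each event with its time in milliseconds relative to now and letting the patch split that off into a [delay]. For example (untested, receiver name made up):

// "Play note 60 in 120.5 ms": the patch can [unpack] the list,
// feed the first element into a [delay] and then trigger the note.
libpd_start_message(2);
libpd_add_float(120.5f);   // timetag: milliseconds from now
libpd_add_float(60.0f);    // MIDI note number
libpd_finish_list("sched");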
> We'd continue to use libpd, but rather than assuming the Pd patch contains the logic for how events are scheduled, that scheduling would be integrated with the logic and interface contained in the code. (So, on iOS, for instance, in Objective-C or something.)
Actually I think Pd is great for scheduling (musical) events, so I wouldn't put that into the interface code. The Pd side is probably where the musicians and composers work, and they are the ones who need to deal with sequences, timing, rhythms, etc.
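The app can still rewrite such a sequence at runtime by talking to a qlist through libpd, something like this (untested, assuming the patch has [r seq-ctl] wired to the qlist's inlet):

// Replace the qlist's contents with one event: after 500 ms,
// send "60" to [r note] inside the patch.
libpd_start_message(1);
libpd_finish_message("seq-ctl", "clear");

libpd_start_message(3);
libpd_add_float(500);                        // delta time in ms
libpd_add_symbol("note");                    // target receiver inside Pd
libpd_add_float(60);                         // the value to send there
libpd_finish_message("seq-ctl", "add");

libpd_bang("seq-ctl");                       // bang the qlist to start playback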
I'd figure that most graphical interfaces don't need the tight subsample timing that Pd can provide; the eye is much less timing-sensitive than the ear.
It becomes slightly different when you deal with inputs, for example tapping in Tap-Tap-style games, where timing and latency become more important. Any MIDI-capable realtime audio application has to synchronize external MIDI events to its own, much faster clock, so these apps may be worth a look (or Pd's MIDI objects).
Frank Barknecht | Do You RjDj.me? | footils.org
Hi Peter,
On 15 Feb 2011, at 22:56, Peter Kirn wrote:
> The notion is that the language side of things (Java, C++, Objective-C, Python, whatever) will have the logic that determines how events are scheduled, and would handle user input that might alter the sequence of those events. The question is how best to have the *language* communicate with Pd.
> So, the structure would be: external logic > message to Pd > qlist/textfile scheduling inside Pd
> sound source in Pd > audio callback in the embedded instance
Can't the caller just communicate with Pd by passing messages through the libPd API? I thought the whole point of libPd was that you have your code and Pd's code running in the same process, with a lightweight wrapper in between, so there should be virtually zero latency in passing messages from your code to a running Pd. Certainly it shouldn't require timestamping to de-jitter. Or maybe I'm missing the point of your question...
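To be concrete, by "passing messages" I just mean calls like these, which go straight into the running Pd instance with no socket or serialization in between (from memory; the receiver names are made up):

libpd_bang("start");                 // fires [r start] in the patch immediately
libpd_float("tempo", 126.0f);        // [r tempo] receives 126
libpd_symbol("sample", "kick.wav");  // [r sample] receives the symbol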
Jamie