Josh Lawrence wrote:
On Fri, Nov 28, 2008 at 3:10 PM, Ilias Anagnostopoulos I.Anagnostopoulos@sheffield.ac.uk wrote:
I have used PD as a 24/7 source encoder/streamer, using mp3cast~, in the OtherSide project. This was a server running PD with a synthesis patch that could be controlled via OSC through IRC, with a bot listening in a chatroom there. It has been up for the past 3-4 months non-stop, with no maintenance, within the University of Sheffield intranet.
Forgive me if this is old news to everyone here, but I find this _extremely_ interesting. Essentially, using IRC as a livecoding environment! Sounds very cool, I would love to hear more about how you did this. :)
Thanks Josh,
The full code for this is located at the SVN repository of the OtherSide server (http://otherside.servebeer.com/software/otherside/). I did it by using a machine as a dedicated headless server. I installed Ubuntu Linux Server edition, installed Apache2, Icecast2, Festival, IRCD-IRC2 and CGI:IRC from the repositories, and then installed X11 and PD-extended. The PD side of things is basically a synthesis engine listening for OSC messages. The Apache server hosts the website from which you access everything. Icecast streams the audio as an internet-radio stream, receiving its source from the mp3cast~ PD object (from the Unauthorized library by Yves Degoyon, if I'm not mistaken). The PD patches do not use a dac~, since I'm not interested in having sound on the headless server.
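For reference, a headless PD instance can be started without a GUI; this is only a sketch (the binary name, the patch path and the idea of launching it from Python are my assumptions, not necessarily how OtherSide does it):

import subprocess

# Assumed binary name and patch path; PD-extended installs may call the binary
# "pdextended" instead, and the real patch lives in the OtherSide SVN tree.
subprocess.Popen(["pd", "-nogui", "-open", "/srv/otherside/main.pd"])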
So PD listens on two different local ports: one for OSC, and one for raw data that is strictly not OSC. I created a Python IRC bot that works in a similar fashion to an infobot. It sits in a chatroom and listens for messages. When it receives messages it can understand, it forwards them to the appropriate place. When it receives messages it doesn't understand, it forwards them to the port where PD listens for raw data. The raw data is actually the ASCII text from the chatroom, which PD translates to MIDI pitches.
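To give a rough idea of that forwarding step (this is not the actual bot from the SVN repository; the host, the port number and the use of UDP with [netreceive] are my assumptions):

import socket

PD_HOST = "127.0.0.1"
PD_RAW_PORT = 3001   # hypothetical port where a [netreceive] in the patch expects raw chat text

raw_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def forward_raw(chat_line):
    """Send an unrecognised chat line to PD as plain ASCII.

    On the PD side the characters can then be unpacked and mapped to
    MIDI pitches. The trailing semicolon terminates the FUDI message
    that [netreceive] expects.
    """
    raw_sock.sendto(chat_line.encode("ascii", "ignore") + b";\n",
                    (PD_HOST, PD_RAW_PORT))

forward_raw("hello otherside")   # each character ends up as a MIDI pitch in PD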
If you send a message preceded by the word "osc" then the word right next to osc acts as the "path" and the numbers next to it as the values. These can control midi note velocity, duration, effects, signal processing and so on. This information is forwarded by the Python Bot to the PD port listening for OSC. Other OSC commands control a wavetable oscillator going through effects as well.
If you send a message preceded by the word "talk", the bot writes everything after "talk" to a buffer, which is fed to "Festival", the speech synthesis system. Festival performs a text-to-wave conversion, and "talks" what you've written. PD then plays back the "speech", going through effects controlled by the OSC.
I like the idea of IRC because it's an old and trusted method of online interaction among several people, with built-in functions to support exactly that. No use in reinventing the wheel. The actual idea is that people can create their own chatrooms to which the bot can be invited, for a specific "sound", "phrase" or "experiment", or for interaction with other bots. For instance, you can create a chatroom and connect your own bot to it, one that reads OSC directly from an OSC controller such as the Lemur and pipes it to the IRC channel for the OtherSide bot to read!
You can also have chatrooms where people just talk about experiments without the bot being present, so the discussion doesn't alter the sound, unless that is desired!
There are two distinctive strong points. The first is the ability of IRC servers to connect to each other, forming "networks" that essentially share all chat data and split the network traffic across many servers; only the control data needs to be shared between them, since the same sonic output is generated on each server from that data! The second is that, unlike projects such as NetPD, users do not need to have PD or any specific software installed on their computers; they can work directly from their web browser through CGI:IRC (platform-independent). This also solves the "latency problem" of many people collaborating in the same room, since only ONE machine needs to output the sound that ALL of the people are working on creating.
This was actually my Master's thesis; I've written a paper on it, available from the website I mentioned, including block diagrams and configuration files.
I might package this at some point, although it is going to be quite a massive package. Let me know if you need more explanations of the paper or anything like that. Feel free to download the source and try it out; it's GNU GPL licensed.
-Ilias