On Wed, 2010-03-24 at 17:23 +0000, martin.peach@sympatico.ca wrote:
reduzierer wrote:
From what I know, there is an internal buffer of roughly 4 kB for the sending
sockets in both netclient and netserver (I can't recall whether this buffer is built into the externals or is part of the OS's network subsystem). If that limit is hit, the Pd process is blocked until the buffer has been emptied again, which obviously leads to audio drop-outs.
IMHO, this is a design flaw that all sending net externals suffer from. There is no way to check the state of this buffer, which would be required in order to avoid overrunning it. It would be sufficient to be notified when the buffer is completely empty, so that you could design your patch to send the next message only once the previous one is through.
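To illustrate, what I imagine is something like a writability check at the C level before each send (just a rough sketch of the idea, not code from any actual external):

#include <sys/select.h>
#include <sys/socket.h>

/* Returns 1 if send() on sockfd would not block right now, 0 otherwise. */
int socket_is_writable(int sockfd)
{
    fd_set wfds;
    struct timeval tv = {0, 0};   /* zero timeout: poll, don't wait */

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);
    return select(sockfd + 1, NULL, &wfds, NULL, &tv) > 0;
}

An external could run such a check and report the result through an outlet, so the patch knows whether it is safe to send 'now'.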
IMHO it's not a design flaw, but part of TCP's design philosophy that tries to send packets until they are received. In the original concept, bombs would be dropping all over the countryside, destroying cables and data centres willy-nilly, while messages could still get through to the missile silos and AA gunners.
The fact that those externals/objects block the Pd process has nothing to do with TCP's design philosophy. I am no expert in this field, but I know of other implementations in other programming languages that handle such situations more gracefully, for instance the Python twistd server, which the new netpd-server is built upon.
Try the latest version. I think what was blocking Pd was that it was trying to print thousands of error messages, which is not one of Pd's strong points. The Pd process blocked because [tcpserver] kept trying to send packets and then printing an error message whenever it failed. The new version stops trying to send when that happens, until it gets unblocked. I tried it and it works: it stops sending when it can't create sender threads, and then starts again gracefully when it is manually unblocked. All the intervening data is lost, of course, but a "blocked" message is emitted through the status outlet, so the Pd patch can use a delay or something to restart the server.
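One common way to get that kind of stop-instead-of-block behaviour at the socket level looks roughly like the following sketch (not the literal [tcpserver] source, which uses sender threads, just the general idea):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Assumes the socket was made non-blocking (e.g. fcntl with O_NONBLOCK).
   Returns bytes sent, 0 if the kernel buffer is full (caller raises a
   "blocked" flag instead of printing errors), -1 on a real error. */
ssize_t try_send(int sockfd, const char *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return n;
}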
OK, I'll check that out, as well as IOhannes' modified versions.
Imagine an Apache server being completely blocked because one of its clients refuses to receive the webpage quickly enough.
It's not the same thing: Apache doesn't send huge numbers of messages to clients that didn't request them. If a client is dead, a single thread hangs until it times out. Apache won't try sending anything unless it is asked to.
Hm... when a client requests a file, the file is delivered without blocking other processes, even if the requesting client suddenly disappears (in that case only that socket's sending thread is halted). That is what I meant by 'gracefully'. This is not the case with Pd's net objects. At this level it doesn't matter whether the data was requested or not.
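In C terms, the 'graceful' scheme I mean would look something like this hypothetical per-client sender thread, where a dead client stalls only its own thread (a sketch with made-up names, not Apache's or Pd's actual code):

#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>

struct client { int sockfd; const char *data; size_t len; };

static void *sender_thread(void *arg)
{
    struct client *c = arg;
    size_t sent = 0;
    while (sent < c->len) {
        /* this send() may block, but only this client's thread */
        ssize_t n = send(c->sockfd, c->data + sent, c->len - sent, 0);
        if (n <= 0) break;   /* client gone: give up, others unaffected */
        sent += (size_t)n;
    }
    return NULL;
}

static void deliver(struct client *c)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, sender_thread, c) == 0)
        pthread_detach(tid);   /* the main thread never waits */
}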
The application you are using sounds more like a video streaming server. Those usually use UDP or something similar, with zero handshaking, so nothing hangs, and the receiver sees video that's often choppy and glitchy because of it.
Yeah, in the case of netpd it is true that the application is about sending data streams to many clients. UDP wouldn't work, though, since the system relies on correct order and completeness. Requesting every little piece of data first doesn't make sense to me, since that would mean virtually re-implementing the tasks of TCP.
But this is the current situation with the Pd net externals.
Which ones don't work? Only the TCP ones?
Sorry for not being precise. I actually only tested the TCP ones. And yes, all the ones that create sending sockets suffer from that problem.
Don't get me wrong, there is no point in rebuilding Apache in Pd, nor am I demanding that anyone change the situation. Those net externals are generally very useful and cover a wide range of applications, but they also fail in other situations, and that has _nothing_ to do with TCP's design,
Situations like what?
Generally this applies to all situations where you're doing DSP (or any other deterministic realtime work) while performing operations that cannot happen instantly or don't fit into that deterministic scheme, like 'save this multi-megabyte file now' or 'send those lines of data over a socket now' or 'calculate those seven million numbers now'. If the 'now' cannot happen right now but needs some more real time, this deterministic scheme needs to be broken up so that logical time can match real time again.

My idea was that the net objects could give some feedback about whether they can send data 'now' or not; that would provide a solution for such situations. As I said, this applies not only to net objects but also to file operations with [textfile], [soundfiler] etc., and probably other things as well. So those situations arise whenever DSP is turned on and something wants to be executed in 0 logical time that needs more than <audio buffer size> of real time. My guess was that Ivica was describing such a situation.
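The usual workaround looks roughly like this in C (a hypothetical sketch with made-up names, not code from any existing external): hand the slow job to a worker thread and let the patch check for completion in logical time.

#include <pthread.h>
#include <unistd.h>

static volatile int job_done = 0;

static void *slow_job(void *arg)
{
    (void)arg;
    sleep(2);        /* stands in for the file write / big calculation */
    job_done = 1;    /* the patch side polls this flag from a clock */
    return NULL;
}

/* kick it off without consuming logical time */
static void start_job(void)
{
    pthread_t tid;
    job_done = 0;
    if (pthread_create(&tid, NULL, slow_job, NULL) == 0)
        pthread_detach(tid);
}

The Pd side could then poll job_done from a clock or [metro] and emit a bang when the work is finished, so no message ever has to wait on real time.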
they fail because the implementation of those objects is not designed to handle those situations.
Without knowing what those situations are, I can't say.
I hope I could make myself clear. Excuse my clumsiness in describing the situation.
Roman