--- Martin Peach martin.peach@sympatico.ca wrote on Tue, 24.2.2009:
Roman Haefeli wrote:
On Mon, 2009-02-23 at 19:05 -0500, Martin Peach wrote:
Roman Haefeli wrote:
On Mon, 2009-02-23 at 21:03 +0000, Martin Peach wrote:
OK, I fixed it now in svn. It works on Debian. The select() call was not being done properly. Now I need to test it on Windows again.
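(For illustration, a minimal sketch of the kind of select()-based writeability check under discussion; this is not the actual svn code, and 'sockfd' as well as the exact timeout handling are assumptions. The one-second limit mentioned further down would live in the struct timeval, and raising it is exactly what makes Pd block for longer.)

#include <sys/select.h>

/* returns nonzero if the socket can accept more data, waiting at
   most one second for it to become writeable */
static int socket_is_writeable(int sockfd)
{
    fd_set wfds;
    struct timeval timeout;

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);  /* watch only this one socket for writing */
    timeout.tv_sec = 1;     /* the one-second limit discussed below */
    timeout.tv_usec = 0;

    /* the first argument must be the highest-numbered fd plus one */
    return select(sockfd + 1, NULL, &wfds, NULL, &timeout) > 0;
}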
hey, many thanks! it works. now i wonder what happens if the message 'tcpserver_send_buf: client 1 not writeable' is triggered. does that indicate that the buffer is cleared? does it mean that when this message appears, at least one message didn't come through?
Right now it means that the message is dropped. I can't see a way of holding on to it that wouldn't end up crashing Pd eventually if you keep sending to an unconnected client.
do i understand correctly that if the buffer is full, there is a time limit for it to become emptied, and if it is not emptied in that given time interval, the content is cleared? if this is true, i think the one-second interval is way too short. for instance, if a state dump happens in netpd (probably several hundred messages), it could well be that the connection is not fast enough to send all the messages in the given time, so they would be dropped. i guess for my own practice i'll change the code to use a much longer time interval.
But then it would hold up the whole process for even longer.
what is not solved yet: similar to the previous version, a drop-out occurs whenever a buffer overrun happens. unlike before, pd can no longer hang forever (it will hang at most for the given time limit), but there is still no mechanism provided to generally avoid drop-outs.
Better to have it immediately output a message stating that it is unable to deliver the data.
somehow i need to design netpd in a way that as soon as a single message is lost, the connection is shut down and established again, and the client then syncs with the other clients again. otherwise very bad things could happen (patches not being transmitted completely, and loading incomplete patches makes pd crash).
Well, the easiest thing would be to have [tcpserver] close the connection itself when that happens.
it's just too easy to trigger that. i think it would lead to too many unwanted disconnects.
The next best would be to have it output a message on a 'status' outlet that you could use to close the connection.
personally, i find this idea much better.
Yes, I'm gonna work on that.
woohoo!..
before the change, i could be sure that either all messages came through, or the server crashed at some point if messages could not be delivered. now, since the server doesn't crash anymore, i need to know whether messages were dropped. how can i know?
At the moment it prints to the Pd window, which isn't much use for control purposes. As I said, for me the easiest and most logical thing is to have the connection closed automatically, but then you have to keep track of the connection count to know whether it happened. What do you think?
without knowing how hard it would be to implement, the best solution IMO (and the only one that addresses all of the above issues) would be for the whole buffering to happen in the pd patch itself, so that the patch could adapt itself to the current network conditions. translated into features, this would mean that [tcpserver] needs to provide information about its internal buffer state. the simplest and probably most effective thing i can think of would be an additional outlet that sends a bang every time the internal buffer is completely emptied. i don't know if it has several buffers, one for each client; if so, then a number (the socket number) would probably be more appropriate than a bang. this way, a patch can send only as many messages as the bandwidth allows. it would also give the patch the possibility to decide what time interval of not being able to send messages justifies shutting down the connection. that interval could be set dynamically, without the need to change the code of [tcpserver].
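(A hypothetical sketch of that proposed notification, in Pd-external C; the helper name and the dedicated outlet are inventions for illustration, since nothing like this exists in [tcpserver] yet.)

#include "m_pd.h"

/* invented helper: called once a client's socket can accept data
   again, it sends that client's socket number out a dedicated outlet
   so the patch knows it may resume sending to that client */
static void tcpserver_notify_writeable(t_outlet *writeable_outlet, int sockfd)
{
    outlet_float(writeable_outlet, (t_float)sockfd);
}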
The buffer is maintained by the TCP stack. There is no way of knowing if it is empty, only if it can accept more.
i see. even just knowing that it can accept more would be good, i guess.
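(Again just a sketch, assuming the sockets are set non-blocking, which I'm not certain [tcpserver] does: at the send() level, "cannot accept more" shows up as EAGAIN/EWOULDBLOCK when the stack's send buffer is full.)

#include <errno.h>
#include <sys/socket.h>

/* hands 'len' bytes to the TCP stack; returns 0 if the stack's send
   buffer cannot accept more right now, otherwise the number of bytes
   accepted, or -1 on a real error */
static ssize_t try_send(int sockfd, const void *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return n;
}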
i see that implementing those features would make the use of, and the programming around, [tcpserver] much more complex, although it would make it much more powerful. personally, i am all for giving the most control to the patch programmer, since i believe that only then can pd be used for robust programming. it's probably a matter of whether someone sees pd as a fully featured programming language or rather as a tool for fast prototyping or 'quick hacking-together' à la 'reaktor'. both expectations are valid, but speaking for myself, i never found that things were _too_ low-level in pd.

[tcpserver] is actually a good example for explaining what i mean: it was originally designed to transport streams of data between the server and clients. in order to transport packet-oriented protocols, [tcpserver] would have needed to be adapted accordingly, and each protocol would have required its own code. the fact that i can do all that in pd lets me implement the protocols i personally need, without touching the code of [tcpserver]. this way, i can expand the functionality of [tcpserver] myself. the same would go for [tcpserver] providing more info about its internal state: it would enable the patch programmer to design a server around it for very particular needs.

this is what i think. sorry, that got quite long again. what do you think?
Yes, I agree. I think a status outlet on the [tcpserver] could be extended later to have more messages. Some of the stuff that gets printed to the Pd window could go there and then it could be handled by the patch instead of the 'operator'. I don't want to keep adding more outlets, so it would output lists with a selector, like [comport].
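(A minimal sketch of what such a selector-based status message could look like in Pd-external C; the selector name "blocked" and the helper function are inventions, since the actual message set hasn't been decided here. In the patch it could then be picked up with [route blocked].)

#include "m_pd.h"

/* invented helper: report a client whose message was dropped as
   "blocked <client#>" on the status outlet, so the patch can decide
   for itself whether to close that connection */
static void tcpserver_status_blocked(t_outlet *status_outlet, int client)
{
    t_atom at;
    SETFLOAT(&at, (t_float)client);
    outlet_anything(status_outlet, gensym("blocked"), 1, &at);
}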
i totally agree that instead of adding more outlets, it would be better to provide additional information on the same outlet with an appropriate selector.
i am very happy to see that we agree, and that you are willing to address the existing issues. many thanks for your help.
roman