hi martin, hi all
i've been testing the new netpd-server based on the new [tcpserver]/[tcpsocketserver FUDI] for a while now. it definitely solved some problems, but some new ones were introduced.
i found that the most recent version of [tcpserver] performs quite badly cpu-wise. this has some side effects. in netpd, when a certain number of users are logged in (let's say 16), it can happen that the traffic of those clients makes the netpd-server use more than the available cpu time. i made some tests and checked whether all messages come through and whether the messages delivered by the server are still intact. under normal circumstances there is no problem at all. but under heavy load, when the pd process demands more than the available cpu time, some messages are corrupted or lost completely; in the worst case the pd process segfaults at the moment a client connects or disconnects. i guess this is due to some buffer under- or overrun between pd and the tcp stack, but i don't really know.

i wrote a benchmark patch and found out that not only does the new netpd-server patch perform badly, but quite some portion of the bad performance comes from the new [tcpserver] itself. the test patch compares the performance of sending data to clients with [tcpserver] and with [netserver] from maxlib. the version of [tcpserver] shipped with the current pd-extended performs slightly better than [netserver] (tested on OS X and linux). however, the most recent version, which solves the tcp buffer overrun problem, performs ~11 times worse than [netserver]. is that the trade-off for solving the other issue? or could this theoretically be improved?
unfortunately, i still don't have a netpd-server running that can be considered stable. the current one doesn't crash anymore because clients lose their network connection, but it crashes when there is too much traffic. moreover, it cannot be considered reliable, because it drops messages under certain circumstances.
@code-maintainers: is anyone maintaining the code of [netserver], or maxlib in general? this object class still suffers from the 'buffer overrun -> pd hangs' problem. since the same problem was fixed for [tcpserver], it might not be too hard to port that fix to [netserver]. i am not able to dig into c sources myself, so i wanted to kindly ask here whether someone is interested in doing it. [tcpsocketserver FUDI] was meant as a replacement for [netserver] in order to get rid of the pd hangs caused by it. however, i am not sure anymore whether this approach was a good idea at all, since the overhead of implementing FUDI parsing and the like in pd instead of in c seems to be enormous.
roman

[attachments: benchmark_server.pd, benchmark_server_testclient.pd]
Do you have a good description of the problem with [netserver]? If so, I could take a stab at it.
.hc
Oh, I forgot to mention: do you have a simple patch to reproduce the problem? That would be even better.
.hc
hi hans

On Mon, 2009-04-27 at 20:43 -0400, Hans-Christoph Steiner wrote:
Oh, I forgot to mention: do you have a simple patch to reproduce the problem? That would be even better.
yeah, the patch to reproduce it is pretty simple. however, the setup needed to trigger the problem is not so simple, since it requires two computers.
the problem is that in the tcp protocol a connection is considered to exist on both ends until both agree to terminate it. the problem with [netserver] arises when there is no chance to communicate the termination, for instance when a client loses its network connection. such a condition might be caused by a bad wifi signal, somebody unplugging the ethernet cable, etc.
now, if a client vanishes without [netserver] noticing it, [netserver] will still try to send messages to this client. since those messages cannot be delivered, they remain in [netserver]'s internal buffer. when that buffer fills up, [netserver] blocks the whole pd process until the buffer gets emptied again. if a client that lost its connection reconnects, it gets assigned a new socket, so the buffer on the previous socket never gets emptied and the pd process hangs forever.
a solution for handling this situation is needed. in [tcpserver], this was done by providing an additional 'status' outlet. after sending a message to one or all clients, it reports whether and how much of the data could be sent. this gives a great deal of control to the patch programmer, since it lets them decide the best strategy for a given situation: a patch can simply disconnect the client; a buffer in the pd patch can keep messages until they can be delivered; or the patch can simply drop messages that cannot be sent in time. depending on the application, all of those strategies make sense. that is why i am in favor of an approach that lets the patch programmer decide.
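To illustrate the kind of feedback the 'status' outlet gives, here is a minimal sketch in C of a send call that never blocks and instead reports how much data the TCP stack accepted. This is only an assumed illustration of the idea, not the actual [tcpserver] source; the function name is made up and a POSIX socket API is presumed.

/* sketch only: try to send on a TCP socket without blocking and report
 * how much data actually went out (error handling kept minimal) */
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* returns the number of bytes the TCP stack accepted (0..len),
 * or -1 if the connection is really broken */
static ssize_t try_send(int sockfd, const char *buf, size_t len)
{
    ssize_t sent = send(sockfd, buf, len, MSG_DONTWAIT);
    if (sent >= 0)
        return sent;                 /* all or part of the data was sent */
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 0;                    /* send buffer full: nothing sent, but no hard error */
    return -1;                       /* hard error: peer is gone */
}

The three outcomes map directly onto the strategies mentioned above: disconnect on a hard error, keep the unsent part in a buffer, or simply drop it.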
i attached two test patches, which are meant to be run on two different boxes. start both, let the client connect to the server, and then let the server send messages to the client. pull the ethernet plug and you'll trigger the problem pretty quickly. on my linux box, the server patch hangs pd after exactly 242 messages.
roman
Roman Haefeli wrote:
i've been testing the new netpd-server based on the new [tcpserver]/[tcpsocketserver FUDI] for a while now. it definitely solved some problems, but some new ones were introduced.
i found that the most recent version of [tcpserver] performs quite badly cpu-wise. this has some side effects. in netpd, when a certain number of users are logged in (let's say 16), it can happen that the traffic of those clients makes the netpd-server use more than the available cpu time. i made some tests and checked whether all messages come through and whether the messages delivered by the server are still intact. under normal circumstances there is no problem at all. but under heavy load, when the pd process demands more than the available cpu time, some messages are corrupted or lost completely; in the worst case the pd process segfaults at the moment a client connects or disconnects. i guess this is due to some buffer under- or overrun between pd and the tcp stack, but i don't really know.
Hi Roman,
Did you try using the new [timeout( message? The latest version of tcpserver defaults to a 1ms timeout, so if you have a bunch of disconnected clients, Pd will hang for 1ms each, which will quickly add up to more than the audio block time, and then Pd will start thrashing and eventually die or become comatose, as it were.
I think you need to experiment with different values for the timeout. Set it to zero and it should give the same results as the previous version; maybe try something around 100 instead of the default 1000 (it's in microseconds). The other way to fix this in the tcpserver source is to make a new thread for each client, but I'm afraid that will just open another can of worms/zombies.
Martin
On Thu, 2009-04-30 at 10:17 -0400, Martin Peach wrote:
Hi Roman, Did you try using the new [timeout( message? The latest version of tcpserver defaults to a 1ms timeout, so if you have a bunch of disconnected clients, Pd will hang for 1ms each, which will quickly add up to more than the audio block time, and then Pd will start thrashing and eventually die or become comatose, as it were.
no, i haven't tried this parameter yet. but i sure will, and i'll report back when i can tell more about how it behaves.
i haven't fully understood what it does and what it can be used for. could you elaborate on that a bit more? still, it sounds a bit strange to me that i need to tweak a networking object with a time value to get correct operation.
I think you need to experiment with different values for the timeout.
ok
Set it to zero and it should give the same results as the previous version;
you mean [tcpserver] will hang pd when the buffer of a certain socket is full? or do you mean the version that cut off some parts of messages under certain circumstances?
maybe try something around 100 instead of the default 1000 (it's in microseconds). The other way to fix this in the tcpserver source is to make a new thread for each client, but I'm afraid that will just open another can of worms/zombies.
i hardly know anything about threading, but i guess that is what other servers do (e.g. apache). also, i didn't see a way around dynamically creating an instance of the protocol-handling abstraction for each socket, which is, i guess, conceptually (if not technically) similar to threading in the pd world.
roman
Roman Haefeli wrote:
no, i haven't tried this parameter yet. but i sure will, and i'll report back when i can tell more about how it behaves.
i haven't fully understood what it does and what it can be used for. could you elaborate on that a bit more? still, it sounds a bit strange to me that i need to tweak a networking object with a time value to get correct operation.
When you send some message through tcpserver, the send routine first checks to see if it can be sent. The call to do this is a function known as "select", which has a timeout parameter. The select call returns as soon as the socket is available or the timeout expires, whichever comes first. If the socket is blocked, select would never return if there were no timeout. So I gave the call a default 1ms timeout.
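A minimal sketch of the mechanism described here, assuming plain POSIX sockets (this is not the actual tcpserver source; the function name is made up): select() is asked whether the socket is writable for at most the given number of microseconds, and send() is only called once it is.

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* wait at most timeout_us microseconds for sockfd to become writable,
 * then send; returns bytes sent, 0 if select timed out, -1 on error */
static ssize_t send_with_timeout(int sockfd, const char *buf, size_t len,
                                 long timeout_us)
{
    fd_set wfds;
    struct timeval tv;
    int ready;

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);
    tv.tv_sec  = timeout_us / 1000000;
    tv.tv_usec = timeout_us % 1000000;

    ready = select(sockfd + 1, NULL, &wfds, NULL, &tv);
    if (ready <= 0)
        return ready;                /* 0: timeout expired (socket still blocked), -1: error */
    return send(sockfd, buf, len, 0);
}

At the default timeout of 1000 (microseconds), sixteen blocked clients would already cost about 16 ms of waiting per broadcast, far more than one 64-sample audio block at 44.1 kHz (roughly 1.45 ms), which is the thrashing scenario described above.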
This could all be done using threads as well, but I just don't know when I'll have time to do it. I still don't see that it would solve your problem anyway: if your application insists on sending to disconnected clients, you would have lots of threads sitting around and still get no feedback about the connection.
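For reference, a rough sketch of what the thread-per-client idea could look like, assuming POSIX sockets and pthreads (the names are hypothetical; this is not taken from the tcpserver source): each client gets a long-lived sender thread with its own message queue, so a blocking send() only ever stalls that client's thread, never the Pd thread.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

typedef struct msg { char *buf; size_t len; struct msg *next; } msg_t;

typedef struct {
    int fd;                      /* this client's socket */
    msg_t *head, *tail;          /* queue of pending messages */
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} client_t;

/* per-client sender thread: takes messages off the queue and sends them;
 * send() may block, but only this thread stalls */
static void *client_sender(void *arg)
{
    client_t *c = arg;
    for (;;) {
        pthread_mutex_lock(&c->lock);
        while (!c->head)
            pthread_cond_wait(&c->nonempty, &c->lock);
        msg_t *m = c->head;
        c->head = m->next;
        if (!c->head)
            c->tail = NULL;
        pthread_mutex_unlock(&c->lock);

        send(c->fd, m->buf, m->len, 0);
        free(m->buf);
        free(m);
    }
    return NULL;
}

/* called from the Pd thread: copy the message, enqueue it, return at once
 * (malloc error checks omitted for brevity) */
static void client_enqueue(client_t *c, const char *buf, size_t len)
{
    msg_t *m = malloc(sizeof *m);
    m->buf = malloc(len);
    memcpy(m->buf, buf, len);
    m->len = len;
    m->next = NULL;

    pthread_mutex_lock(&c->lock);
    if (c->tail) c->tail->next = m; else c->head = m;
    c->tail = m;
    pthread_cond_signal(&c->nonempty);
    pthread_mutex_unlock(&c->lock);
}

/* set up a client and start its sender thread */
static client_t *client_new(int fd)
{
    client_t *c = calloc(1, sizeof *c);
    pthread_t tid;
    c->fd = fd;
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->nonempty, NULL);
    pthread_create(&tid, NULL, client_sender, c);
    pthread_detach(tid);
    return c;
}

This keeps Pd responsive, but as noted above it gives the patch no feedback by itself: messages to a vanished client simply pile up in that client's queue, and shutting the thread down cleanly on disconnect is exactly the can of worms mentioned.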
Martin
On Fri, 2009-05-01 at 09:16 -0400, Martin Peach wrote:
When you send some message through tcpserver, the send routine first checks to see if it can be sent. The call to do this is a function known as "select", which has a timeout parameter. The select call returns as soon as the socket is available or the timeout expires, whichever comes first. If the socket is blocked, select would never return if there were no timeout. So I gave the call a default 1ms timeout.
ok, i think i understand. thanks for the explanation.
This could all be done using threads as well, but I just don't know when I'll have time to do it.
no hurry. it's not that i know threading would help with the issues i'm experiencing; i just wanted to have my troubles reported. and i think i read somewhere that server implementations often use a separate thread for each socket.
I still don't see that it would solve your problem anyway: if your application insists on sending to disconnected clients, you would have lots of threads sitting around and still get no feedback about the connection.
the only feedback needed is: was something actually sent or not? if you (or the patch) _know_ that messages are not received by the other end, then you (or the patch) can handle the situation somehow. anyway, that is the part that seems to be working already: with the current [tcpserver], you notice whether the other end has vanished or is still listening. the problems i currently encounter come from the fact that the performance of the new version is probably 20 times worse than the version included in the current stable pd-extended. for me it's a problem, since with a sane number of clients connected (let's say 16), it already overloads the cpu of a 1.7GHz pentium m processor. why the big difference from the previous version?
roman
Roman Haefeli wrote:
the only feedback needed is: was something actually sent or not? if you (or the patch) _know_ that messages are not received by the other end, then you (or the patch) can handle the situation somehow. anyway, that is the part that seems to be working already: with the current [tcpserver], you notice whether the other end has vanished or is still listening. the problems i currently encounter come from the fact that the performance of the new version is probably 20 times worse than the version included in the current stable pd-extended. for me it's a problem, since with a sane number of clients connected (let's say 16), it already overloads the cpu of a 1.7GHz pentium m processor. why the big difference from the previous version?
If you set the sending timeout to zero (by sending a [timeout 0( message to [tcpserver]) then the performance should be the same as with the older version. AFAIK that's all I changed. Did you try that yet? If not, something else is causing the slowdown. If it works better, maybe set the timeout to 10 instead of 1000.
Martin
On Fri, 2009-05-01 at 18:48 -0400, Martin Peach wrote:
If you set the sending timeout to zero (by sending a [timeout 0( message to [tcpserver]) then the performance should be the same as with the older version. AFAIK that's all I changed. Did you try that yet? If not, something else is causing the slowdown. If it works better, maybe set the timeout to 10 instead of 1000.
there is no difference in performance, no matter what value i use for 'timeout'. on my box, sending the message (in byte representation) from the benchmark test 1000 times takes ~90ms with [tcpserver]. the same message (in ascii representation) sent with [netserver] takes around 8ms. the only difference i can see with lower (< 10us) timeout values is that messages on the receiving side (client) are messed up: completely lost, partially cut, or concatenated together. on my box, the new [tcpserver] with 'timeout' set to 0 performs much worse than the old version with the buffer overrun problem.
have you tested on windows only? i haven't tried windows yet. how did you test?
roman
Roman Haefeli wrote:
there is no difference in performance, no matter what value i use for 'timeout'. on my box, sending the message (in byte representation) from the benchmark test 1000 times takes ~90ms with [tcpserver]. the same message (in ascii representation) sent with [netserver] takes around 8ms. the only difference i can see with lower (< 10us) timeout values is that messages on the receiving side (client) are messed up: completely lost, partially cut, or concatenated together. on my box, the new [tcpserver] with 'timeout' set to 0 performs much worse than the old version with the buffer overrun problem.
Maybe just calling select slows everything down then. It seems to be a trade-off between speed and reliability. You should really send udp packets; then nothing hangs if the other end doesn't receive them. You could still have a low-bandwidth tcp connection open to test the connection.
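A minimal sketch of the UDP alternative, again with plain POSIX sockets and made-up names: sendto() on a datagram socket hands the packet to the network stack and returns immediately, whether or not anyone is listening, so nothing can hang; the flip side is that datagrams may be silently lost.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* fire-and-forget: send one UDP datagram to host:port and return;
 * never blocks on an unreachable receiver, but offers no delivery guarantee */
static void udp_send(const char *host, unsigned short port,
                     const char *buf, size_t len)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    sendto(fd, buf, len, 0, (struct sockaddr *)&addr, sizeof addr);
    close(fd);
}

That missing delivery guarantee is presumably also why it may not fit netpd, which relies on every state message arriving.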
have you tested on windows only? i haven't tried windows yet. how did you test?
I didn't test for speed at all; I just checked that it worked on WinXP and Debian.
Martin
On Mon, 2009-05-04 at 19:41 -0400, Martin Peach wrote:
Maybe just calling select slows everything down then. It seems to be a trade-off between speed and reliability. You should really send udp packets; then nothing hangs if the other end doesn't receive them. You could still have a low-bandwidth tcp connection open to test the connection.
udp is no option for me (see previous mails). i really do need a working netpd-server, and the good thing is that the server doesn't necessarily need to be written in pd. i think i'll try the python road. i know a little python, whereas c is definitely too low-level for me, although it would probably be much more performant for what i want.
besides my situation, [tcpserver] generally isn't yet fully usable under real-world conditions, although it has been improved a lot (thanks for all your efforts!!!). for serious use, i think the performance issue is a real problem. but i also encountered other troubles.
in particular, there are certain situations where the pd process running the [tcpserver]-based netpd-server segfaults. this usually happens when a) there is some net traffic going on, and b) a client connects or disconnects. i wasn't able to track the problem down, so i am not really sure where it comes from, but the fact that it only happens on connects or disconnects makes me assume that it is somehow related to [tcpserver]. now i wonder: is it safe to send whatever message to [tcpserver] at any time? could it be that [tcpserver] gets 'confused' when a certain client disconnects while [tcpserver] is sending data to that particular client?
this problem doesn't exist with the [netserver]-based netpd-server. actually, this patch/external combo never ever segfaulted, as far as i remember; the only problem was a hanging pd process due to a full buffer.
have you tested on windows only? i haven't tried windows yet. how did you test?
I didn't test for speed at all; I just checked that it worked on WinXP and Debian.
i posted a benchmark patch in the first mail of this thread, if you're interested.
roman