thanks Roman for your explanation.
- question: is the [iemnet/tcpserver] packaged with Pd-extended also threaded? because that one (on the mac) gives an overall time comparable with [netreceive].
measuring (with [timer]) what you call the round-trip time for each 'phrase' gives these results:
[tcpclient]->[tcpserver]:
w10->mac: 0-100 msec, 80% between 4-10 msec (25 sec for the whole transfer)
w10->pi:  30-53 msec, 95% between 36-43 msec (150 sec)
mac->w10: 53-300 msec, 90% between 55-60 msec (200 sec)
without sending back the 'ok' byte, the transfer of all data takes less than 1 msec.
i don't know yet what 'being threaded' means exactly, but if you're right, these numbers tell me that communication with a threaded process is very costly.
about the ""chattiness" of my application layer protocol": my goal is an embedded system, raspberry-pi connected with a serial cable to an Arduino, where the Pi takes care of the wifi communication with a front-end Pd-patch.
the 'ok' byte is necessary to know for sure that the Arduino did indeed receive and process each 'phrase' of data.
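just to make the pattern explicit, here is a minimal stop-and-wait sketch in python (the address is hypothetical and only stands in for the pi; the real patch uses [tcpclient]/[tcpserver], of course):

import socket

HOST, PORT = "192.168.1.10", 9000           # hypothetical address of the pi

phrases = [bytes(6) for _ in range(3538)]   # 3538 records of 6 bytes each

with socket.create_connection((HOST, PORT)) as s:
    for phrase in phrases:
        s.sendall(phrase)                   # send one 6-byte 'phrase'
        ok = s.recv(1)                      # block until the single 'ok' byte returns
        if not ok:                          # empty read: connection was closed
            break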
rolf
Roman wrote: On Thu, 2017-01-12 at 14:46 +0100, rolfm@dds.nl wrote:
i'm sending data using [tcpserver] & [tcpclient] over wifi with a router in a local network.
the source patch sends 6 bytes and then waits until a single byte comes back, signalling that the next 6 bytes can be sent. in total, 3538 data records of 6 bytes each are sent.
what could be the reason for this behaviour?
The "chattiness" of your application layer protocol. You went for the least optimal way to achieve high throughput. As for why there is a difference between Pd's [netsend]/[netreceive] and iemnet's [tcpclient]/[tcpserver] I can only guess. I believe Pd's implementation is not threaded and is executed directly in the DSP thread, while iemnet classes are threaded. While using threads is often the better solution, because it avoids blocking Pd when the network buffer is full, I assume it takes a little wee bit more time for the message to be passed from the Pd's main thread to the networking thread and thus increases the overall round trip time slightly. Since your protocol requires a full round trip for every 6 bytes, the small increase in round trip is amplified by the high number of round trips required.
I could imagine that the situation might be improved by lowering the latency in Pd's audio settings, but that's not a very substantiated guess.
Imagine you use the protocol between two computers that are farther apart (i.e. not in the same LAN). An increase of the round-trip time from 1 ms to 10 ms would reduce your throughput by a factor of 10.
a bug?
I don't think so. My advice is to rethink your application layer protocol. Try to reduce the "multi-ping-pong" to just one ping (containing all requests in one message) from the requesting party, answered by a single pong message from the providing party.
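A minimal sketch of that batched variant, assuming the same hypothetical setup as the stop-and-wait sketch above: the requesting side sends all records in one message and waits for a single 'ok' once the other side has processed everything.

import socket

HOST, PORT = "192.168.1.10", 9000           # same hypothetical address as above

phrases = [bytes(6) for _ in range(3538)]

with socket.create_connection((HOST, PORT)) as s:
    s.sendall(b"".join(phrases))            # one ping: all 3538 records at once
    ok = s.recv(1)                          # one pong: a single 'ok' when done

This way there is only one round trip instead of 3538, so the round-trip time no longer dominates the total transfer time. Since the receiver knows the total length in advance (3538 records of 6 bytes), it can tell when the batch is complete before sending its 'ok'.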
Let me tell you from my experience: the iemnet classes perform _well enough_ (if not excellently). It's easy to create a patch that saturates a 100 Mbit/s link without consuming much CPU.
Roman