Thanks, Roman, for your explanation.
A question: is the [iemnet/tcpserver] packaged in Pd-extended also threaded? Because that one (on the Mac) gives an overall time comparable with [netreceive].
Measuring (with [timer]) what you call the round-trip time for each 'phrase' gives these results:
[tcpclient] -> [tcpserver]
w10 -> mac: 0-100 msec, 80% between 4-10 msec (25 sec for the whole transfer)
w10 -> pi: 30-53 msec, 95% between 36-43 msec (150 sec)
mac -> w10: 53-300 msec, 90% between 55-60 msec (200 sec)
Without sending back the 'ok' byte, the transfer of all data takes less than 1 msec.
I don't know yet what 'being threaded' exactly means, but if you're right, these large numbers tell me that communication with a threaded process is very costly.
About the "chattiness" of my application layer protocol: my goal is an embedded system, a Raspberry Pi connected with a serial cable to an Arduino, where the Pi takes care of the WiFi communication with a front-end Pd patch.
The 'ok' byte is necessary to know for sure that the Arduino did indeed receive and process each 'phrase' of data.
rolf
Roman wrote:
On Thu, 2017-01-12 at 14:46 +0100, rolfm@dds.nl wrote:
I'm sending data using [tcpserver] & [tcpclient] over WiFi with a router in a local network.
The source patch sends 6 bytes and then waits until a single byte comes back signalling that the next 6 bytes can be sent. In total, 3538 data records of 6 bytes are sent.
What could be the reason for this behaviour?
The "chattiness" of your application layer protocol. You went for the least optimal way to achieve high throughput. As for why there is a difference between Pd's [netsend]/[netreceive] and iemnet's [tcpclient]/[tcpserver], I can only guess. I believe Pd's implementation is not threaded and is executed directly in the DSP thread, while the iemnet classes are threaded. Using threads is often the better solution, because it avoids blocking Pd when the network buffer is full, but I assume it takes a wee bit more time for a message to be passed from Pd's main thread to the networking thread, and thus the overall round-trip time increases slightly. Since your protocol requires a full round trip for every 6 bytes, that small increase is amplified by the high number of round trips required.
I could imagine that the situation might be improved by lowering the latency in Pd's audio settings, but that's not a very substantiated guess.
Imagine you used the protocol between two computers that are farther apart from each other (i.e. not in the same LAN). An increase of the round-trip time from 1 ms to 10 ms would reduce your throughput by a factor of 10.
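To put rough numbers on it (back-of-the-envelope, assuming one full round trip per 6-byte record):

3538 records x 1 ms round trip  = ~3.5 sec spent just waiting
3538 records x 10 ms round trip = ~35 sec spent just waiting

while the payload itself is only 3538 x 6 = ~21 kB.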
A bug?
I don't think so. My advice is to rethink your application layer protocol. Try to reduce the "multi-ping-pong" to just one ping (containing all requests in one message) from the requesting party, answered by a single pong message from the providing party.
Let me tell you from my experience: the iemnet classes perform _well enough_ (if not excellently). It's easy to create a patch that saturates a 100 Mbit/s link without consuming much CPU.
Roman
Hey Rolf
On Tue, 2017-01-17 at 17:28 +0100, rolfm@dds.nl wrote:
Thanks, Roman, for your explanation.
A question: is the [iemnet/tcpserver] packaged in Pd-extended also threaded?
I think so, but I am not sure. It definitely uses an old version of iemnet.
Because that one (on the Mac) gives an overall time comparable with [netreceive].
The fact that the iemnet classes are threaded and Pd-vanilla's net classes are not just popped into my mind as a difference between them. Whether that is also the reason for the different round-trip times is just an assumption of mine, which seems plausible to me, but don't consider it verified.
Measuring (with [timer]) what you call the round-trip time for each 'phrase' gives these results:
[tcpclient] -> [tcpserver]
w10 -> mac: 0-100 msec, 80% between 4-10 msec (25 sec for the whole transfer)
w10 -> pi: 30-53 msec, 95% between 36-43 msec (150 sec)
mac -> w10: 53-300 msec, 90% between 55-60 msec (200 sec)
Without sending back the 'ok' byte, the transfer of all data takes less than 1 msec.
Right, that's what I would have expected.
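A quick sanity check against your own numbers, assuming one round trip per 6-byte phrase: 3538 phrases at roughly 40 msec each comes to about 142 sec for w10 -> pi, and at roughly 57 msec each to about 202 sec for mac -> w10, which is almost exactly the 150 sec and 200 sec you measured. In other words, practically the entire transfer time is spent waiting for round trips.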
I don't know yet what 'being threaded' exactly means,
It means that certain parts of the code run independently from the main routine. In the case of a real-time application like Pd, it often helps to put code that accesses non-deterministic I/O, like hard drives or network devices, into its own thread, in order to avoid blocking the DSP (main) process.
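If it helps, here is a tiny sketch of the idea in Python (nothing to do with Pd's or iemnet's actual code; the host and port are made up): a worker thread does the blocking network receive and hands data to the main loop through a queue, so the main loop never has to wait on the socket.

import queue
import socket
import threading
import time

incoming = queue.Queue()

def network_worker(host, port):
    # blocking receive loop, runs in its own thread
    with socket.create_connection((host, port)) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            incoming.put(data)      # hand the data over to the main loop

# hypothetical address, just for the example
threading.Thread(target=network_worker, args=("192.168.1.10", 9999),
                 daemon=True).start()

while True:
    # the main ("DSP") loop only does a non-blocking check of the queue
    try:
        data = incoming.get_nowait()
        print("received", len(data), "bytes")
    except queue.Empty:
        pass
    time.sleep(0.001)               # stand-in for the real-time work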
https://en.wikipedia.org/wiki/Thread_(computing)
but if you're right, these large numbers tell me that communication with a threaded process is very costly.
Don't focus on the threaded part. It is probably a bit more expensive, but in most cases it helps a lot. You're currently looking at a case where the latency induced by the network is quite small, so the latency induced by the threading looks big in comparison. Consider the case where you'd be using that protocol with a more distant device, say with a network-induced round trip of 10 ms. The additional small latency induced by threading wouldn't carry any weight anymore.
About the "chattiness" of my application layer protocol: my goal is an embedded system, a Raspberry Pi connected with a serial cable to an Arduino, where the Pi takes care of the WiFi communication with a front-end Pd patch.
The 'ok' byte is necessary to know for sure that the Arduino did indeed receive and process each 'phrase' of data.
And I still believe it is not necessary.
First of all, consider the different nature of a serial line and a network connection. A serial line might have low bandwidth, but there won't be any (or only very small) transport latency, so a ping-pong-like protocol might be suitable there. A network connection, however, always has latency, and in order to be able to saturate the full available bandwidth, the protocol needs to be designed with that in mind. TCP already ensures data integrity and completeness, so you don't need to check every single 6-byte packet yourself.
From what I understand, you currently use the RPi as a proxy between a computer on the same LAN and the attached Arduino. Messages are passed as-is directly from the network to the serial line, the Arduino confirms, and the confirmation byte is sent back to the computer. Is that correct?
What I would do is send the data in one go from the computer to the RPi. You could add a start and an end tag, so that the RPi knows when the transfer starts and when it is finished. Due to the nature of TCP, you can be sure that once the RPi has received the end tag, all data has been transferred correctly; there is no need to double-check that. You can store the data in a [text], [list], [table] or whatever seems appropriate. From then on, you can still use the old (current) protocol to transfer the data from the RPi to the Arduino over the serial line.
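To make that concrete, here is a rough sketch of the RPi side in Python rather than Pd (just for readability; the tag bytes, port numbers and the pyserial usage are my own assumptions, not a finished implementation):

import socket
import serial   # pyserial, assumed to be installed on the RPi

START_TAG = b'\x02'      # arbitrary markers, pick bytes that never
END_TAG   = b'\x03'      # occur in your data
PHRASE_SIZE = 6

# 1) receive the whole block over TCP: one "ping", no per-phrase ack
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9999))          # hypothetical port
srv.listen(1)
conn, _ = srv.accept()

buf = b""
while END_TAG not in buf:
    chunk = conn.recv(4096)
    if not chunk:
        break                        # connection closed early
    buf += chunk

# TCP already guarantees order and completeness, so once the end tag
# has arrived the data is known to be complete
data = buf[buf.index(START_TAG) + 1 : buf.index(END_TAG)]

# 2) forward it to the Arduino phrase by phrase over the serial line,
#    keeping the old 'ok'-byte handshake only on this leg
ard = serial.Serial("/dev/ttyACM0", 115200)   # hypothetical port/baud rate
for i in range(0, len(data), PHRASE_SIZE):
    ard.write(data[i:i + PHRASE_SIZE])
    ard.read(1)                      # wait for the Arduino's 'ok' byte

conn.send(b'done')                   # single "pong" back to the computer
conn.close()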
The reliability of this approach will be the same, but it will be _much_ faster.
Roman