Hey Rolf
On Tue, 2017-01-17 at 17:28 +0100, rolfm@dds.nl wrote:
> Thanks, Roman, for your explanation.
> Question: is the [iemnet/tcpserver] packaged with Pd-extended also threaded?
I think so, but I am not sure. It definitely uses an old version of iemnet.
> because that one (on the Mac) gives an overall time comparable with [netreceive].
The fact that the iemnet classes are threaded while Pd-vanilla's net classes are not just popped into my mind as a difference between them. Whether that is also the reason for the different round-trip times is just an assumption of mine; it seems plausible to me, but don't consider it verified.
> Measuring (with [timer]) what you call the round-trip time for each 'phrase' gives these results:
> [tcpclient] -> [tcpserver]
> w10 -> mac: 0-100 msec, 80% between 4-10 msec (25 sec for the whole transfer)
> w10 -> pi: 30-53 msec, 95% between 36-43 msec (150 sec)
> mac -> w10: 53-300 msec, 90% between 55-60 msec (200 sec)
> Without sending back the 'ok' byte, the transfer of all data takes less than 1 msec.
Right, that's what I would have expected.
> I don't know yet what 'being threaded' exactly means,
It means that certain parts of the code run independently from the main routine. In the case of a real-time application like Pd, it often helps to put code that accesses non-deterministic I/O (like hard drives or network devices) into its own thread, in order to avoid blocking the main (DSP) thread.
https://en.wikipedia.org/wiki/Thread_(computing)
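To illustrate the idea outside of Pd (a rough Python sketch; the host, port and buffer size are made up): a worker thread does the blocking socket reads, so a slow network never stalls the main loop.

    import socket
    import threading
    import queue

    inbox = queue.Queue()   # hands received data over to the main loop

    def reader(sock):
        # all blocking recv() calls happen in this thread
        while True:
            data = sock.recv(4096)
            if not data:    # connection closed
                break
            inbox.put(data)

    sock = socket.create_connection(("192.168.1.10", 9999))
    threading.Thread(target=reader, args=(sock,), daemon=True).start()

    # the main loop only polls the queue and never blocks:
    #     try:
    #         chunk = inbox.get_nowait()
    #     except queue.Empty:
    #         pass

That is roughly what the threaded iemnet classes do internally (in C), while the non-threaded classes do the socket I/O in the main thread.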
> but if you're right, these big numbers tell me that communication with a threaded process is very costly.
Don't focus on the threading. It is probably a bit more expensive, but in most cases it helps a lot. You're looking at a case where the latency induced by the network is quite small, so the latency induced by the threading looks big by comparison. Now consider using the same protocol with a more distant device, say with a network-induced round trip of 10 ms: the small additional latency from threading wouldn't carry any weight anymore.
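Back-of-envelope, with made-up numbers just to show the proportions (the phrase count is only a rough guess from your 25 sec / ~7 msec figures):

    # hypothetical figures, not a benchmark
    phrases = 3500          # roughly 25 s at ~7 ms per phrase
    thread_cost = 0.0002    # assume 0.2 ms threading overhead per round trip

    for rtt in (0.001, 0.010):   # 1 ms LAN vs 10 ms distant link
        total = phrases * (rtt + thread_cost)
        share = thread_cost / (rtt + thread_cost)
        print(f"rtt {rtt * 1000:.0f} ms: {total:.1f} s total, "
              f"threading is {share:.0%} of it")

With a 1 ms round trip the assumed threading overhead is about a sixth of the total; with a 10 ms round trip it is about 2%.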
about the ""chattiness" of my application layer protocol": my goal is an embedded system, raspberry-pi connected with a serial cable to an Arduino, where the Pi takes care of the wifi communication with a front-end Pd-patch.
the 'ok' byte is necessary to know for sure that the Arduino did indeed receive and process each 'phrase' of data.
And I still believe it is not necessary.
First of all, consider the different natures of a serial line and a network connection. A serial line might have low bandwidth, but there is no (or only a very small) transport latency, so a ping-pong-like protocol might be suitable. A network connection, however, always has latency, and in order to saturate the full available bandwidth, the protocol needs to be designed for that. TCP already ensures data integrity and completeness, so you don't need to check every single 6-byte packet.
From what I understand, you currently use the RPi as a proxy between a computer on the same LAN and the attached Arduino. Messages are passed as-is from the network to the serial line, the Arduino confirms, and the confirmation byte is sent back to the computer. Is that correct?
What I would do is send the data in one go from the computer to the RPi. You could add a start tag and an end tag, so that the RPi knows when the transfer starts and when it is finished. Due to the nature of TCP, you can be sure that when the RPi has received the end tag, all data has been transferred correctly; there is no need to double-check that. You can store the data in a [text], [list], [table] or whatever seems appropriate. From then on, you can still use the old (current) protocol to transfer the data from the RPi to the Arduino over the serial line.
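A minimal sketch of that framing (Python instead of a Pd patch; the STX/ETX tag bytes and the port number are arbitrary choices):

    import socket

    START, END = b"\x02", b"\x03"   # start/end tags (ASCII STX/ETX)

    # sender (the computer): everything in one go, no per-phrase 'ok'
    def send_all(host, payload, port=9999):
        with socket.create_connection((host, port)) as s:
            s.sendall(START + payload + END)

    # receiver (the RPi): read until the end tag shows up
    def receive_all(port=9999):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        buf = b""
        while END not in buf:
            chunk = conn.recv(4096)
            if not chunk:       # connection closed early
                break
            buf += chunk
        conn.close()
        srv.close()
        # TCP guarantees order and completeness, so once END has
        # arrived, everything between the tags is intact
        return buf[buf.index(START) + 1 : buf.index(END)]

In Pd terms: collect everything into a [text] (or [list]) on the RPi, and only start talking to the Arduino once the end tag has arrived.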
The reliability of this approach will be the same, but it will be _much_ faster.
Roman