> (so the order of data arrival is not guaranteed).
Well, this is a design feature of UDP: there is no guarantee that packets will be received at all, or in what order. If you use UDP, you MUST write your program so that it is resilient to data loss and out-of-order delivery. If you don't, you may run into problems further down the line, when network conditions are less than ideal.
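For example (a generic sketch, not something that any of the objects discussed below actually implements): prepend a sequence number to each packet and have the receiver drop duplicates and late arrivals:

    /* Receiver side: packets carry a 32-bit sequence number in front
     * of the payload (the sender starts counting at 1). */
    #include <stdint.h>
    #include <string.h>

    static uint32_t last_seq = 0;

    /* Returns 1 and sets *payload / *payload_len if the packet is
     * fresh, 0 if it is a duplicate or arrived out of order. */
    int accept_packet(const char *buf, int len,
                      const char **payload, int *payload_len)
    {
        uint32_t seq;
        if (len < (int)sizeof(seq)) return 0;
        memcpy(&seq, buf, sizeof(seq));
        /* signed difference, so the counter may wrap around */
        if ((int32_t)(seq - last_seq) <= 0) return 0;
        last_seq = seq;
        *payload = buf + sizeof(seq);
        *payload_len = len - (int)sizeof(seq);
        return 1;
    }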
That said, it seems that [mrpeach/net] is designed in such a way that out-of-order delivery occurs more often than with other implementations. Good for testing the resiliency of your system, at least.
My brief experience with net objects (I could be wrong, but this is what I remember):
[netreceive] / [netsend]: they do the socket I/O in the audio thread. This gives the following deterministic behaviour: the message is written to the socket before the audio callback is performed. What it does not give is any deterministic guarantee about when the packet actually leaves your network interface or gets delivered. Given the latter, I am not sure why the former matters.
[iemnet/udpsend] / [iemnet/udpreceive]: threaded; for [udpsend], the packets are stored in memory from the audio thread, and a worker thread reads them and writes them to the socket. The issue is that the audio thread uses malloc() to stash the values in memory, so it may occasionally hang while waiting for the kernel to provide more memory (see the sketch after this list).
[mrpeach/net]: I have not looked at the code myself, but it has been mentioned here that it uses multiple worker threads. If the threads are created within the audio thread (as opposed to using a fixed pool of workers that get "activated" from the audio thread), then this too will occasionally hang while waiting on the kernel (again, see the sketch below).
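To make the last two points concrete, here is a hypothetical sketch (the function names are made up, and this is not the actual code of either library); both marked calls can block in the kernel, which is why they do not belong in the audio thread:

    #include <stdlib.h>
    #include <string.h>
    #include <pthread.h>

    static void *sender_thread(void *arg)
    {
        /* ... write the message to the socket, then release it ... */
        free(arg);
        return NULL;
    }

    void audio_thread_send(const char *msg, size_t len)
    {
        /* iemnet-style stash: malloc() may wait for the kernel
         * to provide more memory */
        char *copy = malloc(len);
        if (!copy) return;
        memcpy(copy, msg, len);

        /* mrpeach-style (if threads are really created here):
         * pthread_create() is a kernel call and may block too */
        pthread_t t;
        pthread_create(&t, NULL, sender_thread, copy);
        pthread_detach(t);
    }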
None of the approaches above is workable on the platform I am working on (Bela): running under Xenomai, the usual constraints that apply to audio programming (no I/O, no allocation, no thread creation in the audio thread) are even stricter (i.e. you REALLY need to follow these principles).
My tentative approach was to turn [netreceive] into a threaded object, using a lock-free queue between the threads (the one provided by libpd) and using ifdefs to reuse most of the existing vanilla code.
I am not quite happy with it yet: the code looks like a mess with all the ifdefs, and [netsend] is not working at the moment, but [netreceive] can now be used safely. I guess it would be better to package it as an external and remove the ifdefs. The general pattern is sketched below.
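Roughly, it looks like the following (a minimal sketch assuming libpd's util/ringbuffer API; the surrounding function names are hypothetical, and this is not the actual patch):

    #include <string.h>
    #include <sys/socket.h>
    #include "ringbuffer.h"  /* libpd_wrapper/util/ringbuffer.h */

    static ring_buffer *rb;

    void setup(void)
    {
        /* allocated once, outside the audio thread */
        rb = rb_create(65536);
    }

    /* audio thread: no syscalls, just a memcpy into the queue */
    void audio_thread_enqueue(const char *msg, int len)
    {
        if (rb_available_to_write(rb) >= (int)sizeof(len) + len)
            rb_write_to_buffer(rb, 2,
                               (const char *)&len, (int)sizeof(len),
                               msg, len);
        /* else: drop the message; never wait in the audio thread */
    }

    /* worker thread: drain the queue and do the actual socket I/O */
    void worker_drain(int sockfd)
    {
        char buf[65536];
        int len;
        /* header and payload are committed together by the single
         * writer, so if the header is readable the payload is too */
        while (rb_available_to_read(rb) >= (int)sizeof(len)) {
            rb_read_from_buffer(rb, (char *)&len, sizeof(len));
            rb_read_from_buffer(rb, buf, len);
            send(sockfd, buf, len, 0);
        }
    }

The important design choice is that when the queue is full the audio thread drops the message instead of waiting: everything that can block stays in the worker thread.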
Giulio
From: Roman Haefeli <reduzent@gmail.com>
To: pd-list@lists.iem.at
Sent: Wednesday, 22 February 2017, 15:19
Subject: Re: [PD] netsend/netreceive questions ...
On Wed, 2017-02-22 at 15:41 +0100, IOhannes m zmoelnig wrote:
> mrpeach/net should block less than the built-in object, but in theory
> it might still block when spinning up too many threads.
> also mrpeach/net is prone to race conditions, where one sending
> thread can overtake another sending thread (so the order of data
> arrival is not guaranteed). obviously mrpeach/net doesn't always
> exhibit that problem (else nobody would use it), but iirc i was able
> to trigger that behaviour in a lab situation.
netpd - as an example of a non-lab situation - does trigger such
problems with mrpeach/net. Last time I checked, it presented incoming
data as lists, which suggests that it uses some auto-magic internal
delimiting function; it does not, it relies on pure chance. It's a
misconception that the author refuses to address.
As far as I can tell, mrpeach/net suffers from issues that iemnet does
not. I don't see any advantage in using mrpeach/net besides the fact
that Pd-l2ork / Purr Data - due to their Pd-extended heritage - ship
with mrpeach and not with iemnet.
Roman