Hans-Christoph Steiner wrote:
rest. But I would like to know if the OS generally buffers USB data. If the OS doesn't, I'll bet the hardware does somewhere.
i wouldn't bet. the OS _might_ buffer, the hardware _might_ buffer, and pd _might_ block and have audio-dropouts.
second, what about latency?
always does the data getting. That solves that problem. As for latency, there would be no difference whether a Pd object polls a thread or polls a file in /dev/; either way, the object would still be polling at the same rate. Adding a thread would just put extra code between the Pd object and the /dev/ file.
obviously putting things in a thread is extra code. however, polling a thread means reading data from a shared memory segment, while polling the /dev/ file means a system call which might block.
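a rough sketch of what i mean (this is not the actual [hid] code; the device path "/dev/hidraw0", the 64-byte buffer and all the names are just placeholders): the worker thread may block in read() as long as it likes, while the pd-side poll only copies from shared memory and therefore can never block.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

typedef struct _shared {
    pthread_mutex_t lock;
    unsigned char   report[64];  /* last report read from the device */
    int             len;         /* number of valid bytes, 0 if none yet */
} t_shared;

static t_shared g_shared = { PTHREAD_MUTEX_INITIALIZER, {0}, 0 };

/* worker thread: may block in read() without stealing CPU time from anybody */
static void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    unsigned char buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));  /* blocking is fine here */
        if (n <= 0) break;
        pthread_mutex_lock(&g_shared.lock);
        memcpy(g_shared.report, buf, n);
        g_shared.len = (int)n;
        pthread_mutex_unlock(&g_shared.lock);
    }
    return NULL;
}

/* called from the polling side (e.g. a pd clock callback):
   just a memcpy under a mutex, no system call, no blocking */
static int poll_latest(unsigned char *out, int outsize)
{
    int n;
    pthread_mutex_lock(&g_shared.lock);
    n = g_shared.len < outsize ? g_shared.len : outsize;
    memcpy(out, g_shared.report, n);
    pthread_mutex_unlock(&g_shared.lock);
    return n;
}

int main(void)
{
    static int fd;
    pthread_t tid;
    unsigned char buf[64];

    fd = open("/dev/hidraw0", O_RDONLY);  /* placeholder device path */
    if (fd < 0) { perror("open"); return 1; }
    pthread_create(&tid, NULL, reader_thread, &fd);

    for (;;) {
        int n = poll_latest(buf, sizeof(buf));
        printf("latest report: %d bytes\n", n);
        usleep(10000);  /* stand-in for a 10 ms poll tick */
    }
}

the point is that the blocking system call lives entirely inside the worker thread; the polling side never touches the kernel.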
You add a lot of complexity and more running code for no real gain. If you change the data in the middle of a poll interval, I think you'll be asking for trouble. The problem here is that I want multiple instances of an object to be able to output data from the same device. The data
i think this was the problem ck was addressing.
coming out of each instance should be exactly the same, or the chance of strange, hard-to-find bugs will be high. If a thread updates the data in between poll intervals, then different instances may output different data within the same cycle.
i cannot follow you here. when querying data from an external device, that data can change at any instant. if i want to use the same data twice, then i should use the _same data_ twice and not query the external device twice and hope that the data has not changed. (i mean: use [hid]->[t a a] and not [t b b]=>[hid]->[hid])
apart from that, i think ck's master/slave concept also took care of data synchronicity.
Polling latency is not an issue except in rare, customized situations. The fastest that Windows and Mac OS X can output HID data is once every 10 ms. The Linux kernel can do once every 1 ms if you customize things, but it's generally 10 ms also. These times probably all apply to generic USB event data too.
there are 2 kinds of latency we have to think of:

1. the latency between an event appearing at the sensors of the external device and the corresponding data being output within pd.

2. pd is (among other things) used for audio processing, therefore audio latency is a big issue. so if you rely on the OS buffering the data (which imho is a not-so-clever thing to do, especially if you support OSs which are proprietary, where you have no way of knowing beforehand which way they'll go) and it does not, then you might block the entire pd thread for 10 ms (on w32), just waiting on the hid data. this basically means goodbye to all uses of pd which require low latencies. (even blocking the system for 1 ms is unacceptable; btw, we are currently evaluating linux without hid support since it is known to block the kernel for up to 0.5 ms every now and then)
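fwiw, even without a thread the read itself does not have to block the pd thread: open the device non-blocking and check the descriptor with poll() and a zero timeout before reading. just a sketch, and "/dev/hidraw0" is only a placeholder path:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/hidraw0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int ready = poll(&pfd, 1, 0);  /* timeout 0: return immediately */
        if (ready > 0 && (pfd.revents & POLLIN)) {
            unsigned char buf[64];
            ssize_t n = read(fd, buf, sizeof(buf));  /* won't block: O_NONBLOCK */
            if (n > 0)
                printf("got %zd bytes\n", n);
        }
        /* nothing ready: go do the audio/control work and come back
           on the next poll tick instead of waiting here */
        usleep(1000);
    }
}

this does not answer the buffering question, but at least the pd thread never waits on the device.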
Plus Pd has a built-in scheduler, and we are writing Pd objects, so we should use the Pd scheduler, instead of an external one (i.e. threads).
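To illustrate (just a rough sketch, not the actual [hid] code; the object name "hidpoll" and read_device() are placeholders for whatever actually fetches the data), an external that polls from the Pd scheduler using a clock would look something like this:

#include "m_pd.h"

static t_class *hidpoll_class;

typedef struct _hidpoll {
    t_object  x_obj;
    t_clock  *x_clock;
    double    x_interval;   /* poll interval in ms */
    t_outlet *x_out;
} t_hidpoll;

static t_float read_device(void)
{
    /* placeholder: return whatever the device currently reports */
    return 0;
}

/* clock callback: runs inside the Pd scheduler, no extra thread */
static void hidpoll_tick(t_hidpoll *x)
{
    outlet_float(x->x_out, read_device());
    clock_delay(x->x_clock, x->x_interval);   /* reschedule ourselves */
}

static void *hidpoll_new(t_floatarg interval)
{
    t_hidpoll *x = (t_hidpoll *)pd_new(hidpoll_class);
    x->x_interval = interval > 0 ? interval : 10;   /* default: 10 ms */
    x->x_clock = clock_new(x, (t_method)hidpoll_tick);
    x->x_out = outlet_new(&x->x_obj, &s_float);
    clock_delay(x->x_clock, x->x_interval);
    return x;
}

static void hidpoll_free(t_hidpoll *x)
{
    clock_free(x->x_clock);
}

void hidpoll_setup(void)
{
    hidpoll_class = class_new(gensym("hidpoll"),
        (t_newmethod)hidpoll_new, (t_method)hidpoll_free,
        sizeof(t_hidpoll), CLASS_DEFAULT, A_DEFFLOAT, 0);
}

Each tick outputs the current value and reschedules itself, so the poll rate stays entirely under the control of the Pd scheduler.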
what makes you think that? your OS has a built-in scheduler, so why not use that? i thought we were talking about data acquisition and not data processing.
The more threads we add to Pd, the more CPU time we take away from the Pd scheduler. A couple of threads probably won't matter, but if we start using a lot of threads, it will matter.
the thing with threads is that they don't necessarily use CPU. if a thread is blocking for 5 seconds, then any other process (including pd!) can use the CPU during those 5 seconds without worrying about the thread; if pd itself is blocking for 5 seconds, then everybody will be switching to max/msp or reaktor or even worse.
so the only way to get low-latency pd AND "hid" would be to run 2 instances of pd communicating via netsend or similar. these 2 instances will run in separate threads (sic!), but those threads will be far heavier than the tiny read-libusb thread in your object. the communication between the pd instances will add processing overhead (and latency, btw) too. and of course handling several instances of pd can be painful - even though the problem would be solved in pd-space ;-)
and now i cannot remember why i have foam around my mouth...
mfg.ads.r IOhannes