On 2020-09-25 00:30, Benjamin ~ b01 wrote:
hi,
I'm looking for a fast way to convert two 8 bit values into one 16 bit value in big lists. A device continuously sends packets of 16 000 bytes through the network to [udpreceive]. At the moment, to reconstitute a 16 bit value from two bytes, I'm using a [list-drip] and a counter to discriminate the MSB and LSB and do the *256 and + operation. The aim is to feed a table with the result (see attached) and produce sound from it. It works with packets of 2 000 bytes, but Pd freezes with bigger packets. I'm wondering if there is a better way to achieve this?
first things first: what is [invert]? i guess it's the same as [== 0], but with the bonus of requiring an external library. also, if you modulo-write the data into a table, then you could speed things up by just taking the last N elements (but the modulo-write might have just been for testing).
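to illustrate what i mean by that (a rough python analogy only; TABLE_SIZE and trim_packet are made-up names, not anything from your patch): if the table only ever holds the newest N values, then anything before the last N elements of a packet gets overwritten within that same packet anyway, so it could be dropped up front.

```python
# rough sketch of the "just take the last N elements" idea.
# TABLE_SIZE is a placeholder; in the patch it would be the actual table length.
TABLE_SIZE = 1000  # table length, in 16bit samples

def trim_packet(packet_bytes):
    """keep only the bytes that can still survive a modulo-write into the table."""
    # each 16bit sample is built from 2 incoming bytes, so only the last
    # 2*TABLE_SIZE bytes of a packet can remain in the table; everything
    # earlier would be overwritten before the packet is fully processed.
    return packet_bytes[-2 * TABLE_SIZE:]
```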
anyhow:
[list-drip] is a vanilla re-implementation of zexy's [repack], but a less powerful one. especially, [repack] allows you to repack into lists (of e.g. 2 elements) rather than into single atoms, which makes the task at hand much easier to solve.
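just to spell out what "repack into pairs and convert" amounts to, here is a minimal python sketch of the per-pair math (bytes_to_16bit is a made-up name; this is only a reference for the logic, not how you'd build it in a patch):

```python
def bytes_to_16bit(packet_bytes):
    """combine consecutive (MSB, LSB) byte pairs into 16bit values: MSB*256 + LSB."""
    values = []
    # walk the flat byte list two elements at a time -- exactly what
    # repacking into lists of 2 gives you for free
    for i in range(0, len(packet_bytes) - 1, 2):
        msb, lsb = packet_bytes[i], packet_bytes[i + 1]
        values.append(msb * 256 + lsb)
    return values

# example: 4 bytes -> 2 values
print(bytes_to_16bit([1, 0, 0, 255]))  # -> [256, 255]
```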
however, [repack] was written about 20 years ago, and [list-drip] was implemented about 11 years ago. in the meantime, vanilla has gained [list store], which makes a vanilla re-implementation of [repack] much faster (still not as fast as the compiled [repack], but about as fast as it can possibly get if the iteration logic lives in the patch).
anyhow, i did short implementations of various algorithms, in order to benchmark and compare them:
1. using [repack] to repartition the incoming list into convenient packets, converting it to 16bit, using [repack] again to assemble a long list (half the length of the incoming list) and then using [array set] to store the data in a table (requires "zexy")
2. using [repack] to repartition the incoming list into convenient packets, converting it to 16bit, then using [tabwrite] to directly write each new element into a table (requires "zexy")
3. using [list store] to repartition the incoming list into convenient packets, converting it to 16bit, using [list store] again to assemble a long list (half the length of the incoming list) and then using [array set] to store the data in a table (vanilla solution; basically #1, replacing [repack] with [list store] and a counter)
4. using [list store] to repartition the incoming list into convenient packets, converting it to 16bit, then using [tabwrite] to directly write each new element into a table (vanilla solution; basically #2, replacing [repack] with [list store] and a counter)
5. using your implementation
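to make the difference between the two write strategies a bit more concrete, here is a rough python analogy (write_bulk/write_per_element are made-up names and the table is just a python list; the point is only "convert everything, then store once" vs. "write each converted value separately"):

```python
table = [0] * 100000  # stand-in for the Pd array

def write_bulk(table, packet_bytes):
    """like #1/#3: build the whole converted list first, then store it in one go."""
    converted = [packet_bytes[i] * 256 + packet_bytes[i + 1]
                 for i in range(0, len(packet_bytes) - 1, 2)]
    table[:len(converted)] = converted  # a single [array set]-like operation

def write_per_element(table, packet_bytes):
    """like #2/#4/#5: convert and write one value at a time."""
    for i in range(0, len(packet_bytes) - 1, 2):
        table[i // 2] = packet_bytes[i] * 256 + packet_bytes[i + 1]  # one [tabwrite] each
```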
doing quick benchmarks gives the following results:
| implementation    | time (length=100000) |
|--------------------|----------------------|
| 1 (repack/array)   | 1.8ms                |
| 2 (repack/tab)     | 9.9ms                |
| 3 (list/array)     | 8.6ms                |
| 4 (list/tab)       | 9.8ms                |
| 5 (list-drip/tab)  | 29.8ms               |
all implementations show linear complexity.
there are two interesting observations.
the first and obvious one is that [repack] is an order of magnitude faster than [list-drip]. also, all the other implementations are *much* faster than your [list-drip] implementation.
however, the second, more subtle observation is probably more important: your slow implementation takes only about 30ms for a list of 100000 elements. given that your lists are much shorter (you said: up to 16000 elements), processing such a list should only take about 10ms (on my computer). blocking the CPU for 10ms (or even 30ms) will require you to raise Pd's buffer size a bit, but at no point did i experience anything like *freezing* Pd.
afaict, you should do more benchmarking to find out where the actual bottleneck(s) in your patch is (resp: are). it might *not* be the conversion from bytes to short integers.
gsmt IOhannes