moin moin,
On 2009-01-16 15:56:22, Mathieu Bouchard <matju@artengine.ca> appears to have written:
> you have to use Pd's lists, and then it's 64 or 128 bits per char.
> And then, in theory, Pd could adopt any internal rep, as long as file I/O and socket I/O is done the way it needs to be done.
... which (if I understand correctly) pushes the whole encoding mess onto the I/O layer, which I believe (based on many, many past headaches trying to get the PerlIO layer's encoding support to work transparently on in-memory strings) is The Wrong Way To Do It (TM). It's precisely the I/O layer that is "low-level" in my sense, which means it ought to deal in bytes only. Encoding-dependent "character" units are higher-level and ought to be independent of that layer.
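
(Concretely, something like this untested sketch, assuming the one-float-atom-per-byte list representation mentioned above; bytes_to_list() and emit_bytes() are made-up names, the rest is stock m_pd.h:)

#include "m_pd.h"

/* flatten a raw byte buffer into a Pd list, one float atom per byte;
 * no character-set interpretation happens here -- that stays at a
 * higher level */
static void bytes_to_list(const unsigned char *buf, int n, t_atom *out)
{
    int i;
    for (i = 0; i < n; i++)
        SETFLOAT(out + i, (t_float)buf[i]);
}

/* typical use inside an external: take some bytes, emit them as a list */
static void emit_bytes(t_outlet *outlet, const unsigned char *buf, int n)
{
    t_atom *argv = (t_atom *)getbytes(n * sizeof(t_atom));
    bytes_to_list(buf, n, argv);
    outlet_list(outlet, &s_list, n, argv);
    freebytes(argv, n * sizeof(t_atom));
}

Each t_atom is 8 or 16 bytes depending on the platform, which is where the "64 or 128 bits per char" comes from.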
>> ... except if you're building or reading a persistent index for a large file, in which case tell() & seek() are likely to be a wee bit faster than parsing and counting variable-length-encoded characters ...
> right.
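
(For concreteness, a throwaway sketch of that point -- plain stdio, nothing Pd-specific, and utf8_char_offset() is just a name I made up:)

#include <stdio.h>

/* with a byte-offset index: record ftell(f) once per entry when
 * building, and later just fseek(f, offset, SEEK_SET) -- O(1) */

/* without one: finding the byte offset of the n-th UTF-8 character
 * (0-based) means scanning and counting lead bytes -- O(file size) */
static long utf8_char_offset(FILE *f, long n)
{
    long seen = -1, off = 0;
    int c;
    rewind(f);
    while ((c = getc(f)) != EOF) {
        if ((c & 0xC0) != 0x80 && ++seen == n)  /* lead byte starts a char */
            return off;
        off++;
    }
    return -1;                                  /* fewer than n+1 chars */
}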
>> ... or calling malloc(), or doing pretty much any other low-level fiddly stuff ...
> It doesn't matter much, as Pd patches wouldn't be doing malloc(). Furthermore, I expect that you have, or would have, a function for converting a list to a C string in the proper encoding, partly so that externs that want to use your strings don't have to write for(i=0;...) a[i]=b[i] all the time, but also because it's a good opportunity for introducing optional encoding conversion.
Leveraging vanilla Pd means that I can't (easily) export any functions, since each external is supposed to be self-contained. Of course, it's easy to write such functions and offer them as "copy-in" replacements, or define function-body macros, etc. etc. ... To date, there have been no requests for such an API, and potential users have to write their own for-loops...
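
(For the record, such a "copy-in" helper might look roughly like this -- untested, list_to_cstring() is an invented name, one byte value per float atom is again an assumption, and an iconv-style conversion could be dropped in where the raw copy happens:)

#include "m_pd.h"
#include <stddef.h>

/* flatten a Pd list of float atoms into a NUL-terminated C string,
 * truncating each float to a single byte; caller frees the result
 * with freebytes(buf, argc + 1) */
static char *list_to_cstring(int argc, t_atom *argv)
{
    char *buf = (char *)getbytes(argc + 1);
    int i;
    if (!buf) return NULL;
    for (i = 0; i < argc; i++)
        buf[i] = (char)atom_getfloat(argv + i);  /* optional encoding conversion could go here */
    buf[argc] = '\0';
    return buf;
}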
marmosets, Bryan