hey'aw.
Can anyone give me a hand with handling Japanese character encodings in PD?
I feel like I've read every document on character encodings and still don't understand the mess... even for my own programs that work with text.
Is there a way to handle UTF-8 in PD?
thanks -august.
august wrote:
Can anyone give me a hand with handling Japanese character encodings in PD? [snip] Is there a way to handle UTF-8 in PD?
hmm, depends on what you mean by "handling characters".
at least in Gem, you should be able to display unicode characters, by using the [string( message (in combination with [text3d]). the arguments are numbers which enumerate the glyphs in your font (and with unicode fonts these should map to unicode characters :)) -- report a bug if it does not.
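for example, something like this ought to draw katakana (a sketch; assumes a unicode font is loaded, and that 12471 and 12517 are the decimal codepoints U+30B7 "シ" and U+30E5 "ュ"):

  [string 12471 12517(
  |
  [text3d]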
however, you might be talking of something totally different...
fgamdsr IOhannes
at least in Gem, you should be able to display unicode characters, by using the [string( message (in combination with [text3d]). [snip]
that is one part I was looking for. thank you.
are there also objects for handling conversions between character encodings? Or, an object to convert between UTF-8 or UCS-2 and the unicode character codes that Gem takes?
Is there a default character encoding for PD messages? I assume it is Latin-1, because I have seen umlauts in comments before (I think). It doesn't look like I can make comments in UTF-8 encoded chars.
I have my char problems solved right now, but as I discover more about the difficulties of character encodings and the treachery that ASCII has caused... I am just curious.
On Feb 10, 2009, at 3:14 PM, august wrote:
Is there a default character encoding for PD messages? [snip]
It's a weird bastard mix currently of Latin-1 and UTF-8. The Tk GUI can handle UTF-8 and uses UTF-8 natively. The C side is basically Latin-1 but doesn't really check.
This is something that I would really like to have working properly in Pd-devel. Tcl/Tk is natively UTF-8, so it seems that we should support UTF-8 in Pd. Anyone feel like trying to fix it? I don't understand encodings so well.
.hc
There is no way to peace, peace is the way. -A.J. Muste
morning all,
On 2009-02-11 03:04:34, Hans-Christoph Steiner hans@eds.org appears to have written:
are there also objects for handling conversions between character encodings? Or, an object to convert between utf8 or UCS-2 and the unicode char code numbers that GEM takes?
Well, there are [bytes2wchars] and [wchars2bytes] in the newest [pdstring] library, which convert between multibyte encodings such as utf8 and your C library's wchar_t, which if I'm not entirely mistaken is a system-dependent encoding, but at least here (linux, glibc), it looks a heckuva lot like UCS-4.
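for the curious, the underlying C conversion is basically mbstowcs() -- a minimal sketch I just wrote for this mail, not the actual [pdstring] code; it assumes a UTF-8 locale:

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>    /* wchar_t, mbstowcs() */

int main(void)
{
    setlocale(LC_CTYPE, "en_US.UTF-8");     /* multibyte decoding follows LC_CTYPE */
    const char *bytes = "\xcf\x80\xce\xb4"; /* "πδ" as raw UTF-8 bytes */
    wchar_t wide[8];
    size_t n = mbstowcs(wide, bytes, 8);    /* multibyte -> wchar_t */
    if (n == (size_t)-1) return 1;          /* invalid sequence for this locale */
    for (size_t i = 0; i < n; i++)
        printf("U+%04lX\n", (unsigned long)wide[i]); /* U+03C0, U+03B4 on glibc */
    return 0;
}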
Is there a default character encoding for PD messages? I assume it is LATIN1 because I have seen umlauts in comments before(I think). It doesn't look like I can make comments in UTF8 encoded chars.
I have my char problems solved right now, but now as I discover more about the difficulties of character encodings and the treachery that ASCII has caused....I am just curious.
Its a weird bastard mix currrently of Latin1 and UTF-8. The Tk GUI can handle UTF-8 and uses UTF-8 natively. The C side is basically Latin1 but doesn't really check:
Out of curiosity, I just checked with a variant of 'unibarf.pd' (attached as "barf-both.pd"), and for me, pd *does* display utf-8 strings correctly in message boxes (tested with umlauts äöü, as well as Greek πδ -- other characters can be tested with the [pdstring] help patches). Surprisingly (to me), I don't have to do anything special to get UTF-8 characters displayed correctly, but setting LC_CTYPE=en_US.UTF-8 causes a latin-1 message to be displayed improperly (characters disappear, but are still passed and present in raw byte form).
Setting LC_CTYPE=en_US.UTF-8 and re-loading "unibarf.pd" got me an odd error message from Pd though:
Pd: buffer space wasn't sufficient for long GUI string (repeated 3 times)
... this appears on stderr, rather than the console. I get the same message once for "barf-both.pd"; assumedly due to mis-parsing of the latin-1 message box(es).
This is something that I would really like to have working properly in Pd-devel. Tcl/Tk is natively UTF-8, so it seems that we should support UTF-8 in Pd. Anyone feel like trying to fix it? I don't understand encodings so well.
I don't know for sure, but I suspect one problem might be in the interpretation of user input -- I use latin-1 myself, so I can't judge whether the Tk GUI accepts UTF-8 input or not (I use [pdstring] or just hack the .pd file for my tests). If we want to be paranoid about things, we're likely to run into problems with symbols too; symbol identity (hash value and raw byte string) can change depending on whether the C internals use UTF-8 strings or not: this depends not only on what they get from the GUI, but also on how file data is interpreted, netsend/netreceive, etc etc... (mostly t_binbuf, I guess). UTF-8 should be largely safe for pd symbols, although I'm not sure whether backslash or brackets can appear as shift bytes for any characters: that could certainly cause problems.
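To make the symbol-identity point concrete, here's a hypothetical fragment for a C external (gensym() hashes the raw byte string it is given):

#include "m_pd.h"

static void encoding_demo(void)
{
    /* "ä" as latin-1 (one byte) vs. utf-8 (two bytes): different raw
       byte strings, hence two distinct symbols, although both render
       as the same character */
    t_symbol *s_latin1 = gensym("\xe4");
    t_symbol *s_utf8   = gensym("\xc3\xa4");
    post("same symbol? %d", s_latin1 == s_utf8); /* prints 0 */
}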
As an experiment, you could try calling the following on Pd startup:
#include <locale.h>
setlocale(LC_ALL,"");       /*-- set locale from environment --*/
setlocale(LC_NUMERIC,"C");  /*-- ... but leave floats alone! --*/
... and see what breaks (or doesn't) ;-) Alternatively, you can achieve pretty much the same effect with the "locale" external in userspace (see attached "uselocale.pd"). Of course, to test UTF-8 you should have your environment variables set accordingly (in particular LC_CTYPE, potentially via LANG):

bash$ export LC_CTYPE=en_DK.UTF-8
bash$ pd uselocale.pd barf-both.pd      ##-- latin-1 displays incorrectly

bash$ export LC_CTYPE=en_DK.ISO-8859-1
bash$ pd uselocale.pd barf-both.pd      ##-- all displays ok
If it turns out to work well, we can of course make a trivial "dummy" external out of it for use with "-lib" ...
marmosets, Bryan
On Feb 11, 2009, at 6:34 AM, Bryan Jurish wrote:
Out of curiosity, I just checked with a variant of 'unibarf.pd' (attached as "barf-both.pd"), and for me, pd *does* display utf-8 strings correctly in message boxes. [snip]
Hmm, I am not sure that UTF-8 really is well supported. Some chars get thru, but many don't. Here's an example: I typed these chars in a UTF-8 text editor (attached as a PNG and a pd patch). Not quite the same.
Setting LC_CTYPE=en_US.UTF-8 and re-loading "unibarf.pd" got me an odd error message from Pd though:
Pd: buffer space wasn't sufficient for long GUI string (repeated 3 times)
... this appears on stderr, rather than the console. I get the same message once for "barf-both.pd"; assumedly due to mis-parsing of the latin-1 message box(es).
I am guessing that the above error comes from the fact that Pd is written for latin1, where every char is always 1 byte, so sending UTF-8 could confuse things, since UTF-8 can have multi-byte chars.
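The byte/char mismatch is easy to demonstrate in plain C (a made-up sketch, not Pd code):

#include <stdio.h>
#include <string.h>

/* count UTF-8 characters by skipping continuation bytes (10xxxxxx) */
static size_t u8_charlen(const char *s)
{
    size_t n = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            n++;
    return n;
}

int main(void)
{
    const char *s = "\xc3\xa4\xc3\xb6\xc3\xbc"; /* "äöü" in UTF-8 */
    printf("bytes=%zu chars=%zu\n", strlen(s), u8_charlen(s)); /* 6 vs. 3 */
    return 0;
}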
I don't know for sure, but I suspect one problem might be in the interpretation of user input -- I use latin-1 myself, so I can't judge whether the Tk GUI accepts UTF-8 input or not. [snip]
I don't know about the pd side, but Tcl/Tk is all UTF-8 natively, so that is no problem.
As an experiment, you could try calling setlocale() on Pd startup... [snip] If it turns out to work well, we can of course make a trivial "dummy" external out of it for use with "-lib" ...
Hmm, I tried this on Mac OS X and it didn't seem to make a difference. Perhaps it's a platform issue, though on this level Mac OS X is very much BSD, so I think it should work.
.hc
News is what people want to keep hidden and everything else is publicity. - Bill Moyers
moin Hans, moin all,
On 2009-02-12 06:24:44, Hans-Christoph Steiner hans@eds.org appears to have written:
On Feb 11, 2009, at 6:34 AM, Bryan Jurish wrote:
for me, pd *does* display utf-8 strings correctly in message boxes (tested with umlauts äöü, as well as Greek πδ
Hmm, I am not sure that UTF-8 really is well supported. Some chars get thru, but many don't. Here's an example. I typed these chars in a UTF-8 text editor as an png and a pd patch. Not quite the same.
... I'm not really sure what (if anything) we can conclude from this. Maybe the text editor is making UTF-8 out of the keyboard input? The Pd patch itself is most certainly not UTF-8 encoded, which makes me suspect that either (a) Pd is dropping non-printing shift bytes (IOhannes has pointed out similar goofiness in t_binbuf, but I thought it was restricted to NUL bytes) or (b) Tk isn't receiving UTF-8 character codes at all (whether this is Tk's fault or a system configuration issue is another question). At least the latter should be testable with a few quick wish hacks, as sketched below...
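One such quick wish hack might be (an untested sketch; run it under wish and type into the window):

# print what Tk actually sees for each keypress
bind . <KeyPress> {puts "keycode=%k keysym=%K unicode={%A}"}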
Setting LC_CTYPE=en_US.UTF-8 and re-loading "unibarf.pd" got me an odd error message from Pd though:
Pd: buffer space wasn't sufficient for long GUI string (repeated 3 times)
I am guessing that the above error comes from the fact that Pd is written for latin1 where every char is always 1 byte, so sending UTF-8 could confuse things, since UTF-8 can have multi-byte chars.
Kinda; but why is it only the presence of *latin-1* message boxes that causes complaints about "long GUI strings"? (Try deleting the utf-8 message box & reloading: the error disappears.) I think an error is certainly justified in this case (we're feeding a latin-1 encoded message box to a Pd using a UTF-8 locale); I was just surprised by the form the error took ;-)
I don't know for sure, but I suspect one problem might be in the interpretation of user input
I don't know about the pd side, but Tcl/Tk is all UTF-8 natively, so that is no problem.
Hmm... not sure what you mean by "natively" here... I mean, Perl uses UTF-8 as its "native" string encoding, but you can still manipulate byte strings, read & write files etc. in other encodings too. If we're talking about user input and the Pd GUI, I think the main issue is how keyboard input is captured by Tk and passed on to Pd. If the keyboard input is being grabbed by Tk bind()ing KeyPress events, then maybe we just need to edit that bind() call... looks like the KeyPress-relevant "%"-substitutions are (from the Tk bind() manpage):
%k - The keycode field from the event. Valid only for KeyPress and KeyRelease events.
%A - Substitutes the UNICODE character corresponding to the event, or the empty string if the event does not correspond to a UNICODE character (e.g. the shift key was pressed). XmbLookupString (or XLookupString when input method support is turned off) does all the work of translating from the event to a UNICODE character. Valid only for KeyPress and KeyRelease events.
%K - The keysym corresponding to the event, substituted as a textual string. Valid only for KeyPress and KeyRelease events.
%N - The keysym corresponding to the event, substituted as a decimal number. Valid only for KeyPress and KeyRelease events.
... so if we're lucky, we can just replace "%k" with "%A" and all will be good... except for file I/O, which will likely still be done at a raw byte level. At this point, all "pure" latin-1 patches will proceed to break (maybe just display problems, maybe more serious). If we say we're going whole-hog utf-8, we can say that it's the user's problem to recode any such files (e.g. with iconv or recode, as below; I'm happy to help out with a few scripts); otherwise we might want to do something paranoid and try to guess a patch's encoding when it's loaded. Or we use locale-dependent functions, but that makes sharing patches harder between people using different locales. Or we use the XML-style solution and just save the encoding to use in the patch header ;-)
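Recoding a patch by hand would then be a one-liner, e.g. (hypothetical filenames):

bash$ iconv -f ISO-8859-1 -t UTF-8 patch-latin1.pd > patch-utf8.pd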
bash$ export LC_CTYPE=en_DK.UTF-8
bash$ pd uselocale.pd barf-both.pd      ##-- latin-1 displays incorrectly
bash$ export LC_CTYPE=en_DK.ISO-8859-1
bash$ pd uselocale.pd barf-both.pd      ##-- all displays ok
If it turns out to work well, we can of course make a trivial "dummy" external out of it for use with "-lib" ...
Hmm, I tried this on Mac OS X and it didn't seem to make a difference. Perhaps its a platform issue, though on this level, Mac OS X is very much BSD, so I think it should work.
The locale strategy also depends on what locales your system has installed. Here (linux/debian), I can see which locales are installed with:
bash$ locale -a
... I would expect goofiness trying to use "en_DK.UTF-8" if it's not been installed ...
marmosets, Bryan
On Feb 12, 2009, at 4:40 AM, Bryan Jurish wrote:
moin Hans, moin all,
[snip] The Pd patch itself is most certainly not UTF-8 encoded, which makes me suspect that either (a) Pd is dropping non-printing shift bytes, or (b) Tk isn't receiving UTF-8 character codes at all. [snip]
Pd does seem to measure the bytes of the string, counting the UTF-8 shift bytes as chars. For example, in barf-both.pd the message box of the utf-8 example is much longer than the text inside, while with the latin1 one it is the correct size.
I don't know if you have followed Pd-devel 0.41.4 at all, but I have gotten to the point where the GUI is 100% Tcl/Tk, so playing with this stuff should be a lot easier. Check out the branch if you would like to try things.
Kinda; but why is it only the presence of *latin-1* message boxes that causes complaints about "long GUI strings"? [snip]
I think that Tcl/Tk tries to guess the locale of the data coming in from the network socket, then translate it to UTF-8 and back. Some of the weirdness we are seeing could be related to that. In Pd-devel it's much clearer, so it would be straightforward to play with this encoding translation stuff, and perhaps turn it off. Ideally we could have UTF-8 coming from Pd so that Tk doesn't need to do any translation. That could speed up things like array/graph redrawing.
Hmm... not sure what you mean by "natively" here... [snip]
Yes, same idea. Internally, Tcl/Tk uses UTF-8, but it can freely translate between other encodings.
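The explicit conversions go through Tcl's [encoding] command; a minimal sketch:

# internal string -> latin-1 bytes, and back again
set bytes [encoding convertto iso8859-1 "äöü"]
set str   [encoding convertfrom iso8859-1 $bytes]
puts [encoding system]  ;# name of the encoding assumed for C-level strings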
If we're talking about user input and the Pd GUI, I think the main issue is how keyboard input is captured by Tk and passed on to Pd. [snip] ... so if we're lucky, we can just replace "%k" with "%A" and all will be good... [snip]
Yeah, this would be a good thing to rewrite. The canvas_key code is definitely in need of refactoring anyway. Pd has never really supported latin1 or any encoding besides ASCII, so I think we should just aim to make everything UTF-8, then make conversion utilities like you mentioned.
The locale strategy also depends on what locales your system has installed. [snip] I would expect goofiness trying to use "en_DK.UTF-8" if it's not been installed ...
I was using en_US.UTF-8. It seems to me that there is an extra dash in your locale. On Mac OS X, 'locale -a' tells me: en_US.ISO8859-1. On debian/stable, it tells me en_US.iso88591. Does every system have different names for latin1? Arg... I tried a bunch of variations of the locale and LANG and LC_CTYPE on Mac OS X, but I couldn't get barf-both.pd to look different.
As we enjoy great advantages from inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously. - Benjamin Franklin
Hi all,
I'm trying to set up a computer that I got from someone to work with Pd-extended on linux (I want to work with video: openCV, pdp, etc., and to develop some new openCV stuff with it).
I just installed Ubuntu hardy 8.04.2, 64-bit version, on it, and I hoped to be able to install the version compiled for Ubuntu hardy available on the website puredata.info, but the package installer says:
STATUS: ERROR: WRONG ARCHITECTURE 'I386'
I checked the mailing list archive and saw that there were some messages in 2006 reporting problems on this processor with tabread, tabwrite and others.
So I have three questions:
- Is there a version already compiled for AMD 64, or should I try to compile it myself (and is it possible)?
- If I compile it myself, will there be some externals that may not work?
- Is it maybe a very bad idea to try to work with this AMD 64 computer?
thanks,
On Feb 12, 2009, at 3:15 PM, Loic Kessous wrote:
[snip]
So I have three questions:
- Is there a version already compiled for AMD 64, or should I try to compile it myself (and is it possible)?
Build it yourself; there currently aren't any official 64-bit binaries because there is no 64-bit machine in the build farm. If someone would either donate a 64-bit machine or figure out how to host a 64-bit build on a 32-bit machine, then we could add official 64-bit builds.
- If I compile it myself, will there be some externals that may not work?
Yes, check the archives to see which. Most do work fine though; it's just a few that don't.
- Is it maybe a very bad idea to try to work with this AMD 64 computer?
No, it's a good idea; many people are doing it.
.hc
"Free software means you control what your computer does. Non-free
software means someone else controls that, and to some extent controls
you." - Richard M. Stallman
morning all,
On 2009-02-12 20:22:22, Hans-Christoph Steiner hans@eds.org appears to have written:
[snip]
Pd does seem to measure the bytes of the string, measuring the UTF-8 shift bytes as chars. For exmaple, in barf-both.pd, the message box of the utf-8 example is much longer than the text inside, while with the latin1, it is the correct size.
yup.
I don't know if you have followed Pd-devel 0.41.4 at all, but I have gotten to the point where the GUI is 100% Tcl/Tk so playing with this stuff should be a lot easier. Check out the branch, if you would like to try things.
soon.
[snip]
I think that Tcl/Tk tries to guess the locale of the data coming in from the network socket, then translate it to UTF-8 and back. Some of the weirdness we are seeing could be related to that. In Pd-devel, its much clearer, so it would be straightforward to play with this encoding translation stuff, and perhaps turn it off. Ideally we could have UTF-8 coming from Pd so that Tk doesn't need to do any translation. That could speed up things like array/graph redrawing.
Are we certain that Tk is actually translating at all, and not just using some 8-bit default like latin-1 when it finds non-UTF-8 input? I ask because that's what Perl does by default, a behavior which continues to give me headaches. In Perl, each string has its own internal "utf8" flag which tells you whether Perl is currently thinking of that string as a raw byte-string in some unknown encoding or as a "native" (utf8) character string... I assume Tcl/Tk does something similar, but don't know how to test for this property there.
[snip]
Yes, same idea. Internally, Tcl/Tk is using UTF-8, but it can freely translate between other encodings.
see above.
If we're talking about user input and the Pd GUI, I think the main issue is how keyboard input is captured by Tk and passed on to Pd. [snip]
... I'm curious enough to try these out now... just have to dust off my long unused Tcl/Tk skills a bit ;-)
... so if we're lucky, we can just replace "%k" with "%A" and all will be good... [snip]
Yeah, this would be a good thing to rewrite. [snip]
I'll have a look, but always in the past I've been scared off whenever I've tried to look deeper into Pd's Tk side.
[snip]
I was using en_US.UTF-8. It seems to me that there is an extra dash in your locale. On Mac OS X, 'locale -a' tells me: en_US.ISO8859-1 On debian/stable, it tells me en_US.iso88591. Does every system have different names for the latin1? Arg.... I tried a bunch of variations of the locale and LANG and LC_CTYPE on Mac OS X, but I couldn't get the barf-both.pd to look different.
curioser and curioser. I think on debian both "iso88591" and "ISO-8859-1" should work as charmaps. Similarly, both "utf8" and "UTF-8" ought to work. The locale(1) manpage says:
FILES
    /usr/share/i18n/SUPPORTED
        List of supported values (and their associated encoding) for the
        locale name. This representation is recommended over the
        --all-locales one, due being the system wide supported values.
... /usr/share/i18n/SUPPORTED (and /etc/locale.gen) includes for example "ISO-8859-1", but not "iso88591"; 'locale -a' on the other hand outputs "iso88591" but not "ISO-8859-1". I'm not sure whether the relevant standard (ISO/IEC 9945, aka POSIX?) says anything about the form that charmap names have to take. Looking at http://faqs.cs.uu.nl/na-dir/internationalization/iso-8859-1-charset.html, I find:
"Currently, each system vendor has his own set of locale names, which makes portability a bit problematic."
Bummer.
marmosets, Bryan
On Thu, 12 Feb 2009, Bryan Jurish wrote:
[snip]
Are we certain that Tk is actually translating at all, and not just using some 8-bit default like latin-1 when it finds non-UTF-8 input? I ask because that's what Perl does by default, a behavior which continues to give me headaches. In Perl, each string has its own internal "utf8" flag which tells you whether Perl is currently thinking of that string as a raw byte-string in some unknown encoding or as a "native" (utf8) character string... I assume Tcl/Tk does something similar, but don't know how to test for this property there.
Here's the doc that I read on this topic, but it probably doesn't have the level of detail that you require:
http://tcl.tk/man/tcl8.5/TclCmd/fconfigure.htm#M8
As for Tk hacking for Pd, a big part of the pd-devel effort is to make the Tk GUI code readable, and even extendable! Feel free to hit me with questions, either here, or I am in #dataflow quite a bit these days.
.hc
moin all,
On 2009-02-13 03:14:20, Hans-Christoph Steiner hans@eds.org appears to have written:
On Thu, 12 Feb 2009, Bryan Jurish wrote:
Are we certain that Tk is actually translating at all, and not just using some 8-bit default like latin-1 when it finds non-UTF-8 input? [snip]
Here's the doc that I read on this topic, but it probably doesn't have the level of detail that you require:
Had a look at that last night, but the 'fconfigure' command only applies to Tcl streams (analogous to the PerlIO layer, which I abhor and try my best to avoid, as it doesn't provide a sufficient level of control for most of my purposes). fconfigure might be OK for Pd-devel if we say we're dealing exclusively with utf-8... but then again, I don't know if Tcl streams ("channels") are used at all by the GUI; maybe on the socket to the backend, but that's probably it. IMHO it's safer to explicitly generate byte strings in a known encoding and just pass those around.
Also useful is the 'encoding' command family ('encoding convertfrom', 'encoding convertto', 'encoding names', 'encoding system'). Tried this with some explicit escapes as well as a tester widget from http://en.wikibooks.org/wiki/Tcl_Programming/Internationalization, and I get decent display (Japanese still doesn't display with any Tk fonts I tried, but I think that's just a font problem). Also tested the bind substitutions with a dummy "puts" script, and managed to get real utf-8 sent out over the stdout channel for keyboard input. Still not 100% sure how well it's working, since my keyboard only produces latin-1 symbols (maybe I'll hack my xmodmap for some real testing ;-)
Unfortunately, I still haven't found a way to get Tcl to tell me what encoding (if any) it thinks a given string is using, analogous to the Perl predicate "utf8::is_utf8($string)". Maybe Tcl doesn't track this information on a per-string level at all, but assumes [encoding system] for all strings? That seems pretty inflexible to me, but after another look at http://www.tcl.tk/man/tcl8.5/TclCmd/encoding.htm it does indeed seem to be the case. So I guess the only safe way to handle things is (as you suggest) to select an internal encoding (e.g. UTF-8) and enforce its use with {encoding system "utf-8"}, and possibly {fconfigure $ch -encoding "utf-8"} for whatever channels we want. The fconfigure manpage says the default channel encoding is [encoding system], but I suspect that it's really the value of [encoding system] at the time the channel is opened which has an effect, so we either have to make some accommodations for the standard channels (stdin, stdout, stderr), or just leave that up to Tcl (which probably defaults to the current locale's LC_CTYPE, but I haven't tested that yet)...
As for Tk hacking for Pd, a big part of the pd-devel effort is to make the Tk GUI code readable, and even extendable! Feel free to hit me with questions, either here, or I am in #dataflow quite a bit these days.
Groovy. I don't think I'll make the devel meeting today, but it's beginning to look as if I've got a bit of a bug in my bonnet about this ;-)
marmosets, Bryan
On Feb 13, 2009, at 4:38 AM, Bryan Jurish wrote:
[snip]
Hey,
It's good to see someone willing to dive in deep. It'll be great to have full UTF-8 support. Patko and I were looking into how to do it on the C side; I think what you mentioned, using locale.h and setlocale(), should be enough. Maybe patko will chime in with some details.
.hc
Programs should be written for people to read, and only incidentally for machines to execute.
morning Hans, morning list,
So I've tried to get the pd-devel 0.41.4 branch to use UTF-8 across the board. The Tk side was easy (as Hans predicted); really just a call to {fconfigure} in ::pd_connect::configure_socket. I also set the output encoding to UTF-8 on Tk's stdout and stderr for debugging purposes; it's probably wisest to leave those encodings at the default (user's current locale LC_CTYPE) for a release-like version.
The C side is much hairier. I think I've got things basically working (at least for message boxes and comments), but it has so far required changes in:
FILE: g_editor.c
- changed handling of <Key> events as passed to the C side to generate UTF-8 symbol-strings rather than single-byte stringlets.
- currently uses sprintf("%C") to get the UTF-8 string for the codepoint passed from Tk; a safer (and not too hard) way would be to pass the actual UTF-8 string from Tk and just copy that: this would avoid the m_pd.c hacks forcing LC_CTYPE=en_US.UTF-8 (see below). Another option would be actually just writing (or borrowing) the code to generate UTF-8 strings from Unicode codepoints (see the sketch after this list). It's pretty simple stuff; I've still got the guts of it somewhere (only written for latin-1 so far, but the principle is the same for all codepoints).
FILE: m_pd.c
- added calls to setlocale() to set LC_CTYPE to en_US.UTF-8; this is an ugly stinky nasty hack to get sprintf("%C") to output a UTF-8 encoded string from a unicode codepoint int, as called by canvas_key() in g_editor.c
FILE: g_rtext.c
- added an 'else if' clause in rtext_key() to handle unicode codepoints as values of the 'keynum' parameter. should also be safe for any 8-bit fixed-width encoding.
FILE: pd.tk
- set system encoding, also output encoding for stdout and stderr, to UTF-8
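The codepoint-to-UTF-8 generation really is short; a sketch of the idea (written for this mail, not the code in the attached diff; no validity checking):

/* encode unicode codepoint 'cp' as NUL-terminated UTF-8 in buf
   (buf must hold >= 5 bytes); returns the number of bytes written */
static int u8_encode(unsigned int cp, char *buf)
{
    if (cp < 0x80) {
        buf[0] = (char)cp;
        buf[1] = '\0'; return 1;
    } else if (cp < 0x800) {
        buf[0] = (char)(0xC0 | (cp >> 6));
        buf[1] = (char)(0x80 | (cp & 0x3F));
        buf[2] = '\0'; return 2;
    } else if (cp < 0x10000) {
        buf[0] = (char)(0xE0 | (cp >> 12));
        buf[1] = (char)(0x80 | ((cp >> 6) & 0x3F));
        buf[2] = (char)(0x80 | (cp & 0x3F));
        buf[3] = '\0'; return 3;
    }
    buf[0] = (char)(0xF0 | (cp >> 18));
    buf[1] = (char)(0x80 | ((cp >> 12) & 0x3F));
    buf[2] = (char)(0x80 | ((cp >> 6) & 0x3F));
    buf[3] = (char)(0x80 | (cp & 0x3F));
    buf[4] = '\0'; return 4;
}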
Attached is a screenshot and a test patch. UTF-8 input from the keyboard works with the test patch, and gets carried through properly to the .pd file (and back on load).
I'd like to get symbol atoms working too (haven't tried yet), but there are still some nasty buglets with comments and message boxes, mostly that editing any multibyte characters is very tricky: looks like the Tk point (cursor) and selection are expressed in characters, and Pd's C side is still thinking in bytes, though I'm totally ignorant of where or how that can be changed. A non-critical buglet with the same cause (probably) is that the C side is computing the required width for message boxes based on byte lengths, not character lengths, so message boxes containing multibyte characters look too wide. I could live with that, but the editing thing is a real pain...
I've attached a diff of my changes against branches/pd-devel/0.41.4/src (please excuse commented-out debugging code), in case anyone wants to try this stuff out. Since it's not working, I'm reluctant to check anything into the pd-devel/0.41.4 branch yet -- should I branch again for a work in progress, or do we just pass diffs around for now?
marmosets, Bryan
This is good news! While the C changes aren't dead simple, they are not bad. I think they could be slightly simplified. One thing that would make it much easier to read the diff is if you create it without whitespace changes, like this:
svn diff -x -w
As for the Tcl changes, I think we can include those now in Pd-devel, as long as they work OK with unchanged C code. Then once the new Tcl GUI is included, we can refactor the C side of things with things like this. One other thing: it seems that the ASCII chars are handled differently than the UTF-8 chars in g_rtext.c; I think you could use wcswidth(), mbstowcs() or other UTF-8 functions instead, as described in the UTF-8 FAQ:
http://www.cl.cam.ac.uk/~mgk25/unicode.html#mod
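Something like this (a quick sketch of the idea; mbstowcs() and wcswidth() are POSIX, and a UTF-8 locale is assumed):

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>  /* mbstowcs() */
#include <wchar.h>   /* wcswidth() */

int main(void)
{
    setlocale(LC_CTYPE, "en_US.UTF-8");
    const char *s = "\xc3\xa4\xc3\xb6\xc3\xbc"; /* "äöü" in UTF-8 */
    wchar_t w[16];
    size_t n = mbstowcs(w, s, 16);  /* 3 characters, not 6 bytes */
    printf("chars=%zu columns=%d\n", n, wcswidth(w, n)); /* width for box sizing */
    return 0;
}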
.hc
On Feb 17, 2009, at 5:53 PM, Bryan Jurish wrote:
[snip]
Entomology<test-utf8.pd><test-utf8.png>Index: m_pd.c
===================================================================
--- m_pd.c (revision 10779)
+++ m_pd.c (working copy)
@@ -295,6 +295,18 @@
 void glob_init(void);
 void garray_init(void);
 
+/*--BEGIN moo--*/
+#include <locale.h>
+void locale_init(void) {
+  setlocale(LC_ALL,"");
+  setlocale(LC_NUMERIC,"C");
+  setlocale(LC_CTYPE,"en_US.UTF-8");
+  /*
+  printf("moo: locale=%s\n", setlocale(LC_ALL,NULL));
+  printf("moo: LC_CTYPE=%s\n", setlocale(LC_CTYPE,NULL));
+  */
+}
+
 void pd_init(void)
 {
     mess_init();
@@ -302,5 +314,5 @@
     conf_init();
     glob_init();
     garray_init();
+    locale_init(); /*-- moo --*/
 }
Index: g_editor.c
===================================================================
--- g_editor.c (revision 10779)
+++ g_editor.c (working copy)
@@ -1468,9 +1468,16 @@
         gotkeysym = av[1].a_w.w_symbol;
     else if (av[1].a_type == A_FLOAT)
     {
+        /*-- moo: old
         char buf[3];
         sprintf(buf, "%c", (int)(av[1].a_w.w_float));
+        gotkeysym = gensym(buf);
+        --*/
+        char buf[8];
+        sprintf(buf, "%C", (int)(av[1].a_w.w_float));
+        /*printf("moo: charcode %%d=%d, %%c=%c, %%C=%C\n", (int)(av[1].a_w.w_float), (int)(av[1].a_w.w_float), (int)(av[1].a_w.w_float));*/
+        /*printf("moo: buf='%s'\n", buf);*/
         gotkeysym = gensym(buf);
     }
     else gotkeysym = gensym("?");
     fflag = (av[0].a_type == A_FLOAT ? av[0].a_w.w_float : 0);
Index: pd_connect.tcl
===================================================================
--- pd_connect.tcl (revision 10779)
+++ pd_connect.tcl (working copy)
@@ -11,6 +11,10 @@
 proc ::pd_connect::configure_socket {sock} {
     fconfigure $sock -blocking 0 -buffering line
+##--moo
+    fconfigure $sock -encoding utf-8
+#    puts "moo: fconfigure socket -encoding = [fconfigure $sock -encoding]"
+##--/moo
     fileevent $sock readable {::pd_connect::pd_readsocket ""}
 }
@@ -50,6 +54,11 @@
 proc ::pd_connect::pdsend {message} {
     variable pd_socket
     append message \;
+##--moo
+#    if {[lindex $message 1] != {motion}} {
+#        puts "moo: pdsend enc={[fconfigure $pd_socket -encoding]} msg={$message}"
+#    }
+##--/moo
     if {[catch {puts $pd_socket $message} errorname]} {
         puts stderr "pdsend errorname: >>$errorname<<"
         error "Not connected to 'pd' process"
@@ -64,6 +73,9 @@
         exit
     }
     append cmd_from_pd [read $pd_socket]
+##--moo
+#    puts "moo: pd_readsocket enc={[fconfigure $pd_socket -encoding]} cmd_from_pd={$cmd_from_pd}"
+##--/moo
     if {[catch {uplevel #0 $cmd_from_pd} errorname]} {
         global errorInfo
         puts stderr "errorname: >>$errorname<<"
Index: pd.tk
===================================================================
--- pd.tk (revision 10779)
+++ pd.tk (working copy)
@@ -152,6 +152,15 @@
 #    [string range \
 #        [registry get {HKEY_CURRENT_USER\Control Panel\International} sLanguage] 0 1] ]
 #}
+##--moo
+    encoding system utf-8
+    fconfigure stderr -encoding utf-8
+    fconfigure stdout -encoding utf-8
+    puts "moo: encoding system = [encoding system]"
+    puts "moo: encoding stderr = [fconfigure stderr -encoding]"
+    puts "moo: encoding stdout = [fconfigure stdout -encoding]"
+##--/moo
 }
 #
Index: g_rtext.c
===================================================================
--- g_rtext.c (revision 10779)
+++ g_rtext.c (working copy)
@@ -447,8 +447,13 @@
     /* at Guenter's suggestion, use 'n>31' to test whether a character
     might be printable in whatever 8-bit character set we find ourselves. */
+/*-- moo: ... but test with "<" rather than "!=" in order to accommodate
+    unicode codepoints for n (which we get since Tk is sending the "%A"
+    substitution for bind "<Key>"), effectively reducing the coverage of
+    this clause to 7 bits; case n>127 is covered by the next clause.
+--*/
-    if (n == '\n' || (n > 31 && n != 127))
+    if (n == '\n' || (n > 31 /*&& n != 127*/ && n < 127)) /*-- moo --*/
     {
         newsize = x->x_bufsize+1;
         x->x_buf = resizebytes(x->x_buf, x->x_bufsize, newsize);
@@ -457,7 +462,21 @@
         x->x_buf[x->x_selstart] = n;
         x->x_bufsize = newsize;
         x->x_selstart = x->x_selstart + 1;
     }
+    /*--moo: check for 8-bit or unicode codepoints, assuming "keysym"
+        is a correctly encoded (UTF-8) string--*/
+    else if (n > 127)
+    {
+        int clen = strlen(keysym->s_name);
+        newsize = x->x_bufsize + clen;
+        x->x_buf = resizebytes(x->x_buf, x->x_bufsize, newsize);
+        for (i = x->x_bufsize; i > x->x_selstart; i--)
+            x->x_buf[i] = x->x_buf[i-1];
+        x->x_bufsize = newsize;
+        /*-- insert keysym->s_name, rather than decoding the unicode
+             value here --*/
+        //strncpy(x->x_buf+x->x_selstart, keysym->s_name, clen);
+        strcpy(x->x_buf+x->x_selstart, keysym->s_name);
+        x->x_selstart = x->x_selstart + clen;
+    }
+    /*--/moo--*/
     x->x_selend = x->x_selstart;
     x->x_glist->gl_editor->e_textdirty = 1;
 }
'You people have such restrictive dress for women,' she said, hobbling away in three inch heels and panty hose to finish out another pink-collar temp pool day. - "Hijab Scene #2", by Mohja Kahf
moin Hans, moin list,
On 2009-02-19 18:43:49, Hans-Christoph Steiner hans@eds.org appears to have written:
This is good news! While the C changes aren't dead simple, they are not bad. I think they could be slightly simplified. One thing that would make it much easier to read the diff is if you create it without whitespace changes. So like this:
svn diff -x -w
oops, sorry... duly noted for future diffs ... I also set my emacs' tcl-indent-width to 8 ... sorry sorry sorry ...
As for the Tcl changes, I think we can include those now in Pd-devel, as long as they work OK with unchanged C code.
Done.
Then once the new Tcl GUI is included, we can refactor the C side with changes like this.
One other thing: it seems that ASCII chars are handled differently than the UTF-8 chars in g_rtext.c; I think you could instead use wcswidth(), mbstowcs(), or other UTF-8 functions as described in the UTF-8 FAQ
Certainly, but (A) we already have the UTF-8 byte string in keysym, and we need to append that whole string to the buffer anyways, and (B) using wcswidth() & co requires forcing the locale to have a UTF-8 LC_CTYPE. I know I did this in m_pd.c, but I think that was a HACK and that using locale functions here is the Wrong Way To Do It, because it's dangerous, unportable, and slow (warning: rant follows):
__dangerous__: setting the locale is global for all threads of a process; in forcing the locale, we could conceivably mess with desired behavior elsewhere (e.g. in externals).
__unportable__: we don't even know if all users' machines *have* a UTF-8 locale installed, and even if they do, we don't know what it's called. If we don't force the encoding, we're stuck with either "C" (e.g. ASCII; what we've got now in Pd-vanilla), or whatever the user is currently employing (after setlocale(LC_ALL,"")), which makes patches' appearance dependent on the user's encoding (e.g. what we've got now in Pd-vanilla), and doesn't even work in the case of variable-length encodings such as UTF-8.
__slow__: many locale-based conversion functions are known to be pretty darned slow. if we assume we're always dealing with (valid) UTF-8, we can speed things up considerably. going straight to wchar_t is another option, but would require many more changes on the C side, likely break the C API, and wouldn't solve the locale-dependency of patches' appearances, which I think is a really good argument for UTF-8.
(rant finished now, sorry)
That said, a faster implementation would probably result from mixing (something like) wcswidth() and strncpy(...,keysym). Functions like wcswidth() and mbstowcs() are pretty easy to cook up if we assume wchar_t is UCS-4 and the multibyte encoding is UTF-8. There are a number of libraries and code snippets floating about in the net making just such assumptions. In this context: are there any licensing restrictions on code included in pd-devel? So far, I've found one useful-looking (.c,.h) pair in the public domain, as well as some LGPL code from gnulib, which could be linked in statically. There's also code from the Unicode Consortium themselves, but it's pretty monstrous (read "pedantic") and limited to string-to-string conversions.
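For illustration, a hand-rolled mbstowcs() replacement under exactly those assumptions is very short (UTF-8 in, UCS-4 out; input assumed valid -- no checks for overlong or truncated sequences -- and the name is made up for this sketch):

#include <stdint.h>
#include <stddef.h>

/* Decode a NUL-terminated UTF-8 string into up to 'n' UCS-4
   codepoints; returns the number of codepoints written.  No
   locale machinery is involved at any point. */
static size_t u8_to_ucs4(uint32_t *dst, const char *src, size_t n)
{
    const unsigned char *s = (const unsigned char *)src;
    size_t i = 0;
    while (*s && i < n) {
        uint32_t cp;
        int extra;
        if (*s < 0x80)      { cp = *s;        extra = 0; }  /* 0xxxxxxx */
        else if (*s < 0xE0) { cp = *s & 0x1F; extra = 1; }  /* 110xxxxx */
        else if (*s < 0xF0) { cp = *s & 0x0F; extra = 2; }  /* 1110xxxx */
        else                { cp = *s & 0x07; extra = 3; }  /* 11110xxx */
        s++;
        while (extra-- && (*s & 0xC0) == 0x80)
            cp = (cp << 6) | (*s++ & 0x3F);
        dst[i++] = cp;
    }
    return i;
}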
marmosets, Bryan
On Feb 17, 2009, at 5:53 PM, Bryan Jurish wrote:
So I've tried to get the pd-devel 0.41.4 branch to use UTF-8 across the board. The TK side was easy (as Hans predicted);
[snip]
The C side is much hairier.
[snip]
On Feb 19, 2009, at 4:13 PM, Bryan Jurish wrote:
moin Hans, moin list,
On 2009-02-19 18:43:49, Hans-Christoph Steiner hans@eds.org appears to have written:
This is good news! While the C changes aren't dead simple, they are not bad. I think they could be slightly simplified. One thing that would make it much easier to read the diff is if you create it without whitespace changes. So like this:
svn diff -x -w
oops, sorry... duly noted for future diffs ... I also set my emacs' tcl-indent-width to 8 ... sorry sorry sorry ...
As for the Tcl changes, I think we can include those now in Pd-devel, as long as they work OK with unchanged C code.
Done.
Then once the new Tcl GUI is included, we can refactor the C side with changes like this.
One other thing: it seems that ASCII chars are handled differently than the UTF-8 chars in g_rtext.c; I think you could instead use wcswidth(), mbstowcs(), or other UTF-8 functions as described in the UTF-8 FAQ
Certainly, but (A) we already have the UTF-8 byte string in keysym, and we need to append that whole string to the buffer anyways, and (B) using wcswidth() & co requires forcing the locale to have a UTF-8 LC_CTYPE. I know I did this in m_pd.c, but I think that was a HACK and that using locale functions here is the Wrong Way To Do It, because it's dangerous, unportable, and slow (warning: rant follows):
__dangerous__: setting the locale is global for all threads of a process; in forcing the locale, we could conceivably mess with desired behavior elsewhere (e.g. in externals).
__unportable__: we don't even know if all users' machines *have* a UTF-8 locale installed, and even if they do, we don't know what it's called. If we don't force the encoding, we're stuck with either "C" (e.g. ASCII; what we've got now in Pd-vanilla), or whatever the user is currently employing (after setlocale(LC_ALL,"")), which makes patches' appearance dependent on the user's encoding (e.g. what we've got now in Pd-vanilla), and doesn't even work in the case of variable-length encodings such as UTF-8.
__slow__: many locale-based conversion functions are known to be pretty darned slow. if we assume we're always dealing with (valid) UTF-8, we can speed things up considerably. going straight to wchar_t is another option, but would require many more changes on the C side, likely break the C API, and wouldn't solve the locale-dependency of patches' appearances, which I think is a really good argument for UTF-8.
Isn't it pretty safe to assume these days that UTF-8 is supported? One thing I just found out is that Windows uses a 2-byte char natively (UCS-2?); I think Mac OS X uses UTF-8 natively. I think that most Linux tools should work with UTF-8 too, especially since it is backward-compatible with ASCII.
So you think we can have full UTF-8 support without using those
functions?
(rant finished now, sorry)
That said, a faster implementation would probably result from mixing (something like) wcswidth() and strncpy(...,keysym). Functions like wcswidth() and mbstowcs() are pretty easy to cook up if we assume wchar_t is UCS-4 and the multibyte encoding is UTF-8.
It seems to me that wcswidth() would be used for measuring the length of the text for display in boxes. I suppose strlen() could still be used for allocating and freeing memory, but I think that we should aim for clean code. If you think the current way in your diff is the best, that's fine by me.
There are a number of libraries and code snippets floating about in the net making just such assumptions. In this context: are there any licensing restrictions on code included in pd-devel? So far, I've found one useful-looking (.c,.h) pair in the public domain, as well as some LGPL code from gnulib, which could be linked in statically. There's also code from the Unicode Consortium themselves, but it's pretty monstrous (read "pedantic") and limited to string-to-string conversions.
Well, Pd-vanilla is BSD licensed, and Pd-extended is GPL'ed. For this
stage of Pd-devel, it would be good to keep it to something that can
be BSD licensed.
.hc
marmosets, Bryan
On Feb 17, 2009, at 5:53 PM, Bryan Jurish wrote:
So I've tried to get the pd-devel 0.41.4 branch to use UTF-8 across the board. The TK side was easy (as Hans predicted);
[snip]
The C side is much hairier.
[snip]
-- Bryan Jurish "There is *always* one more bug." jurish@ling.uni-potsdam.de -Lubarsky's Law of Cybernetic Entomology
Access to computers should be unlimited and total. - the hacker ethic
moin all,
On 2009-02-20 06:20:18, Hans-Christoph Steiner hans@eds.org appears to have written:
On Feb 19, 2009, at 4:13 PM, Bryan Jurish wrote:
moin Hans, moin list,
On 2009-02-19 18:43:49, Hans-Christoph Steiner hans@eds.org appears to have written:
One other thing: it seems that ASCII chars are handled differently than the UTF-8 chars in g_rtext.c; I think you could instead use wcswidth(), mbstowcs(), or other UTF-8 functions as described in the UTF-8 FAQ
Certainly, but (A) we already have the UTF-8 byte string in keysym, and we need to append that whole string to the buffer anyways, and (B) using wcswidth() & co requires forcing the locale to have a UTF-8 LC_CTYPE. I know I did this in m_pd.c, but I think that was a HACK and that using locale functions here is the Wrong Way To Do It, because it's dangerous, unportable, and slow (warning: rant follows):
__dangerous__: setting the locale is global for all threads of a process; in forcing the locale, we could conceivably mess with desired behavior elsewhere (e.g. in externals).
__unportable__: we don't even know if all users' machines *have* a UTF-8 locale installed, and even if they do, we don't know what it's called. If we don't force the encoding, we're stuck with either "C" (e.g. ASCII; what we've got now in Pd-vanilla), or whatever the user is currently employing (after setlocale(LC_ALL,"")), which makes patches' appearance dependent on the user's encoding (e.g. what we've got now in Pd-vanilla), and doesn't even work in the case of variable-length encodings such as UTF-8.
__slow__: many locale-based conversion functions are known to be pretty darned slow. if we assume we're always dealing with (valid) UTF-8, we can speed things up considerably. going straight to wchar_t is another option, but would require many more changes on the C side, likely break the C API, and wouldn't solve the locale-dependency of patches' appearances, which I think is a really good argument for UTF-8.
Isn't it pretty safe to assume these days that UTF-8 is supported?
Yes, but under what name? Also, I believe the relevant locale variable (LC_CTYPE) requires a language component prior to the charmap, and we cannot guarantee that e.g. "en_US" is installed everywhere. The only locale guaranteed to be installed everywhere is "C", and that determines language and charmap simultaneously.
Also, the "dangerous" property is impossible to get around, unless maybe we treat the locale like a stack and only force LC_CTYPE="(whatever).UTF-8" in code where we know we want/need UTF-8. I suspect this might slow things down enormously (although I haven't tested exactly what kind of overhead is involved). Adding threads to the picture means that we would have to add locking on LC_CTYPE (or similar) and that would only work if hypothetical locale-sensitive externals respected the same locks. All in all more trouble than it's worth, IM(ns)HO.
One thing I just found out is that Windows uses a 2-byte char natively (UCS-2?),
Probably.
I think Mac OS X uses UTF-8 natively.
... but not for wchar_t (which would be superfluous if sizeof(wchar_t)==1) !
I think that most Linux tools should work with UTF-8 too, especially since it can work as ASCII.
Yes, but "working with" UTF-8 is by no means synonymous with supporting a particular (and known) value of LC_CTYPE which happens to use UTF-8 as its charmap. Most text-processing tools "work with" UTF-8 because they can get away with just churning bytes -- this is not the case for Pd (which counts characters to move the selection, edit buffers, determine box widths, and maybe more)...
So you think we can have full UTF-8 support without using those functions?
In a word: yes.
Specifically, I think we can have full UTF-8 support without using those functions *as provided by the C99 locale API*. That amounts to rolling our own versions of the same and/or similar functionality. In particular, the (utf8.c,utf8.h) code by Jeff Bezanson (see http://www.cprogramming.com/tutorial/unicode.html) has some attractive utilities for wrapping typical string-processing code (in particular, u8_inc() and u8_dec() for adapting old byte-string processing code using i++ and i--, respectively), in addition to wrappers for the usual locale-style functionality:
wcswidth() --> (trivial) (I've written the code)
mbstowcs() --> u8_toucs() (I've actually got a version of this too)
Others of Bezanson's utilities (isutf8(), u8_offset(), u8_charnum(), u8_nextchar()) are also potentially useful for adapting the C side, and in some cases I'm not even sure how to wrap them with the C locale functions without converting the whole UTF-8 string to wchar_t, which I think we can agree we do not want to do. Presumably, Bezanson's code (public domain) is safe for integration with anything, so I'll use that for now, if no one objects.
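As an illustration of why u8_inc()/u8_dec() are so handy for porting byte-oriented cursor code, here is a minimal equivalent of the pair (simplified from the idea in Bezanson's utf8.c, not his exact code):

/* Advance/retreat a byte index by one logical character in a UTF-8
   buffer by skipping continuation bytes (10xxxxxx).  Old code that
   moved a cursor with i++ / i-- can be ported by substituting
   u8_inc(buf, &i) / u8_dec(buf, &i). */
static void u8_inc(const char *s, int *i)
{
    do { (*i)++; } while ((s[*i] & 0xC0) == 0x80);
}

static void u8_dec(const char *s, int *i)
{
    do { (*i)--; } while (*i > 0 && (s[*i] & 0xC0) == 0x80);
}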
That said, a faster implementation would probably result from mixing (something like) wcswidth() and strncpy(...,keysym->s_name). Functions like wcswidth() and mbstowcs() are pretty easy to cook up if we assume wchar_t is UCS-4 and the multibyte encoding is UTF-8.
It seems to me that the wcswidth() would be used for measuring the length of the text for display in boxes. I suppose strlen() could still be used for allocating and freeing memory, but I think that we should aim for clean code. If you think the current way in your diff is the best, that's fine by me.
Yep. I suspect we won't be able to avoid adding an "x_bufchars" field (or similar) to t_rtext (struct _rtext), to cache the length of the buffer in logical characters, rather than bytes. We can always compute the former in O(n) by iterating over the buffer, but I think it will be needed too often for that.
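Sketched out, that caching idea might look like the following (field and struct names hypothetical, not the real t_rtext; the counting loop is the same continuation-byte skip mentioned earlier in the thread):

/* Hypothetical rtext-like struct caching the logical length
   alongside the byte length, so selection and width code can
   work in characters without an O(n) rescan per keystroke. */
typedef struct _rtext_u8 {
    char *x_buf;       /* UTF-8 byte buffer */
    int   x_bufsize;   /* length in bytes */
    int   x_bufchars;  /* cached length in logical characters */
} t_rtext_u8;

/* Recompute the cache after any edit that touches x_buf. */
static void rtext_u8_recount(t_rtext_u8 *x)
{
    int i, n = 0;
    for (i = 0; i < x->x_bufsize; i++)
        if (((unsigned char)x->x_buf[i] & 0xC0) != 0x80)
            n++;
    x->x_bufchars = n;
}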
To clarify: (1) I think my use of locale-dependent functions (sprintf("%C",...)) to prepare a string for gensym() is sick bad ugly and wrong: only a temporary solution, which should be replaced by locale-independent, UTF-8-specific code analogous to wctomb(), such as Bezanson's u8_wc_toutf8(). (The "_wc_" infix implies wchar_t, but the code actually assumes that the "wc" parameter is UCS-4; we cannot guarantee this for system-dependent wctomb() implementations. I just used sprintf("%C") because I know glibc appears to use UCS-4 as wchar_t, and I wanted to get a clear picture of where the (other) problems lay. The Tk bind() manpage says that the "%A" substitution (which Pd is getting as 'keynum') is replaced by "the UNICODE character corresponding to the event", but afaik the C99 standard does not require that wchar_t contain Unicode values: it can be any libc-dependent fixed-width wide-character encoding... chalk that one up under "unportable".)
(2) I think using strncpy(buf,keysym->s_name) is safe and portable and unlikely to cause any difficulties, although it might be prettier to replace it with (another) call to wctomb(): that's just an aesthetic/efficiency issue, as far as I'm concerned.
marmosets, Bryan