• Ada 202x, 2022, and 2012 and the Unicode packages (UTF-nn encoding handling)

    From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Sun Aug 31 19:39:56 2025
    From Newsgroup: comp.lang.ada


    Dear Adaists,

    Björn Persson wrote during 2006:
    $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
    $ "Gnat's approach to character encodings is
    $  amazingly faulty."
    $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

    Björn Persson wrote during 2006:
    $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
    $ "> System.WCh_Cnv confound JIS character code with Unicode, it makes
    $  > troubles. Wide_Text_IO (and -gnatWs, -gnatWe) are useless in fact,
    $  > because there is no what uses JIS character code as it is, conversion
    $  > is needed after all.
    $
    $  I haven't used that package myself so I don't know how it works, but I
    $  won't be surprised if it's buggy. In my experience, Adacore's handling
    $  of character encodings is rather unimpressive."
    $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

    Deadly Head wrote during 2010:
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % "This is a pretty big deal to me. For a long time I've been a bit...
    %  frustrated? ... by the fact that the Ada standard specifically gives
    %  us Wide_ and Wide_Wide_Characters and their associated strings, but
    %  actually _using_ them seemed pretty much worthless. I mean, if you
    %  can't actually _talk_ with them to a modern system (UTF-8 or UTF-16
    %  encoding seems to be pretty much the way it goes), what's the point in
    %  using them?
    %
    %  So I'm pretty happy with using either the WCEM=8 or -gnatW8 methods of
    %  setting the encoding to get UTF-8 input and output. What I'm
    %  wondering now is can I get other UTF outputs to work?
    %
    %  I actually have the peculiar case of dealing with UTF-32 encoded
    %  files, which need to be translated to UTF-8 for editing, and back to
    %  UTF-32 for machine-use again. It seems that it would be pretty
    %  straight-forward to just pull the file in with a straight
    %  Wide_Wide_Text_IO.Open/Get_Line system, then output via
    %  Wide_Wide_Text_IO.Put on a file where Form => "WCEM=8". So far,
    %  though, I'm having trouble [. . .]"
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
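
    A minimal sketch of that "WCEM=8" output half, assuming GNAT (the Form
    string "WCEM=8" is a GNAT extension, not standard Ada; the procedure and
    file names here are placeholders):

    with Ada.Wide_Wide_Text_IO;

    procedure Put_UTF_8_Demo is
       use Ada.Wide_Wide_Text_IO;
       F : File_Type;
    begin
       --  GNAT's "WCEM=8" Form parameter selects UTF-8 as this file's
       --  wide-wide character encoding method, so the UTF-32 code points
       --  held in the Wide_Wide_String are emitted as UTF-8 octets
       --  (here "Bj", then C3 B6 for "ö", then "rn").
       Create (F, Out_File, "output.txt", Form => "WCEM=8");
       Put_Line (F, "Bj" & Wide_Wide_Character'Val (16#F6#) & "rn");
       Close (F);
    end Put_UTF_8_Demo;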

    Ludovic Brenta wrote during 2014:
    |-------------------------------------------------------------------------|
    | "As for the text that your program must process, that's really up to you.
    |  Ada 95 added the Wide_Character and Wide_String to help you use 16-bit
    |  characters (not exactly UTF-16, rather supporting only the first plane
    |  of the Unicode character set); Ada 2005 added Wide_Wide_Character for
    |  32-bit characters (i.e. UTF-32 encoding). The String Encoding package is
    |  there to help you transcode text between 8-bit Latin_1, UTF-8, proper
    |  UTF-16 and UTF-32. The new packages are there to help you but they
    |  don't do anything that wasn't possible in previous versions of Ada
    |  (i.e. you could reimplement them in Ada 95 if you so wished)."
    |-------------------------------------------------------------------------|

    Yannick Duchêne (Hibou57) wrote during 2010:
    ##############################################################################
    # "Extract from the thread “S-expression I/O in Ada”. Subtopic moved in a
    #  separate thread for clarity.
    #
    #  Le Wed, 18 Aug 2010 15:16:50 +0200, J-P. Rosen <rosen@adalog.fr> a écrit :
    #  > Slightly OT, but you (and others) might be interested to know that Ada
    #  > 2012 will include string encoding packages to the various UTF-X
    #  > encodings. These will be (are?) provided very soon by GNAT.
    #  >
    #  > See AI05-137-2
    #  > (http://www.ada-auth.org/cgi-bin/cvsweb.cgi/ai05s/ai05-0137-2.txt?rev=1.2)
    #
    #  Time for my stupid question of the day :)
    #
    #  I've noticed this introduction in the last amendment, because Unicode has
    #  always been an issue/matter for me (actually I use my own).
    #
    #  I could not avoid two questions: why no UTF-32? (this would not be an
    #  implementation nightmare) and why is a BOM handled for each string while
    #  a BOM is to be used at stream/file level? (see XML or HTML files for
    #  example). Or are these strings supposed to hold the whole content of a
    #  file/stream?
    #
    #  Quote:
    #  http://www.unicode.org/faq/utf_bom.html
    #  "A: A byte order mark (BOM) consists of the character code U+FEFF at the
    #  beginning of a data stream"
    #
    #  This is a FAQ at Unicode.org; but all references (Unicode PDF files, the
    #  XML reference, the HTML reference) say the same.
    #
    #  This matters, because the code point U+FEFF can stand for two different
    #  things: Zero Width No Break Space or encoding Byte Order Mark. The only
    #  way to distinguish both usages is where it appears.
    #
    #  If it appears as the first code point of a stream, this is a BOM
    #  (heuristics may be applied to automatically switch encoding with an
    #  analysis of the first byte of a stream; this is what I do); if it
    #  appears anywhere else in a stream, this is a character code point."
    ##############################################################################
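
    Hibou57's stream-level heuristic is easy to sketch with the standard
    BOM_8 constant; this is standard Ada, and the function name is a
    placeholder:

    with Ada.Strings.UTF_Encoding;   use Ada.Strings.UTF_Encoding;

    function Strip_UTF_8_BOM (S : UTF_8_String) return UTF_8_String is
    begin
       --  A UTF-8 BOM is meaningful only at the very start of a stream;
       --  strip it there and keep any later U+FEFF as a real code point.
       if S'Length >= BOM_8'Length
         and then S (S'First .. S'First + BOM_8'Length - 1) = BOM_8
       then
          return S (S'First + BOM_8'Length .. S'Last);
       end if;
       return S;
    end Strip_UTF_8_BOM;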

    Contrary to “Ada 2012 will include string encoding packages to the
    various UTF-X encodings”, no standard Ada package supports UTF-32!
    Even Ada 2022 lacks it!

    "Table 23-6. Unicode Encoding Scheme Signatures
    Encoding Scheme=09Signature
    UTF-8=09EF BB BF
    UTF-16 Big-endian=09FE FF
    UTF-16 Little-endian=09FF FE
    UTF-32 Big-endian=0900 00 FE FF
    UTF-32 Little-endian=09FF FE 00 00"
    says HTTPS://WWW.Unicode.org/versions/Unicode16.0.0/core-spec/chapter-23/#G19635

    iconv --list
    reports many kinds: "UCS-2, UCS-2BE, UCS-2LE, UCS-4, UCS-4BE, UCS-4LE,
    UCS2, UCS4," and "UNICODE, UNICODEBIG, UNICODELITTLE," and "UTF-7-IMAP,
    UTF-7, UTF-8, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE,
    UTF7, UTF8, UTF16, UTF16BE, UTF16LE, UTF32, UTF32BE, UTF32LE".

    "package Ada.Strings.UTF_Encoding
    with Pure is
    4/3
    -- Declarations common to the string encoding packages
    type Encoding_Scheme is (UTF_8, UTF_16BE, UTF_16LE);
    5/3
    subtype UTF_String is String;
    6/3
    subtype UTF_8_String is String;
    7/3
    subtype UTF_16_Wide_String is Wide_String;
    8/3
    Encoding_Error : exception;
    9/3
    BOM_8 : constant UTF_8_String :=3D
    Character'Val(16#EF#) &
    Character'Val(16#BB#) &
    Character'Val(16#BF#);
    10/3
    BOM_16BE : constant UTF_String :=3D
    Character'Val(16#FE#) &
    Character'Val(16#FF#);
    11/3
    BOM_16LE : constant UTF_String :=3D
    Character'Val(16#FF#) &
    Character'Val(16#FE#);
    12/3
    BOM_16 : constant UTF_16_Wide_String :=3D
    (1 =3D> Wide_Character'Val(16#FEFF#));"
    says
    HTTPS://AdaIC.org/resources/add_content/standards/22rm/html/RM-A-4-11.html without UTF-32.
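
    What the standard does give is round-tripping through the child
    packages; a minimal sketch using
    Ada.Strings.UTF_Encoding.Wide_Wide_Strings (standard Ada 2012; the
    procedure name is a placeholder):

    with Ada.Strings.UTF_Encoding.Wide_Wide_Strings;

    procedure Round_Trip is
       use Ada.Strings.UTF_Encoding;
       use Ada.Strings.UTF_Encoding.Wide_Wide_Strings;

       Text   : constant Wide_Wide_String :=
         "r" & Wide_Wide_Character'Val (16#E9#) & "sum"
             & Wide_Wide_Character'Val (16#E9#);          -- "résumé"
       Octets : constant UTF_8_String := Encode (Text);   -- 8 octets here
       Back   : constant Wide_Wide_String := Decode (Octets);
    begin
       pragma Assert (Back = Text);
       --  Decode raises Encoding_Error if handed ill-formed UTF-8.
    end Round_Trip;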

    John or Erich Rast wrote during 2014:
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ^ "there are plenty of converters between different Unicode versions
    ^  (UTF-8, UTF-16, UTF-32)."
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    Contrast with
    "package Ada.Strings.UTF_Encoding
       with Pure is

       -- Declarations common to the string encoding packages
       type Encoding_Scheme is (UTF_8, UTF_16BE, UTF_16LE);
    [. . .]
    end Ada.Strings.UTF_Encoding;

    package Ada.Strings.UTF_Encoding.Conversions
       with Pure is

       -- Conversions between various encoding schemes
       function Convert (Item          : UTF_String;
                         Input_Scheme  : Encoding_Scheme;
                         Output_Scheme : Encoding_Scheme;
                         Output_BOM    : Boolean := False)
          return UTF_String;"
    says
    HTTPS://AdaIC.org/resources/add_content/standards/22rm/html/RM-A-4-11.html
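
    A minimal use of the Convert function quoted above; note that the
    Encoding_Scheme enumeration stops at UTF_16LE, so a UTF-32 target
    cannot even be named (the procedure name is a placeholder):

    with Ada.Strings.UTF_Encoding.Conversions;

    procedure Convert_Demo is
       use Ada.Strings.UTF_Encoding;

       Input  : constant UTF_8_String := "A";   -- any well-formed UTF-8
       Output : constant UTF_String :=
         Conversions.Convert (Input,
                              Input_Scheme  => UTF_8,
                              Output_Scheme => UTF_16LE,
                              Output_BOM    => True);
    begin
       --  Output is now the four octets FF FE 41 00:
       --  the UTF-16LE BOM followed by "A".
       pragma Assert (Output'Length = 4);
    end Convert_Demo;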

    "A full featured character encoding converter will have to provide the=20 following 13 encoding variants of Unicode and UCS:

    UCS-2, UCS-2BE, UCS-2LE, UCS-4, UCS-4LE, UCS-4BE, UTF-8, UTF-16, UTF-16BE,=
    =20
    UTF-16LE, UTF-32, UTF-32BE, UTF-32LE"
    says
    HTTPS://WWW.CL.Cam.ac.UK/~mgk25/unicode.html

    (The same webpage says:
    "The term UTF-32 was introduced in Unicode to describe a 4-byte encoding
    of the extended “21-bit” Unicode. UTF-32 is the exact same thing as UCS-4,
    except that by definition UTF-32 is never used to represent characters
    above U-0010FFFF, while UCS-4 can cover all 2**31 code positions up to
    U-7FFFFFFF."

    Contrast with:
    "UCS-4 stands for “Universal Character Set coded in 4 octets.” It is now
    treated simply as a synonym for UTF-32, and is considered the canonical
    form for representation of characters in 10646."
    says
    HTTPS://WWW.Unicode.org/versions/Unicode16.0.0/core-spec/appendix-c
    So much for standardisation!)

    Randy L. Brukardt wrote during 2017:
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    > "In Ada, type Character = Latin-1 = first 255 code positions, 8-bit
    >  representation. Text_IO and type String are for Latin-1 strings.
    >
    >  type Wide_Character = BMP (Basic Multilingual Plane) = first 65535 code
    >  positions = UCS-2 = 16-bit representation.
    >
    >  type Wide_Wide_Character = all of Unicode = UCS-4 = 32-bit
    >  representation.
    >
    >  There is no native support in Ada for UTF-8 or UTF-16 strings. There is
    >  a conversion package (Ada.Strings.Encoding) [which is nasty because it
    >  breaks strong typing] which lets one use UTF-8 and UTF-16 strings with
    >  Text_IO and Wide_Text_IO. But you have to know if you are reading in
    >  UTF-8 or Latin-1 (there is no good way to tell between them in the
    >  general case).
    >
    >  Windows uses a BOM character at the start of UTF-8 files to
    >  differentiate (at least in programs like Notepad and the built-in edit
    >  control), but that is not recommended by Unicode. I think they would
    >  prefer a world where Latin-1 had disappeared completely, but that of
    >  course is not the real world."
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    Luke A. Guest wrote during 2021:
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ! "And this is where the Ada standard gets it wrong, in the encodings
    !  package re utf-8."
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    Vadim Godunko wrote during 2021:
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    < "Ada doesn't have good Unicode support. :( So, you need to find suitable
    <  set of "workarounds"."
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

    Randy L. Brukardt wrote during 2013:
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    > "Right. The proper thing to do (for Ada 2012) is to use
    >  Ada.Characters.Wide_Handling (or Wide_Wide_Handling) to do the case
    >  conversion, after converting the UTF-8 into a Wide_String (or
    >  Wide_Wide_String)."
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    However, Dmitry A. Kazakov wrote during 2021: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    !"Never ever use !
    !Wide or Wide_Wide, they are useless."!
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    Vadim Godunko wrote during 2022:
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    < "I think ((Wide_)Wide_)(Character|String) is obsolete for modern
    <  systems and programming languages; more cleaner types and API is a
    <  requirement now. The only case when old character/string types is
    <  really makes value is low resources embedded systems; in other cases
    <  their use generates a lot of hidden issues, which is very hard to
    <  detect."
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

    Maxim Reznik wrote during 2021:
    \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
    \ "You can use Wide_Wide_String and Unbounded_Wide_Wide_String type to
    \  process Unicode strings. But this is not very handy. I use the
    \  Matreshka library for Unicode strings."
    \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\

    I do not find Matreshka to be handy. Cf. an ALIRE failure shown below.

    Dmitry A. Kazakov wrote during 2021:
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ! "On 2021-06-21 00:50, Jeffrey R. Carter wrote:
    !  > On 6/20/21 8:47 PM, Dmitry A. Kazakov wrote:
    !  >> On 2021-06-20 20:21, Jeffrey R. Carter wrote:
    !  >>
    !  >> That ship has sailed. I would say that any use of String as Latin-1 is
    !  >> a mistake now because most of the libraries would use UTF-8 encoding
    !  >> instead of Latin-1.
    !  >
    !  > I have never subscribed to the illogic that if enough people make the
    !  > same mistake, it ceases to be a mistake.
    !
    !  The mistake is on the Ada type system design side. People repurposed
    !  Latin-1 strings for UTF-8 strings because there was no other feasible
    !  way."
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    Cf.
    "Why do people do this?!
    Honestly, I don't really know. This is one of those mysteries that might
    never get solved. Oh, there is one lead: it seems to be generated mostly (exclusively?) by Windows systems. Really, who would have thought?"
    says
    HTTPS://WWW.ueber.net/who/mjl/projects/bomstrip

    Cf.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~ "For a long time, it was believed that Unicode could get by with 16
    ~  bits to represent the characters for all languages of the
    ~  world. Originally, “Unicode” was defined as “16 bit
    ~  characters”. History showed this was a bad idea, but it was believed
    ~  to be true for long enough that many systems are stuck with 16 bit
    ~  characters; both Java and Windows, for example, deal in 16 bit
    ~  characters."
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    says
    HTTPS://EntropicThoughts.com/unicode-strings-in-ada-2012
    by Christoffer Stjernlöf.

    Cf.
    "One hundred repetitions three nights a week for four years, thought
    Bernard Marx, who was a specialist on hypnopædia. Sixty-two thousand
    four hundred repetitions make one truth. Idiots!"
    says
    @book{Sixty-two-thousand-four-hundred-repetitions-make-one-truth-Idiots,
      author={Aldous Huxley},
      title={{Brave New World}},
      publisher={Chatto \& Windus with T. and A. Constable with the
                 University Press Edinburgh},
      address={London and Edinburgh},
      year={1932}
    }

    Cf. publications by psychologists. E.g. Kimberlee Weaver; Stephen M.
    Garcia; Norbert Schwarz; and Dale T. Miller, "Inferring the Popularity
    of an Opinion From Its Familiarity: A Repetitive Voice Can Sound Like a
    Chorus", "Journal of Personality and Social Psychology", 92(5):821-833,
    2007.

    Cf. "majority opinion turns out to be wrong with a fairly high frequency=20
    in science"
    says
    James Woodward and David Goodstein, =E2=80=9CConduct, Misconduct and the=20 Structure of Science,=E2=80=9D September=E2=80=93October, "American Scienti= st", 1996,=20
    479=E2=80=93490.

    Shark8 wrote during 2013:
    ////////////////////////////////////////////////////////////////////////
    / "UTF-16 is perhaps the worst possible encoding you can have for
    /  Unicode. With UTF-8 you don't need to worry about byte-order
    /  (everything's sequential) and with UTF-32 you don't need to decode the
    /  information (each element *IS* a code-point)... but UTF-16 offers
    /  neither of these."
    ////////////////////////////////////////////////////////////////////////

    Randy Brukardt wrote during 2023:
    ******************************************************************************
    * "But my opinion is that Ada got strings completely wrong, and the best thing
    *  to do with them is to completely nuke them and start over. [. . .]"
    ******************************************************************************

    I have been given a dataset. These files are supposedly homogeneous
    UTF-8 XML files. Actually
    for data_file in *.xml ; do file $data_file | sed -e 's/^.*: //' ; done | sort | uniq
    reports:
    "ASCII text, with CRLF line terminators
    Unicode text, UTF-8 text, with CRLF line terminators
    XML 1.0 document, Unicode text, UTF-8 (with BOM) text, with CRLF line
    terminators".
    (If file does not call an example "XML 1.0 document, Unicode [. . .]"
    then such an example lacks a line with
    <?xml version='1.0' encoding='utf-8'?>
    but does consist of XML parts.)

    A valid letter in this language expressed in UTF-8 octets can have:
    1 octet (e.g. 16#41#);
    2 octets (e.g. 16#C3_BA#);
    or
    3 octets (e.g. 16#E1_BA_9B#).
    I do not believe that I am overlooking a 4-octet example . . . but what
    if?
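
    The octet count per letter follows directly from the fixed UTF-8
    ranges; a sketch in standard Ada (the function name is mine):

    function UTF_8_Octets (Code : Wide_Wide_Character) return Positive is
       Point : constant Natural := Wide_Wide_Character'Pos (Code);
    begin
       return (if    Point <= 16#7F#   then 1   -- e.g. 16#41#
               elsif Point <= 16#7FF#  then 2   -- e.g. U+00FA is 16#C3_BA#
               elsif Point <= 16#FFFF# then 3   -- e.g. U+1E9B is 16#E1_BA_9B#
               else                         4); -- anything above the BMP
    end UTF_8_Octets;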

    This is not a constrained computer. It will not run out of memory. It is
    not slow. Deadly Head needs UTF-32. I do not need UTF-32 or UCS-4 for
    this application, but elegance might promote a uniform quantity of
    octets for all letters; and a polyglot user might try to insert some
    weird punctuation or whatever which I do not know, or might copy and
    paste some multilingual table from Unicode.org. I do not want
    "a lot of hidden issues, which is very hard to
    detect"
    as Vadim Godunko said. I do not want a crash, especially with some
    exception which is less informative than a Java exception. Granted, all
    these already existing files are in UTF-8. But what if some future
    application will need general UCS-4?

    Sincères salutations.



    Nicolas Paul Colin de Glocester

    cd Matreshka_league__ALIRE_failed_to_build_this

    /home/gloucester/coldstorage/software_installed/ALIRE/Matreshka_league__ALIRE_failed_to_build_this
    $ alr get matreshka_league
    ⓘ Running post_fetch actions for matreshka_league=21.0.0...
    [. . .]
    configure: creating source/league/matreshka-config.ads

    matreshka_league=21.0.0 successfully retrieved.
    Dependencies were solved as follows:

       + make 4.3.0 (new)


    /home/gloucester/coldstorage/software_installed/ALIRE/Matreshka_league__ALIRE_failed_to_build_this
    $ cd matreshka_league_21.0.0_0c8f4d47

    /home/gloucester/coldstorage/software_installed/ALIRE/Matreshka_league__ALIRE_failed_to_build_this/matreshka_league_21.0.0_0c8f4d47
    $ alr run
    ⓘ Building matreshka_league/gnat/matreshka_league.gpr...
    Compile
       [Ada] xml-sax-simple_readers-scanner.adb
    [. . .]
    league-iris.adb:1476:36: warning: Is_Valid unimplemented [enabled by default]
    [. . .]
       [Ada] matreshka-cldr-collation_rules_parser.adb
    matreshka-internals-utf16.ads:100:04: warning: pragma Pack for "Utf16_String" ignored [-gnatwr]
    [. . .]
       [Ada] league-calendars-iso_8601.adb
    matreshka-cldr-collation_rules_parser.adb:186:30: warning: assignment to pass-by-copy formal may have no effect [enabled by default]
    matreshka-cldr-collation_rules_parser.adb:186:30: warning: "raise" statement may result in abnormal return (RM 6.4.1(17)) [enabled by default]
    [. . .]
       [Ada] matreshka-atomics-generic_test_and_set__gcc__64.adb
    matreshka-atomics-counters__gcc.adb:50:14: warning: intrinsic binding type mismatch on parameter 2 [enabled by default]
    matreshka-atomics-counters__gcc.adb:50:14: warning: profile of "Sync_Add_And_Fetch_32" doesn't match the builtin it binds [enabled by default]
    matreshka-atomics-counters__gcc.adb:54:13: warning: intrinsic binding type mismatch on result [enabled by default]
    matreshka-atomics-counters__gcc.adb:54:13: warning: intrinsic binding type mismatch on parameter 2 [enabled by default]
    matreshka-atomics-counters__gcc.adb:54:13: warning: profile of "Sync_Sub_And_Fetch_32" doesn't match the builtin it binds [enabled by default]
    matreshka-atomics-counters__gcc.adb:57:14: warning: intrinsic binding type mismatch on parameter 2 [enabled by default]
    matreshka-atomics-counters__gcc.adb:57:14: warning: profile of "Sync_Sub_And_Fetch_32" doesn't match the builtin it binds [enabled by default]
    [. . .]
    league-locales.ads:46:12: warning: unit "League.Strings" is not referenced [-gnatwu]

    compilation of matreshka-internals-unicode-ucd-properties.adb failed
    compilation of league-strings-cursors-grapheme_clusters.adb failed
    compilation of matreshka-internals-code_point_sets.adb failed
    compilation of league-character_sets.adb failed
    compilation of matreshka-internals-unicode-ucd-norms.ads failed
    compilation of matreshka-internals-unicode-ucd-core.ads failed

    gprbuild: *** compilation phase failed
    error: Command ["gprbuild", "-s", "-j0", "-p", "-P", "/coldstorage/gloucester/software_installed/ALIRE/Matreshka_league__ALIRE_failed_to_build_this/matreshka_league_21.0.0_0c8f4d47/gnat/matreshka_league.gpr"] exited with code 4
    error: Build failed

    /home/gloucester/coldstorage/software_installed/ALIRE/Matreshka_league__ALIRE_failed_to_build_this/matreshka_league_21.0.0_0c8f4d47
    $ date
    Tue Aug 26 12:03:12 CEST 2025
  • From Kevin Chadwick@kc-usenet@chadwicks.me.uk to comp.lang.ada on Sun Aug 31 21:23:27 2025
    From Newsgroup: comp.lang.ada

    Most languages only support working in one encoding: Go UTF-8 and Dart
    UTF-16. Perhaps Ada was too ambitious, but Wide_Wide worked for me when
    I needed it. Finalising support is a potential aim of the next standard.
  • From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Sun Aug 31 23:27:49 2025
    From Newsgroup: comp.lang.ada

    Dear Mister Chadwick,

    Thanks for this contribution.
  • From Alex // nytpu@nytpu@example.invalid to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 10:01:34 2025
    From Newsgroup: comp.lang.ada

    I've written about this at length before because it's a major pain
    point; but I can't find any of my old writing on it so I've rewritten it
    here lol. I go into extremely verbose detail on all the recommendations
    and the issues at play below, but to summarize:
    - You really should use Unicode both in storage/interchange and internally
    - Use Wide_Wide_<> internally everywhere in your program
    - Use Ada's Streams facility to read/write external text as binary, transcoding it manually using UTF_Encoding (or custom implemented
    routines if you need non-Unicode encodings)
    - You can use Text_Streams to get a binary stream even from stdin/stdout/stderr, although with some annoying caveats regarding
    Text_IO adding spurious end-of-file newlines when writing
    - Be careful with string functions that inspect the contents of strings
    even for Wide_Wide_Strings, because Unicode can have tricky issues
    (basically, just only ever look for/split on/etc. hardcoded valid sequences/characters due to issues with multi-codepoint graphemes)

    ***

    Right off the bat, in modern code either on its own or interfacing with
    other modern code, you really should use Unicode, and really really
    should use UTF-8. If you use Latin-1 or Windows-1252 or some weird
    regional encoding everyone will hate you, and if you restrict inputs to
    7-bit ASCII everyone will hate you too lol. And people will get annoyed
    if you use UTF-16 or UTF-32 instead of UTF-8 as the interchange/storage
    format in a new program.

    But first, looking at how you deal with text internally with your
    program, you *really* have two options (technically there's more but the others are not good): storing UTF-8 with Strings (you have to use a
    String even for individual characters), or storing UTF-32 in Wide_Wide_String/Wide_Wide_Characters.

    When storing UTF-8 in a String (for good practice, use the
    Ada.Strings.UTF_Encoding.UTF_8_String subtype just to indicate that it
    is UTF-8 and not Latin-1), the main thing is that you can't use, or
    have to be very cautious with (and really should just avoid if
    possible), any of the built-in String/Unbounded_String utilities that
    inspect or manipulate the contents of text.

    With Wide_Wide_<>, you're technically wasting 11 out of every 32 bits of memory for alignment reasons---or 24 out of 32 bits with text that's
    mostly ASCII with only the occasional higher character---but eh, not
    that big a deal *on modern systems capable of running a modern hosted environment*. Note that there is zero chance in hell that UTF-32 will
    ever be adopted as an interchange or storage encoding (except in
    isolated singular corporate apps *maybe*), so UTF-32 being used should
    purely be an internal implementation detail: incoming text in whatever encoding gets converted to it and outgoing text will always get
    converted from it. And you should only convert at the I/O "boundary",
    don't have half of your program dealing with native string encoding and
    half dealing with Wide_Wide_<> (with the only exception being that if
    you don't need to look at the string's contents and are just passing it through, then you can and should avoid transcoding at all).

    I personally use Wide_Wide_<> for everything just because it's more
    convenient to have more useful built-in string functions, and it makes
    dealing with input/output encoding much easier later (detailed below).

    I would never use Wide_<> unless you're exclusively targeting Windows or something, because UTF-16 is just inconvenient and has none of the
    benefits of UTF-8 nor any of the benefits of UTF-32 and most of the
    downsides of both. Plus since Ada standardized wide characters so early there's additional fuckups relating to UCS-2---UTF-16 incompatibilities
    like Windows has[1] and you absolutely do not want to deal with that.

    I'm unfortunate enough to know most of the nuances of Unicode but I
    won't subject you to it, but a lot of the statements in your collection
    are a bit oversimplified (UCS-4 has a number of additional differences
    from UTF-32 regarding "valid encodings", namely that all valid Unicode codepoints (0x0--0x10FFFF inclusive) are allowed in UCS-4 but only
    Unicode scalar values (0x0--0xD7FF and 0xE000--0x10FFFF inclusive) are
    valid in UTF-32), and are missing some additional information: a key
    detail is that even with UTF-32 where each Unicode scalar value is held
    in one array element rather than being variable-width like UTF-8/UTF-16,
    you still can't treat them as arbitrary arrays like 7-bit ASCII because
    a grapheme can be made up of multiple Unicode scalar values. Even with
    ASCII characters there's the possibility of combining diacritics or such
    that would break if you split the string between them.
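
    That UCS-4-versus-UTF-32 scalar-value rule is easy to state as a
    predicate; a sketch in standard Ada (the function name is a
    placeholder):

    function Is_Unicode_Scalar_Value
      (C : Wide_Wide_Character) return Boolean
    is
       P : constant Natural := Wide_Wide_Character'Pos (C);
    begin
       --  Valid UTF-32: 0 .. 16#D7FF# and 16#E000# .. 16#10FFFF#;
       --  the gap is the UTF-16 surrogate range.
       return P <= 16#10_FFFF# and then P not in 16#D800# .. 16#DFFF#;
    end Is_Unicode_Scalar_Value;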

    Also, I just stumbled across Ada.Strings.Text_Buffers which seems to be
    new to Ada 2022, makes "string builder" stuff much more convenient
    because you can write text using any of Ada's string types and then get
    a string in whatever encoding you want (and with the correct
    system-specific line endings which is a whole 'nother issue with Ada
    strings) out of it instead of needing to fiddle with all that manually,
    maybe that'll be useful if you can use Ada 2022.
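
    A sketch of that "string builder" usage, following the Ada 2022 RM
    A.4.12 spec (the procedure name is a placeholder, and this needs an
    Ada 2022 toolchain):

    with Ada.Strings.UTF_Encoding;
    with Ada.Strings.Text_Buffers.Unbounded;

    procedure Buffer_Demo is
       Buffer : Ada.Strings.Text_Buffers.Unbounded.Buffer_Type;
    begin
       Buffer.Put ("mixed ");            -- takes a plain String
       Buffer.Wide_Wide_Put ("input");   -- takes a Wide_Wide_String
       Buffer.New_Line;
       declare
          --  Extract in the encoding you want; Get, Wide_Get and
          --  Wide_Wide_Get are the other choices.
          As_UTF_8 : constant Ada.Strings.UTF_Encoding.UTF_8_String :=
            Buffer.Get_UTF_8;
       begin
          null;  -- hand As_UTF_8 to whatever output layer you use
       end;
    end Buffer_Demo;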

    ***

    Okay, so I've discussed the internal representation and issues with
    that, but now we get into input/output transcoding... this is just a nightmare in Ada, one almost decent solution but even it has caveats and
    bugs, uggh.

    In general, just the Text_IO packages will always transcode the input
    file to whatever format you're getting and transcode your given output
    to some other format, and it's annoying to configure what encoding is
    used at compile time[2] and impossible to change at runtime which makes
    the Text_IO packages just useless for non-Latin-1/ASCII IMO. Even if
    you get GNAT whipped into shape for your codebase's needs you're
    abandoning all portability should a hypothetical second Ada
    implementation that you might want to use arise.

    The only way to get full control of the input and output encodings is to
    use one of Ada's ways of performing binary I/O and then manually convert strings to binary yourself. I personally prefer using Streams over Sequential_IO/Direct_IO, using UTF_Encoding (or the new Text_Buffers) to convert to/from the specific format I want before reading or writing
    from the stream.
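
    A sketch of that Streams-plus-UTF_Encoding recipe in standard Ada,
    reading a whole file of UTF-8 octets and decoding only at the boundary
    (the function name is a placeholder; one stream element per Character
    is assumed, as on typical byte-oriented platforms):

    with Ada.Streams.Stream_IO;
    with Ada.Strings.UTF_Encoding.Wide_Wide_Strings;

    function Read_UTF_8_File (Name : String) return Wide_Wide_String is
       use Ada.Streams.Stream_IO;
       File : File_Type;
    begin
       Open (File, In_File, Name);
       declare
          Raw : String (1 .. Natural (Size (File)));
       begin
          String'Read (Stream (File), Raw);  -- raw octets, no transcoding
          Close (File);
          --  Decode raises Encoding_Error on malformed input, which is
          --  what you want at the I/O boundary.
          return Ada.Strings.UTF_Encoding.Wide_Wide_Strings.Decode (Raw);
       end;
    end Read_UTF_8_File;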

    There is one singular bug though: if you use Ada.Text_IO.Text_Streams to
    get a byte stream from an Text_IO output file (the only way to
    read/write binary data from stdin, stdout, and stderr at all), then
    after writing and the file is closed, an extra newline will always be
    added. The Ada standard requires that Text_IO always output a newline
    if the output didn't end with one, and the stream from text_streams
    completely bypasses all of the Text_IO package's bookkeeping, so from
    its perspective nothing was written to the file (let alone a newline) so
    it has to add a newline.[3] So you either just have to deal with output
    files having an empty trailing line or make sure to strip off the final newline from the text you're outputting.
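
    A sketch of the Text_Streams escape hatch just described: fetch the
    byte stream behind Standard_Output and write pre-encoded UTF-8 octets
    to it, bypassing Text_IO's transcoding (but not, as noted, its
    final-newline bookkeeping; the procedure name is a placeholder):

    with Ada.Text_IO;
    with Ada.Text_IO.Text_Streams;
    with Ada.Strings.UTF_Encoding;

    procedure Raw_Stdout_Demo is
       use Ada.Strings.UTF_Encoding;
       Stdout : constant Ada.Text_IO.Text_Streams.Stream_Access :=
         Ada.Text_IO.Text_Streams.Stream (Ada.Text_IO.Standard_Output);
       Octets : constant UTF_8_String :=
         Character'Val (16#C3#) & Character'Val (16#BA#);  -- "ú" in UTF-8
    begin
       String'Write (Stdout, Octets);  -- octets pass through untouched
    end Raw_Stdout_Demo;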

    ***

    Sorry for it being so long, but that's the horror of working with text
    XD, particularly older things like Ada that didn't have the benefit of
    modern hindsight for how text encoding would end up and had to bolt on
    solutions afterwards that just don't work right. Although at least
    Ada is better than the unfixable, un-work-aroundable C/C++ nightmare[4]
    or Windows or really any software created prior to Unicode 1.1 (1993).

    ~nytpu

    [1]: https://wtf-8.codeberg.page/#ill-formed-utf-16
    [2]: The problem is GNAT completely changes how the Text_IO packages
    behave with regards to text encoding through opaque methods. The
    encodings used by Text_IO are mostly (but not entirely) based off of the `-gnatW` flag, which is configuring the encoding of THE PROGRAM'S SOURCE
    CODE. Absolutely batshit they abused the source file encoding flag as
    the only way for the programmer to configure what encoding the program
    reads and writes, which is completely orthogonal to the source code.
    [3]: When I was more active on IRC, either Lucretia or Shark8 (who you
    both quoted) would whine about this every chance possible lol. It is
    extremely annoying even when you use Text_IO directly rather than
    through streams, because it's messing with my damn file even when I
    didn't ask it to.
    [4]: https://github.com/mpv-player/mpv/commit/1e70e82baa91
    --
    Alex // nytpu
    https://nytpu.com/ - gemini://nytpu.com/ - gopher://nytpu.com/
  • From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 19:40:07 2025
    From Newsgroup: comp.lang.ada


    Alex // nytpu wrote during this decade, specifically today:
    |---------------------------------------------------------------------------|
    | "I can't find any of my old writing on it so I've rewritten it
    |  here lol."
    |---------------------------------------------------------------------------|

    Dear Alex:

    A teammate had once solved a problem but he had forgotten how he
    solved it. So he queried a search engine, and it showed him a
    webpage with a perfect solution --- a webpage written by him!

    I recommend searching for that old writing about Unicode: perhaps it
    has more details than this comp.lang.ada thread, or perhaps a
    perspective has been changed in an interesting way. Even if there is
    no difference, perhaps it is in a directory with other missing files
    which need to be backed up!

    |---------------------------------------------------------------------------|
    | "If you use Latin-1 or Windows-1252 or some weird
    |  regional encoding everyone will hate you, and if you restrict inputs to
    |  7-bit ASCII everyone will hate you too lol. And people will get annoyed
    |  if you use UTF-16 or UTF-32 instead of UTF-8 as the interchange/storage
    |  format in a new program."
    |---------------------------------------------------------------------------|

    I quote Usenet articles in a way which does not endear me to
    persons. Not everyone reacts in the same way. OC Systems asked me how
    do I draw those boxes.

    I advocate Ada which also does not endear me to persons.

    |---------------------------------------------------------------------------|
    | "[. . .]
    |
    |  I personally use Wide_Wide_<> for everything just because it's more
    |  convenient to have more useful built-in string functions, and it makes
    |  dealing with input/output encoding much easier later (detailed below).
    |
    |  [. . .]
    |
    |  I'm unfortunate enough to know most of the nuances of Unicode but I
    |  won't subject you to it, but a lot of the statements in your collection
    |  are a bit oversimplified (UCS-4 has a number of additional differences
    |  from UTF-32 regarding "valid encodings", [. . .]
    |  [. . .]"
    |---------------------------------------------------------------------------|

    Thanks for this feedback, and more will be as welcome as can be. I
    quoted examples of what I found in this newsgroup. This newsgroup used
    not to have many statements with explicit references to "UTF-32" or
    "UTF32" or "UCS-4" which differ overwhelmingly from what I quoted
    during the previous week.

    |---------------------------------------------------------------------------|
    | "Also, I just stumbled across Ada.Strings.Text_Buffers which seems to be
    |  new to Ada 2022, makes "string builder" stuff much more convenient
    |  because you can write text using any of Ada's string types and then get
    |  a string in whatever encoding you want [. . .]
    |  [. . .]"
    |---------------------------------------------------------------------------|

    Package Ada.Strings.Text_Buffers does not support UCS-4.

    |---------------------------------------------------------------------------|
    | "Note that there is zero chance in hell that UTF-32 will ever be adopted
    |  as an interchange or storage encoding (except in isolated singular
    |  corporate apps *maybe*), so UTF-32 being used should purely be an
    |  internal implementation detail: incoming text in whatever encoding gets
    |  converted to it and outgoing text will always get converted from it."
    |---------------------------------------------------------------------------|

    One can believe that one knows, but what one too optimistically knows
    can be false. Character sets and encodings used to be subjects of
    unfulfilled expectations.

    I can say that for now, UTF-8 is enough for a particular application.

    Deadly Head did not have the same luck.

    |---------------------------------------------------------------------------|
    | "The encodings used by
    |  Text_IO are mostly (but not entirely) based off of the `-gnatW` flag,
    |  which is configuring the encoding of THE PROGRAM'S SOURCE CODE."
    |---------------------------------------------------------------------------|

    GNAT has many switches. It could easily gain more switches.

    Sincères salutations.



    Nicolas Paul Colin de Glocester
  • From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 19:42:50 2025
    From Newsgroup: comp.lang.ada

    The first endnote (i.e.
    [1]: https://wtf-8.codeberg.page/#ill-formed-utf-16
    ) in news:10974d1$jn0e$1@dont-email.me
    is not reproduced in
    HTTPS://nytpu.com/gemlog/2025-09-02
    I do not know if that is intentional. Thanks for saying "It has an amusing large collection of quotes" on that webpage.
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 20:08:36 2025
    From Newsgroup: comp.lang.ada

    On 2025-09-02 18:01, Alex // nytpu wrote:
    > I've written about this at length before because it's a major pain
    > point; but I can't find any of my old writing on it so I've rewritten
    > it here lol.
    The matter is quite straightforward:

    1. Never ever use Wide and Wide_Wide. There is a marginal case of
    Windows API where you need Wide_String for UTF-16 encoding. Otherwise,
    use cases are absent. No text processing algorithms require code point
    access.

    2. Use Character as octet. String as UTF-8 encoded.

    That is all.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 11:49:13 2025
    From Newsgroup: comp.lang.ada

    Nicolas Paul Colin de Glocester <Spamassassin@irrt.De> writes:
    > Alex // nytpu wrote during this decade, specifically today:
    [...]
    > |---------------------------------------------------------------------------|
    > |"If you use Latin-1 or Windows-1252 or some weird |
    [snip]
    > |format in a new program." |
    > |---------------------------------------------------------------------------|
    >
    > I quote Usenet articles in a way which does not endear me to
    > persons. Not everyone reacts in the same way. OC Systems asked me how
    > do I draw those boxes.

    Why do you do that? It seems like a lot of effort to produce an
    annoying result.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
  • From Alex // nytpu@nytpu@example.invalid to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 13:13:12 2025
    From Newsgroup: comp.lang.ada

    On 9/2/25 12:08 PM, Dmitry A. Kazakov wrote:
    > The matter is quite straightforward:
    Objectively false, "text" is never actually straightforward despite what
    it seems like on a surface level :P
    > 1. Never ever use Wide and Wide_Wide. There is a marginal case of
    > Windows API where you need Wide_String for UTF-16 encoding. Otherwise,
    > use cases are absent. No text processing algorithms require code point
    > access.
    Somewhat inclined to agree with Wide_<> but I don't see strong
    justification to *never* use Wide_Wide_<>, there's pretty substantial tradeoffs to both using UTF-32 and UTF-8 (in any programming language
    that supports both, but particularly with Ada's string situation) so unfortunately it ultimately falls on the programmer to understand and
    choose.
    > 2. Use Character as octet. String as UTF-8 encoded.
    Perfectly valid, explicitly mentioned as an option in my post. Maybe
    actually would be better for most applications because they wouldn't
    need to transcode it, I should've noted that more clearly in my original response. The only two issues: make sure to avoid the Latin-1 String
    routines unless you know what you're doing is sound; and in older Ada
    versions I remember reading long debates about how the String type might
    not be able to safely store UTF-8 on many compilers (of the era), but that
    issue was clarified by even Ada 95 IIRC.

    I just personally prefer Wide_Wide_<> to get its slightly more
    Unicode-aware string routines, but it's not the only (or even inherently
    the best) option.

    ~nytpu
    --
    Alex // nytpu
    https://nytpu.com/ - gemini://nytpu.com/ - gopher://nytpu.com/
  • From Alex // nytpu@nytpu@example.invalid to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 13:15:07 2025
    From Newsgroup: comp.lang.ada

    On 9/2/25 11:42 AM, Nicolas Paul Colin de Glocester wrote:
    > The first endnote (i.e.
    > [1]: https://wtf-8.codeberg.page/#ill-formed-utf-16
    > ) in news:10974d1$jn0e$1@dont-email.me
    > is not reproduced in
    > HTTPS://nytpu.com/gemlog/2025-09-02
    > I do not know if that is intentional.
    I just converted the footnote to an inline link since HTML supports it
    while plaintext posts don't.

    > Thanks for saying "It has an amusing large collection of quotes" on
    > that webpage.
    It is a very thorough collection, I liked it.

    (Also I didn't think to ask before posting your original message or my
    reply to my website, sorry. I'll take it down if you don't want it
    rehosted like that)

    ~nytpu
    --
    Alex // nytpu
    https://nytpu.com/ - gemini://nytpu.com/ - gopher://nytpu.com/
  • From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 21:27:39 2025
    From Newsgroup: comp.lang.ada


    On Tue, 2 Sep 2025, Keith Thompson wrote:
    "> I quote Usenet articles in a way which does not endear me to
    persons. Not everyone reacts in the same way. OC Systems asked me how
    do I draw those boxes.

    Why do you do that?"

    Such a quoting style is correlated with a possibly misguided perception
    that a language does not have a quotation mark at the beginning of each
    intermediate line. Indications that this perception is misguided are
    English documents which are supposedly from decades before Ada 83 which
    do indeed show a "“" (i.e. an English opening quotation mark) at the
    beginning of each intermediate line.

    However I am not interested enough in English and I do not have enough
    time to investigate whether or not that is the real way to quote in
    English. If one could show me an authoritative document older than the
    20th century on how to write in English which declares so, then it
    might nudge me.

    I had not originally believed that drawing rectangles for embedded
    quotations is annoying, as others used to draw so before me. However,
    unfortunately these rectangles clearly annoy Mister Thompson. Sorry!

    " It seems like a lot of effort to produce an
    annoying result."

    No effort! As I wrote to OC Systems on
    Date: Wed, 2 Jul 2008 16:34:41 -0400 (EDT)
    long after I wrote an Emacs-Lisp code for these quotations:
    "Thank you for asking. At least so far as I have noticed, you are the
    first person to have asked me that even though I have been using them
    since last year. They are largely created by an Emacs Lisp function
    which I wrote (see far below) to save me labor, [. . .]
    [. . .]
    [. . .] (Emacs Lisp is terrible, but it is commonly available on
    email servers and I was using a buggy Common Lisp program at the time
    so I thought that drawing the boxes in Emacs Lisp might serve as some
    practice for bug fixing in Common Lisp.)

    [. . .]"
  • From Nicolas Paul Colin de Glocester@Spamassassin@irrt.De to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 21:50:44 2025
    From Newsgroup: comp.lang.ada


    On Tue, 2 Sep 2025, Alex // nytpu wrote:
    "It is a very thorough collection,"

    Dear Alex,

    False. This collection does not quote all the comp.lang.ada articles
    referring to "UTF-32" or "UTF32" or "UCS-4" etc. that I read during the
    previous week, but there is largely no difference in substance in the
    ones that I read during the previous week that I decided not to quote.
    So as to have a good Subject: header, I had quite some job deciding
    which article to press the reply button on. I wanted to reply to
    Subject: Re: Supporting full Unicode
    but I did not because I did not actually quote anything from that thread.

    On Tue, 2 Sep 2025, Alex // nytpu wrote:
    "I liked it."

    Thanks and welcome.

    On Tue, 2 Sep 2025, Alex // nytpu wrote:
    "(Also I didn't think to ask before posting your original message or my
    reply to my website,"

    No need to ask so.

    On Tue, 2 Sep 2025, Alex // nytpu wrote:
    "I'll take it down if you don't want it rehosted like
    that)"

    I do not oppose rehosting it.

    Actually, though
    HTTPS://Usenet.Ada-Lang.IO/comp.lang.ada
    is excellent and better than all other comp.lang.ada archives, it
    unfortunately lacks a few non-spam posts.

    Sincères salutations.



    Nicolas Paul Colin de Glocester
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 13:02:13 2025
    From Newsgroup: comp.lang.ada

    Nicolas Paul Colin de Glocester <Spamassassin@irrt.De> writes:
    > On Tue, 2 Sep 2025, Keith Thompson wrote:
    > "> I quote Usenet articles in a way which does not endear me to
    > persons. Not everyone reacts in the same way. OC Systems asked me how
    > do I draw those boxes.
    >
    > Why do you do that?"
    >
    > Such a quoting style is correlated with a possibly misguided
    > perception that a language does not have a quotation mark at the
    > beginning of each intermediate line. Indications that this perception
    > is misguided are English documents which are supposedly from decades
    > before Ada 83 which do
    > indeed show a "“" (i.e. an English opening quotation mark) at the
    > beginning of each intermediate line.
    [...]

    I urge you to adopt the universal Usenet convention of preceding
    quoted text from previous articles with "> ". See every other
    followup article in this and other newsgroups for examples.

    Everyone understands it, virtually everyone but you uses it, and
    (almost?) every Usenet client fully supports it. I can think of no
    reason not to use it, unless your goal is for your posts to stand
    out from others in a rather unpleasant way.

    I mentioned before that your scheme seemed like a lot of effort.
    In fact the level of effort is irrelevant.

    You will of course do what you want, and I don't intend to discuss
    it further.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 22:56:12 2025
    From Newsgroup: comp.lang.ada

    On Tue, 2 Sep 2025 10:01:34 -0600, Alex // nytpu wrote:

    > ... (UCS-4 has a number of additional differences from UTF-32
    > regarding "valid encodings", namely that all valid Unicode
    > codepoints (0x0--0x10FFFF inclusive) are allowed in UCS-4 but only
    > Unicode scalar values (0x0--0xD7FF and 0xE000--0x10FFFF inclusive)
    > are valid in UTF-32) ...

    So what do those codes mean in UCS-4?

    > ... and are missing some additional information: a key detail is
    > that even with UTF-32 where each Unicode scalar value is held in one
    > array element rather than being variable-width like UTF-8/UTF-16,
    > you still can't treat them as arbitrary arrays like 7-bit ASCII
    > because a grapheme can be made up of multiple Unicode scalar values.
    > Even with ASCII characters there's the possibility of combining
    > diacritics or such that would break if you split the string between
    > them.

    This is why you have “normalization”. <https://www.unicode.org/faq/char_combmark.html>
  • From Alex // nytpu@nytpu@example.invalid to comp.lang.ada,fr.comp.lang.ada on Tue Sep 2 18:20:09 2025
    From Newsgroup: comp.lang.ada

    On 9/2/25 4:56 PM, Lawrence D’Oliveiro wrote:
    > On Tue, 2 Sep 2025 10:01:34 -0600, Alex // nytpu wrote:
    >> ... (UCS-4 has a number of additional differences from UTF-32
    >> regarding "valid encodings", namely that all valid Unicode
    >> codepoints (0x0--0x10FFFF inclusive) are allowed in UCS-4 but only
    >> Unicode scalar values (0x0--0xD7FF and 0xE000--0x10FFFF inclusive)
    >> are valid in UTF-32) ...
    >
    > So what do those codes mean in UCS-4?
    Unfortunately, here's where you get more complexity. So there's a
    difference between a valid codepoint/scalar value and an assigned scalar value. The vast majority of valid scalar values are unassigned
    (currently 154,998 characters are standardized out of 1,114,112 possible characters), but everything other than text renderers and normalizers
    should handle them like any other character to allow for at least some
    level of forwards compatibility when new characters are added.

    So in UCS-4 (or any UCS-<>) implementation, they're just treated like unassigned codepoints (that will never be assigned, not that they'd
    know); while they're completely invalid and should not be represented at
    all in UTF-32. Implementations should either error out or replace it
    with the substitution character U+FFFD in order to ensure that it's
    always working with valid UTF-32 (this is what makes the Windows
    character set and Ada's Wide_Strings messy, because they were originally standardized before UTF-16 so to keep backwards compatibility they still support unpaired surrogates so you have to sanitize it yourself to avoid making your UTF-8 encoder or the other software reading your text
    declare the encoding invalid).

    (This whole mess is because UCS-2 and the UCS-2->UTF-16 transition was a hellish mess caused by extreme lack of foresight and it's horrible they saddled everyone, including people not using UTF-16, with this crap.
    UTF-16 and its surrogate pairs is also what's responsible for the
    maximum scalar value being 0x0010_FFFF instead of 0xFFFF_FFFF even
    though UTF-32 and UTF-8 and even goddamn UTF-EBCDIC and the weird
    encoding the Chinese government came up with can all trivially encode
    full 32-bit values)

    > This is why you have “normalization”.
    > <https://www.unicode.org/faq/char_combmark.html>
    Still can't just arbitrarily split strings without being careful, there
    are characters that are inherently multi-codepoint (e.g. most emoji
    among others) without the possibility to be reduced to a single
    codepoint like some can. Really, unfortunately, with Unicode you really
    just shouldn't try to make use of an "array" of any fixed-size quantity because with multi-codepoint graphemes and combining characters and such
    it's just not possible.

    Plus conveniently Ada doesn't have routines for normalization, but can't
    hold that against it since neither does any other programming language
    because the lookup tables required are like 20 MiB even when optimized
    for space. (Everyone says to just link to libicu, which also lets you
    get out of needing to keep your program's Unicode tables up-to-date when
    a new Unicode version releases)

    Plus you shouldn't normalize text except when performing actions like
    substring matching, equality tests, or sorting---and even when you
    normalize for those, *when possible* you should store the
    causes lots of semantic information loss because many distinct
    characters are mapped onto one (e.g. non-breaking spaces and zero-width
    spaces are mapped to plain space, mathematical font variants and
    superscripts are mapped to the plain Latin/Greek versions, many
    different languages' characters are mapped to one if the characters
    happen to be visually similar, etc. etc.).

    ~nytpu
    --
    Alex // nytpu
    https://nytpu.com/ - gemini://nytpu.com/ - gopher://nytpu.com/
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.ada,fr.comp.lang.ada on Wed Sep 3 04:10:47 2025
    From Newsgroup: comp.lang.ada

    On Tue, 2 Sep 2025 18:20:09 -0600, Alex // nytpu wrote:

    > (This whole mess is because UCS-2 and the UCS-2->UTF-16 transition
    > was a hellish mess caused by extreme lack of foresight and it's
    > horrible they saddled everyone, including people not using UTF-16,
    > with this crap.

    I gather the basic problem was that Unicode was originally going to be a fixed-length 16-bit code, and that was that. And so early adopters
    (Windows NT and Java among them), built UCS-2 right into their DNA.

    Until Unicode 2.0, I believe it was, where they went “on second thought, let’s go beyond our original brief and start including all kinds of other things as well” ... and UCS-2 had to become UTF-16 ...

    > UTF-16 and its surrogate pairs is also what's responsible for the
    > maximum scalar value being 0x0010_FFFF instead of 0xFFFF_FFFF even
    > though UTF-32 and UTF-8 and even goddamn UTF-EBCDIC and the weird
    > encoding the Chinese government came up with can all trivially encode
    > full 32-bit values)

    I wondered about that limit ...

    > Plus conveniently Ada doesn't have routines for normalization, but
    > can't hold that against it since neither does any other programming
    > language because the lookup tables required are like 20 MiB even when
    > optimized for space.

    I think Python has them
    <https://docs.python.org/3/library/unicodedata.html>. But then, on
    platforms with decent package management, that data can be shared with
    other installed packages that require it as well.

    > Plus you shouldn't normalize text except when performing actions like
    > substring matching, equality tests, or sorting---and even when you
    > normalize for those, *when possible* you should store the

    I thought it was always safe to store decomposed versions of everything.