Thread overview
How to print unicode characters (no library)?
Dec 26, 2021 - rempas
Dec 26, 2021 - Adam Ruppe
Dec 26, 2021 - max haughton
Dec 26, 2021 - H. S. Teoh
Dec 27, 2021 - rempas
Dec 27, 2021 - Kagamin
Dec 27, 2021 - rempas
Dec 27, 2021 - Kagamin
Dec 27, 2021 - rempas
Dec 27, 2021 - Adam D Ruppe
Dec 27, 2021 - H. S. Teoh
Dec 27, 2021 - Adam D Ruppe
Dec 27, 2021 - H. S. Teoh
Dec 28, 2021 - rempas
Dec 28, 2021 - Adam D Ruppe
Dec 28, 2021 - rempas
Dec 27, 2021 - Kagamin
Dec 28, 2021 - rempas
Dec 27, 2021 - Adam D Ruppe
Dec 28, 2021 - rempas
Dec 28, 2021 - ag0aep6g
Dec 28, 2021 - Adam D Ruppe
Dec 27, 2021 - Era Scarecrow
Dec 28, 2021 - rempas
Dec 28, 2021 - Adam D Ruppe
Dec 28, 2021 - rempas
Dec 28, 2021 - rempas
Dec 28, 2021 - Patrick Schluter
December 26, 2021

Hi! I'm trying to print some Unicode characters using UTF-8 (char), UTF-16 (wchar) and UTF-32 (dchar). I want to do this without using any library by using the "write" system call directly with 64-bit Linux. Only the UTF-8 solution seems to be working as expected. The other solutions will not print the Unicode characters (I'm using an emoji as my example). Another thing I noticed is the size of the strings. From what I know (and tell me if I'm mistaken), UTF-16 and UTF-32 have fixed size lengths for their characters. UTF-16 uses 2 bytes (16 bits) and UTF-32 uses 4 bytes (32 bits) without treating any character specially. This doesn't seem to be the case for me, however. Consider my code:

import core.stdc.stdio;

// Raw Linux x86-64 exit(2): syscall number 60, status in rdi.
void exit(ulong code) {
  asm {
    "syscall"
    : : "a" (60), "D" (code);
  }
}

// Raw Linux x86-64 write(2): syscall number 1, fd in rdi, buffer in rsi,
// length in rdx. The syscall instruction clobbers rcx and r11.
void write(T)(int fd, const T buf, ulong len) {
  asm {
    "syscall"
    : : "a" (1), "D" (fd), "S" (buf), "d" (len)
    : "memory", "rcx", "r11";
  }
}

extern (C) void main() {
  string  utf8s  = "Hello 😂\n";
  write(1, utf8s.ptr, utf8s.length);

  wstring utf16s = "Hello 😂\n"w;
  write(1, utf16s.ptr, utf16s.length * 2);

  dstring utf32s = "Hello 😂\n"d;
  write(1, utf32s.ptr, utf32s.length * 4);

  printf("\nutf8s.length = %lu\nutf16s.length = %lu\nutf32s.length = %lu\n",
      utf8s.length, utf16s.length, utf32s.length);

  exit(0);
}

And its output:

Hello 😂
Hello =��
Hello �

utf8s.length = 11
utf16s.length = 9
utf32s.length = 8

Now the UTF-8 string will report 11 characters and print them normally. So it treats every character that is 127 or less as if it were an ASCII character and uses 1 byte for it. Characters above that range are either 2-byte or 4-byte Unicode characters. So it works as I expected based on what I've read/seen for UTF-8 (now I understand why everyone loves it, lol :P)!

Now what about the other two? I was expecting UTF-16 to report 16 characters and UTF-32 to report 32 characters. Also, why are the characters not shown as expected? Isn't the "write" system call just writing a sequence of characters without caring which they are? So if I just give it the right length, shouldn't it just work? I'm pretty sure it doesn't work the way I expect it to. Does anyone have an idea?

December 26, 2021

On Sunday, 26 December 2021 at 20:50:39 UTC, rempas wrote:

> I want to do this without using any library by using the "write" system call directly with 64-bit Linux.

write just transfers a sequence of bytes. It doesn't know nor care what they represent - that's for the receiving end to figure out.

> From what I know (and tell me if I'm mistaken), UTF-16 and UTF-32 have fixed size lengths for their characters.

You are mistaken. There are several exceptions: utf-16 can come in pairs, and even utf-32 has multiple "characters" that combine into one thing on screen.

I prefer to think of a string as a little virtual machine that can be run to produce output rather than actually being "characters". Even with plain ascii, consider the backspace "character" - it is more an instruction to go back than it is a thing that is displayed on its own.
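
A quick way to see both of those (a minimal sketch; the \u/\U escapes spell out the code points explicitly, and the assertions should pass with any D compiler):

void main() {
  // One code point above U+FFFF: a single dchar, but a surrogate
  // pair (two code units) when encoded as UTF-16.
  dstring emoji32 = "\U0001F602"d;  // the crying-laughing emoji
  wstring emoji16 = "\U0001F602"w;
  assert(emoji32.length == 1);
  assert(emoji16.length == 2);

  // Two code points that combine on screen: 'e' followed by a
  // combining acute accent usually displays as one "é", yet it is
  // two separate dchars.
  dstring combined = "e\u0301"d;
  assert(combined.length == 2);
}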

> Now the UTF-8 string will report 11 characters and print them normally.

This is because the receiving program treats them as utf-8 and runs it accordingly. Not all terminals will necessarily do this, and programs you pipe to can do it very differently.

> Now what about the other two? I was expecting UTF-16 to report 16 characters and UTF-32 to report 32 characters.

The [w|d|]string.length function returns the number of elements in there, which is bytes for string, 16 bit elements for wstring (so bytes / 2), or 32 bit elements for dstring (so bytes / 4).

This is not necessarily related to the number of characters displayed.
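
For instance, a minimal sketch (it uses writeln from Phobos just to print the counts, and casts the arrays to raw bytes to show the relationship):

import std.stdio : writeln;

void main() {
  string  s = "Hello \U0001F602\n";   // same emoji as in your example
  wstring w = "Hello \U0001F602\n"w;
  dstring d = "Hello \U0001F602\n"d;

  // .length counts code units, not characters on screen:
  writeln(s.length, " ", w.length, " ", d.length);      // 11 9 8

  // The byte counts differ by the size of one code unit:
  writeln((cast(const(ubyte)[]) s).length);   // 11 (1 byte per unit)
  writeln((cast(const(ubyte)[]) w).length);   // 18 (2 bytes per unit)
  writeln((cast(const(ubyte)[]) d).length);   // 32 (4 bytes per unit)
}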

> Isn't the "write" system call just writing a sequence of characters without caring which they are?

yes, it just passes bytes through. It doesn't know they are supposed to be characters...

December 26, 2021

On Sunday, 26 December 2021 at 21:22:42 UTC, Adam Ruppe wrote:

> [...]
>
> I prefer to think of a string as a little virtual machine that can be run to produce output rather than actually being "characters". Even with plain ascii, consider the backspace "character" - it is more an instruction to go back than it is a thing that is displayed on its own.
>
> [...]

I think that mental model is pretty good actually. Maybe a more specific idea exists, but this virtual machine concept does actually explain to the new programmer to expect dragons - or at least that the days of plain ASCII are long gone (and never happened, e.g. backspace as you say)

December 26, 2021
On Sun, Dec 26, 2021 at 11:45:25PM +0000, max haughton via Digitalmars-d-learn wrote: [...]
> I think that mental model is pretty good actually. Maybe a more specific idea exists, but this virtual machine concept does actually explain to the new programmer to expect dragons - or at least that the days of plain ASCII are long gone (and never happened, e.g. backspace as you say)

In some Unix terminals, backspace + '_' causes a character to be underlined. So it's really a mini VM, not just pure data. So yeah, the good ole ASCII days never happened. :-D


T

-- 
This sentence is false.
December 27, 2021

On Sunday, 26 December 2021 at 21:22:42 UTC, Adam Ruppe wrote:

> write just transfers a sequence of bytes. It doesn't know nor care what they represent - that's for the receiving end to figure out.

Oh, so it was as I expected :P

> You are mistaken. There are several exceptions: utf-16 can come in pairs, and even utf-32 has multiple "characters" that combine into one thing on screen.

Oh yeah. About that, I was never given a demonstration of how it works, so I forgot about it. I saw that in Unicode you can combine some code points to get different results, but I never saw how that happens in practice. If you combine two code points, you get a different glyph. So yeah, that's one thing I don't understand...

> I prefer to think of a string as a little virtual machine that can be run to produce output rather than actually being "characters". Even with plain ascii, consider the backspace "character" - it is more an instruction to go back than it is a thing that is displayed on its own.

Yes, that's a great way of seeing it. I suppose that this all happens under the hood and is OS-specific, so we have to know how the OS we are working with works under the hood to fully understand how this happens. Also, the idea of some "characters" being "instructions" is very interesting. Now, from what I've seen, non-printable characters are always instructions (except for the "space" character), so another way to think about this is that every character carries one instruction: either to get written (displayed) to the output, or to make some other modification to the text without being displayed itself as a character. Of course, I don't suppose that's what's actually happening under the hood, but it's an interesting way to describe it.

> This is because the receiving program treats them as utf-8 and runs it accordingly. Not all terminals will necessarily do this, and programs you pipe to can do it very differently.

That's pretty interesting, actually. Terminals (and don't forget shells) are programs themselves, so they choose the encoding themselves. However, do you know what we do for cross-compatibility then? Because this sounds like a HUGE mess for real-world applications.

> The [w|d|]string.length function returns the number of elements in there, which is bytes for string, 16 bit elements for wstring (so bytes / 2), or 32 bit elements for dstring (so bytes / 4).
>
> This is not necessarily related to the number of characters displayed.

I don't understand that. Based on your calculations, the results should have been different. Also, how are the numbers fixed? Like you said, the number of bytes in each encoding is not always the same for every character. Even if it were fixed, that would mean 2 bytes for each UTF-16 character and 4 bytes for each UTF-32 character, so the numbers still don't make sense to me. So the number in the "length" property should have been the same for every encoding, or at least the same for UTF-16 and UTF-32. So are the sizes of every character fixed or not?

Damn, you guys should get paid for the help you're giving in this forum!

December 27, 2021
On Sunday, 26 December 2021 at 23:57:47 UTC, H. S. Teoh wrote:
> In some Unix terminals, backspace + '_' causes a character to be underlined. So it's really a mini VM, not just pure data. So yeah, the good ole ASCII days never happened. :-D
How can you do that? I'm trying to print the codes for them, but it doesn't work. Or can you not choose to have this behavior, and only some terminals support it?
December 27, 2021

D strings are plain arrays without any text-specific logic; the element is called a code unit, which has a fixed size, and the array length specifies how many elements are in the array. This model is most adequate for memory correctness, i.e. it shows what takes how much memory and where it will fit. D doesn't impose a fixed interpretation like characters or code points, because there are many of them and no single one is the correct one; you need one or another in different situations. The Linux console is one example of such a situation: it doesn't accept characters or code points, it accepts utf8 code units, and using anything else is an error.
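
(For reference, those fixed code unit sizes are just the sizes of D's character types; a tiny sketch:)

// Code units have fixed sizes; how many of them make up one
// character on screen is what varies.
static assert(char.sizeof  == 1); // UTF-8  code unit
static assert(wchar.sizeof == 2); // UTF-16 code unit
static assert(dchar.sizeof == 4); // UTF-32 code unit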

December 27, 2021
On Monday, 27 December 2021 at 07:29:05 UTC, rempas wrote:
> How can you do that? I'm trying to print the codes for them, but it doesn't work. Or can you not choose to have this behavior, and only some terminals support it?

Try it on https://en.wikipedia.org/wiki/Teletype_Model_33
December 27, 2021

On Monday, 27 December 2021 at 09:29:38 UTC, Kagamin wrote:

> D strings are plain arrays without any text-specific logic; the element is called a code unit, which has a fixed size, and the array length specifies how many elements are in the array. This model is most adequate for memory correctness, i.e. it shows what takes how much memory and where it will fit. D doesn't impose a fixed interpretation like characters or code points, because there are many of them and no single one is the correct one; you need one or another in different situations. The Linux console is one example of such a situation: it doesn't accept characters or code points, it accepts utf8 code units, and using anything else is an error.

So should I just use UTF-8 for Linux? What about other operating systems? I suppose Unix-based OSs (and maybe macOS as well, if I'm lucky) work the same way. But what about Windows? Unfortunately, I have to support that OS too with my library, so I should know. If you know and can tell me, of course...

December 27, 2021
On Monday, 27 December 2021 at 07:12:24 UTC, rempas wrote:
> Oh yeah. About that, I was never given a demonstration of how it works, so I forgot about it. I saw that in Unicode you can combine some code points to get different results, but I never saw how that happens in practice.

Emoji are one example (the one you posted is a single code point, but it takes two UTF-16 code units, as we'll see below). Some other common ones: accented letters will SOMETIMES - there are exceptions - be created by the letter followed by a combining accent mark.

Some of those complicated emojis are several points with optional changes. Like it might be "woman" followed by "skin tone 2" 👩🏽. Some of them are "dancing" followed by "skin tone 0" followed by "male" and such.

So it displays as one thing, but it is composed of 2 or more code points, and each code point might be composed of several code units, depending on the encoding.

Again, think of it more as a little virtual machine building up a thing. A lot of these are actually based on combinations of old typewriters and old state machine terminal hardware.

Like the reason "a" followed by "backspace" followed by "_" - SOMETIMES, it depends on the receiving program, this isn't a unicode thing - might sometimes be an underlined a because of think about typing that on a typewriter with a piece of paper.

The "a" gets stamped on the paper. Backspace just moves back, but since the "a" is already on the paper, it isn't going to be erased. So when you type the _, it gets stamped on the paper along with the a. So some programs emulate that concept.

The emoji thing is the same basic idea (though it doesn't use backspace): start by drawing a woman, then modify it with a skin color. Or start by drawing a person, then draw another person, then add a skin color, then make them female, and you have a family emoji. Impossible to do that by stamping paper, but a little computer VM can understand this and build up the glyph.
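
Here is a small sketch of the woman + skin tone example (the \U escapes spell out the two code points; whether your font shows one merged glyph is up to the renderer):

import std.stdio : writefln;

void main() {
  // U+1F469 (woman) followed by U+1F3FD (a skin tone modifier):
  // two code points that render as a single glyph on most systems.
  dstring woman = "\U0001F469\U0001F3FD"d;
  assert(woman.length == 2);   // 2 UTF-32 code units = 2 code points

  foreach (dchar cp; woman)
    writefln("U+%05X", cast(uint) cp);
}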

> Yes, that's a great way of seeing it. I suppose that this all happens under the hood and is OS-specific, so we have to know how the OS we are working with works under the hood to fully understand how this happens.

Well, it isn't necessarily the OS; any program can do its own thing. Of course, the OS can define something: Windows, for example, defines its text functions as UTF-16, or you can use a translation layer which does its own thing for a great many functions. But applications might still treat it differently.

For example, the xterm terminal emulator can be configured to use utf-8 or something else. It can be configured to interpret them in a way that emulates certain old terminals, including ones that work like a printer or the state machine things.

> However, do you know what we do for cross-compatibility then? Because this sounds like a HUGE mess for real-world applications.

Yeah, it is a complete mess, especially on Linux. But even on Windows, where Microsoft standardized on utf-16 for text functions, there are still weird exceptions. Like writing to the console vs piping to an application can be different. If you've ever written a single character to a windows pipe and seen different results than if you wrote two, now you get an idea why... it is trying to auto-detect if it is two-byte characters or one-byte streams.

I wrote a little bit about this on my public blog: http://dpldocs.info/this-week-in-d/Blog.Posted_2019_11_25.html

Or view the source of my terminal.d to see some of the "fun" in decoding all this nonsense.

http://arsd-official.dpldocs.info/arsd.terminal.html

The module there does a lot more than just the basics, but still, most of the top half of the file is all about this stuff. Mouse input might be encoded as utf characters, then you gotta change the mode and check various detection tricks. Ugh.

> I don't understand that. Based on your calculations, the results should have been different. Also, how are the numbers fixed? Like you said, the number of bytes in each encoding is not always the same for every character. Even if it were fixed, that would mean 2 bytes for each UTF-16 character and 4 bytes for each UTF-32 character, so the numbers still don't make sense to me.

They're not characters, they're code points. Remember, multiple code points can be combined to form one character on screen.

Let's look at:

"Hello 😂\n";

This is actually a series of 8 code points:

H, e, l, l, o, <space>, <crying face>, <new line>

Those code points can themselves be encoded in three different ways:

dstring: encodes each code point as a single element. That's why the dstring's length there is 8. Each *element* of it, though, is 32 bits, which you can see if you cast it to ubyte[]: the length in bytes is 4x the length of the dstring, but dstring.length returns the number of units, not the number of bytes.

So here one unit = one point, but remember each *point* is NOT necessarily anything you see on screen. It represents just one complete instruction to the VM.

wstring: encodes each code point as one or two elements. If its value is in the lower half of the space (< 64k about), it gets one element. If it is in the upper half (> 64k) it gets two elements, one just saying "the next element should be combined with this one".

That's why its length is 9. It kinda looks like:

H, e, l, l, o, <space>, <next element is a point in the upper half of the space>, <crying face>, <new line>

That "next element" unit is an additional element that is processed to figure out which points we get (which, again, are then feed into the VM thingy to be executed to actually produce something on string).

So when you see that "next element is a point..." thing, it puts that in a buffer and pulls another element off the stream to produce the next VM instruction. After it comes in, that instruction gets executed and added to the next buffer.

Each element in this array is 16 bits, meaning if you cast it to ubyte[], you'll see the length double.

Finally, there's "string", which is utf-8, meaning each element is 8 bits, but again, there is a buffer you need to build up to get the code points you feed into that VM.

Like we saw with 16 bits, there are now additional elements that tell you when a thing spills over. Any value < 128 gets a single element, then the next set gets two elements that you do some bit shifts and bitwise-ors on to recombine, then another set gets three elements, and there's even a set with four elements. The first element tells you how many more elements you need to build up the point buffer.

H, e, l, l, o, <space>, <next point is combined by these bits PLUS THREE MORE elements>, <this is a work-in-progress element and needs two more>, <this is a work-in-progress element and needs one more>, <this is the final work-in-progress element>, <new line>


And now you see why it came to length == 11 - that emoji needed so many bits to build up its code point that it had to be spread across 4 bytes.

Notice how the first element told you how many more elements are coming, and each of the rest is marked as a continuation. This is encoded into the bit patterns and is part of why it took 4 elements instead of just three; there's some error-checking redundancy in there. This is a nice part of the design, allowing you to validate a utf-8 stream more reliably and even recover if you jumped into the middle of a multi-byte sequence.
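
A rough sketch of that recombining step for the 4-element case (hand-rolled and unvalidated, just to show the bit layout; the bytes are the UTF-8 encoding of the emoji above):

void main() {
  // The 4-byte UTF-8 pattern: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
  immutable ubyte[4] seq = [0xF0, 0x9F, 0x98, 0x82]; // U+1F602

  uint cp = (seq[0] & 0x07) << 18   // 3 payload bits from the lead byte
          | (seq[1] & 0x3F) << 12   // 6 payload bits from each
          | (seq[2] & 0x3F) << 6    //   continuation byte
          | (seq[3] & 0x3F);

  assert(cp == 0x1F602);            // back to the single code point
}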

But anyway, that's kinda an implementation detail - the big point here is just that each element of the string array holds pieces that need to be recombined to make the unicode code points. Then, the unicode code points are instructions that are fed into a VM kind of thing to actually produce output, and this will sometimes vary depending on the target program doing the interpreting.

So the layers are:

1) bytes build up into string/wstring/dstring array elements (aka "code units")
2) those code unit element arrays are decoded into code point instructions
3) those code point instructions are run to produce output.

(or of course when you get to a human reader, they can interpret it differently too but obviously human language is a whole other mess lol)
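
A minimal sketch of layers 1 and 2 (foreach with a dchar loop variable does the code-unit-to-code-point decoding for you; Phobos is used only to print the values):

import std.stdio : writefln;

void main() {
  string s = "Hello \U0001F602\n";  // layer 1: 11 UTF-8 code units

  // Layer 2: decode the code units back into the 8 code points.
  // A wstring or dstring version decodes to the same 8 points.
  foreach (dchar cp; s)
    writefln("U+%06X", cast(uint) cp);

  // Layer 3 is whatever the receiving program does with those
  // instructions - the string itself doesn't control that.
}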