May 30, 2006
The documentation says: "\uFFFE and \uFFFF are considered valid by this function, as they are permitted for internal use by an application, but they are not allowed for interchange by the Unicode standard."
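(That quote matches the documentation of std.utf.isValidDchar; assuming that is the function in question, a minimal check of the documented behaviour:)

import std.utf;

void main()
{
    // Per the documentation, the noncharacters U+FFFE and U+FFFF
    // are accepted as valid dchar values.
    assert(isValidDchar(cast(dchar) 0xFFFE));
    assert(isValidDchar(cast(dchar) 0xFFFF));
}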

But this permissiveness does not seem to carry over to the rest of the language and library:


wchar foo = '\uFFFF';
// DMD output: invalid UTF character \U0000ffff


import std.utf;

wchar[] s = "foobar";
wchar ch = 0xFFFF; // accepted without complaint
s ~= ch;
std.utf.validate(s); // throws: Error: illegal UTF-16 value


Furthermore, if possible, perhaps char, wchar and dchar should all have the same illegal value for their .init, so that (dchar.init == char.init) holds, along with the other combinations.
June 01, 2006
Chris Miller schrieb am 2006-05-30:

<snip>

> Furthermore, if possible, perhaps char, wchar and dchar should all have the same illegal value for their .init, so that (dchar.init == char.init) holds, along with the other combinations.

That isn't technically possible: any value that fits in a char is at most 0xFF, and every value in that range is a valid wchar, so no single value can be illegal in all three types.

"\u0000" -> isn't illegal
"\u00FF" -> isn't illegal
"\uFFFF" -> is illegal

0xFF (char) -> is illegal
0xFF (wchar) -> isn't illegal

0x0 (char) -> isn't illegal
0x0 (wchar) -> isn't illegal
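
For reference, the defaults can be inspected directly (a minimal sketch; per the D spec, char.init is 0xFF while wchar.init and dchar.init are 0xFFFF):

import std.stdio;

void main()
{
    writefln("char.init  = 0x%02X", cast(ubyte)  char.init);
    writefln("wchar.init = 0x%04X", cast(ushort) wchar.init);
    writefln("dchar.init = 0x%08X", cast(uint)   dchar.init);
    // char.init (0xFF) cannot equal wchar.init (0xFFFF): 0xFF is a
    // valid wchar, and 0xFFFF does not fit in a char.
}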

Thomas