September 10, 2022
On 9/10/2022 12:15 PM, Patrick Schluter wrote:
> 8080/8085/Z80 opcodes when expressed in octal are much easier to handle. 

I've never seen 8080/Z80 opcodes expressed in octal. I know that the modregrm byte for the x86 is 2,3,3, but I deal with that by using an inline function to manipulate it. It never occurred to me to use octal. Interesting.

ubyte modregrm (uint m, uint r, uint rm)
    { return cast(ubyte)((m << 6) | (r << 3) | rm); }

https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/code_x86.d#L508

Of course, bit fields might be better :-)
September 10, 2022
On 9/10/2022 11:44 AM, mw wrote:
> Second, why not provide the above "trivial parser" into std lib (so nobody need to reinvent the wheel)

Indeed, why not! Want to give it a go?

September 10, 2022
On 9/10/2022 12:05 PM, Don Allen wrote:
> But if I were starting with an empty editor buffer, would I choose D? Especially to write a garden-variety application rather than bashing hardware registers? Perhaps not. Some of that would be simply that a higher-level language would be more suitable, e.g., Haskell or Scheme, both personal favorites. But some of that would be due to the hangover from C and C++ that D still exhibits. My opinion: C was a bad language in 1970 and it is horrifying today. C++? Words fail me, unless they are scatological. I think the more D can detach itself from its C heritage and emphasize modern programming language practice (in other words, take advantage of what we have learned in the last 52 years), the better.

What's amusing is when I embarked on implementing ImportC, I rediscovered all the things I disliked about it that had been fixed in D.

September 10, 2022
On Saturday, 10 September 2022 at 19:53:11 UTC, Walter Bright wrote:
> Of course, bit fields might be better :-)

Not for this! At least not the godawful C flavor of bitfields.

As you know, the actual layout is implementation defined. Which means it is of extremely limited value for hardware interop.

This is one thing the Phobos template at least defines (though it isn't great either).

What I'd like to see for D bitfields is a *good* system that defines these things in a useful manner. C's approach is OK for packing bits into available space, to make private structs smaller. But its undefined layout makes it unsuitable for any kind of public API or hardware matching.
September 10, 2022
On Saturday, 10 September 2022 at 17:48:15 UTC, Walter Bright wrote:
> What's ironic about this discussion is the exact opposite happened with D bitfields.

There's no irony; the two situations are not comparable.

With bitfields, the current situation is we have std.bitmanip in Phobos, which has its own simple layout scheme, doesn't break meta programming, but has ugly syntax.

The proposal was to make D additionally inherit C's bitfields, which has C's platform-dependent layout scheme, breaks meta programming, but has nice syntax.

With binary literals, the current situation is we have a perfectly fine implementation in the lexer, and the proposal is to replace it with a Phobos template that does the exact same thing, but with a worse user experience.

What is ironic though, is that you were against deprecating q"EOS text EOS" strings, dismissing complexity concerns and siding with user experience. Now you're doing the reverse.

September 10, 2022
On Saturday, 10 September 2022 at 17:48:15 UTC, Walter Bright wrote:
> What's ironic about this discussion is the exact opposite happened with D bitfields.

C bitfields are legitimately *awful*. That discussion was about that particular definition (or lack thereof), not the concept as a whole.

Just like how C's octal literals suck but some octal literals are ok, C's bitfields suck but other bitfields could do well.

This was actually going to be the topic of my blog this week but I never got around to finishing it. The basic idea I'd like to see though is that alignment, size, and layout are all defined in a separately reflectable definition.
September 10, 2022
On Saturday, 10 September 2022 at 19:56:14 UTC, Walter Bright wrote:
> On 9/10/2022 11:44 AM, mw wrote:
>> Second, why not provide the above "trivial parser" into std lib (so nobody need to reinvent the wheel)
>
> Indeed, why not! Want to give it a go?


Not me.

I'm fine with: (e.g. uint32)

0b0000_0110_0110_0110_0110_0110_0110_0110

I don't see any advantage of:

"...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."

over the binary literal, nor that it's worth the effort.

And I don't think the latter is more readable than the former.


What I'm saying is that: if you insist on removing binary literals (I hope you don't), then [why not provide ...] the migration tool.
September 10, 2022
On Saturday, 10 September 2022 at 20:12:05 UTC, mw wrote:
> I'm fine with: (e.g. uint32)
>

Actually I just realized this is a good example, so I've aligned the two literals here:

```
0b0000_0110_0110_0110_0110_0110_0110_0110
 "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."

```

Can we have a poll here? Which is more readable for a uint32?

1) the binary literal
2) the string literal


And I have another argument for *NOT* deprecating binary literals: consider the int binary literal => string repr => int round trip. Many people (including me) sometimes write code generators / formatters, so there's the builtin "%b" format char:

```
  writeln(format("%b", 0xcc));  // output 11001100

```

For the round trip, it can easily be converted back to an int again.

Now, how much more work do I need to do to make those "XX..XX.." strings work in such a round trip?!

September 10, 2022
On Saturday, 10 September 2022 at 20:43:29 UTC, mw wrote:
> ...
> 0b0000_0110_0110_0110_0110_0110_0110_0110
>  "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."
> ...

For me in this example, the former (Binary representation) without doubt is better.

Matheus.


September 10, 2022
On Saturday, 10 September 2022 at 20:12:05 UTC, mw wrote:
> On Saturday, 10 September 2022 at 19:56:14 UTC, Walter Bright wrote:
>> On 9/10/2022 11:44 AM, mw wrote:
>>> Second, why not provide the above "trivial parser" into std lib (so nobody need to reinvent the wheel)
>>
>> Indeed, why not! Want to give it a go?
>
> Not me.
>
> I'm fine with: (e.g. uint32)
>
> 0b0000_0110_0110_0110_0110_0110_0110_0110
>
> I don't see any advantage of:
>
> "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."
>
> over the binary literals, and it worth the effort.
>
> And I don't think the latter is more readable than the former.
>
> What I'm saying is that: if you insist on removing binary (I hope you not), then [why not provide ...] the migration tool.

/// Parse a '.'/'x' bit-picture into an integer ('.' = 0, 'x'/'X' = 1);
/// whitespace and '_' are ignored, any other character throws.
ulong parse(const char[] data){
    ulong result = 0;
    foreach(ch; data){
        switch(ch){
            case '.':
                result <<= 1;
                break;
            case 'x': case 'X':
                result <<= 1;
                result |= 1;
                break;
            case ' ': case '\t': case '\n': case '\r':
            case '_':
                continue;
            default:
                throw new Exception("oops");
        }
    }
    return result;
}

static assert("...".parse == 0b000);
static assert("..x".parse == 0b001);
static assert(".x.".parse == 0b010);
static assert(".xx".parse == 0b011);
static assert("x..".parse == 0b100);
static assert("x.x".parse == 0b101);
static assert("xx.".parse == 0b110);
static assert("xxx".parse == 0b111);
static assert("
        xxx
        x.x
        xxx
        ".parse == 0b111_101_111);
static assert("x.x.__x__.x.x".parse == 0b1010__1__0101);
private bool does_throw(const char[] data){
    bool caught = false;
    try { const _ = data.parse; }
    catch (Exception e){ caught = true; }
    return caught;
}
static assert(does_throw("x0x"));
static assert(does_throw("1010"));