September 10, 2022
On Saturday, 10 September 2022 at 17:05:23 UTC, Walter Bright wrote:

>
> Yes, I made a mistake. There was a decision to remove it, but it just got deferred and then overlooked.

So, are you going to ignore the almost unanimous feedback from the community again and remove the binary literals anyway?



September 10, 2022
On 9/10/2022 9:18 AM, Timon Gehr wrote:
> o!422 is such a hack,

How so?

> and it does not even (always) work.

You're referring to the case where it has too many digits, so it has to be written as:

  o!"422"

It would be interesting to see a proposal to improve this sort of thing.

> Binary literals are, e.g., a GNU C extension, and they are in C++14, so clearly people see a use for them.

I implemented them back in the 80s as an extension, and nobody commented on them. I never found a use. As for seeing a use, seeing a use for them and actually using them are different things.

D was originally embeddable in HTML; the compiler was able to extract it from HTML files. I saw a use for it, but never found one. It was dropped. Nobody commented on that, either.


>> Let's simplify D.
> I really don't understand why you seem to think removing simple and convenient lexer features that behave exactly as expected in favor of overengineered Phobos templates that have weird corner cases and are orders of magnitude slower to compile is a meaningful simplification of D. It utterly makes no sense to me.

The idea is to have a simple core language, and have a way that users can add features via the library. For example, user-defined literals are a valuable feature. C++ added specific syntax for them. D has user-defined literals as fallout from the template syntax.
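To illustrate the mechanism, here is a minimal sketch of an octal "user-defined literal" done purely with templates and CTFE. This is not the actual Phobos std.conv.octal code; the names parseOctal and o are hypothetical:

    // Minimal sketch (hypothetical names, not the Phobos implementation).
    // An ordinary function, evaluated at compile time via CTFE:
    ulong parseOctal(string digits)
    {
        ulong value = 0;
        foreach (c; digits)
        {
            assert('0' <= c && c <= '7', "not an octal digit");
            value = value * 8 + (c - '0');
        }
        return value;
    }

    // An eponymous enum template gives the literal-like syntax o!"422":
    enum o(string digits) = parseOctal(digits);

    static assert(o!"422" == 274); // 4*64 + 2*8 + 2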

User-defined literals in D are indeed an order of magnitude slower than builtin ones. But that only matters if one is using a lot of them. Like having files filled with them. How often does that happen?

The Phobos implementation of octal is indeed overengineered, as I mentioned in another post here. Phobos in general has been overengineered, but that's not a fault of the language. I suppose I should submit a PR to fix the octal template implementation.


> Let's simplify D in a way that actually positively impacts the user experience,
> for example by getting rid of weird corner cases and arbitrary limitations. Of
> course, that requires actual design work and sometimes even nontrivial compiler
> improvements, which is a bit harder than just deleting a few lines of code in
> the lexer and then adding ten times that amount to Phobos.

We do this all the time.
September 10, 2022
What's ironic about this discussion is the exact opposite happened with D bitfields.

After implementing it for C, I realized that we could add bitfields to D by simply turning the existing implementation on. The code was already there, it was already supported and debugged.

The other side preferred a template solution that didn't have quite the simple syntax that the C solution had, whereas I thought bitfields would be used enough to justify the simpler builtin syntax.

Another irony was that in turning it on for D, it exposed a serious bug that the extensive tests I wrote for the C side had missed.
September 11, 2022
A lot of us kept quiet about bitfields being turned on.

Honestly, if we are already paying the price of having the implementation in the compiler, turning them on (with a DIP of course) is brilliant.

Speaking of irony and tests missing things, I'm reminded of the fact that Unicode in symbols is currently not being tested with export, which, if it were tested, would result in linker errors. How fun!
September 10, 2022
On 9/10/2022 12:03 AM, Daniel N wrote:
> This has an obvious visual meaning but in hex it would be hard to read.
> 0b111111
> 0b100001
> 0b100001
> 0b111111

That was the original motivation back in the 80s. But I've since realized that this works much better:

  XXXXXX
  X....X
  X....X
  XXXXXX

Wrap it in a string literal, and write a simple parser to translate it to binary data.

Like what I did here:

https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/disasm86.d#L3645

which makes it really easy to add test cases to the disassembler. Well worth the extra effort to make a tiny parser for it.
September 10, 2022
On 9/10/2022 1:19 AM, Max Samukha wrote:
> Bit flags are easier to read as binary grouped in nibbles. For example:
> 
> enum ubyte[16] ledDigits =
>      [
>          0b0011_1111, // 0
>          0b0000_0110, // 1
>          0b0101_1011, // 2
>          0b0100_1111, // 3
>          0b0110_0110, // 4
>          0b0110_1101, // 5
>          0b0111_1101, // 6
>          0b0000_0111, // 7
>          0b0111_1111, // 8
>          0b0110_1111, // 9
>          0b0111_0111, // A
>          0b0111_1100, // b
>          0b0011_1001, // C
>          0b0101_1110, // d
>          0b0111_1001, // E
>          0b0111_0001, // F
>      ];
> 
> Those are the bit masks for a 7-segment display. Of course, you could define them by or'ing enum flags or translating into hex, or use a template, but that would be annoying.

Interesting that you brought up 7-segment display data, as I've actually written that stuff for embedded systems, and once again as a demonstration for the ABEL programming language.

A couple things about it:

1. The visual representation of the binary doesn't have any correlation with how the display looks.

2. It's a one-off. Once it's written, it never changes.

3. Writing it in hex isn't any difficulty for 10 entries.

A more compelling example would be, say, a character generator ROM, which I've also done.

   0b01110
   0b10001
   0b11111
   0b10001
   0b10001

and you'll be doing a couple hundred of those at least. Wouldn't this be more appealing:

"
   .XXX.
   X...X
   XXXXX
   X...X
   X...X
"

? Then write a trivial parser, and use CTFE to generate the binary data for the table. Of course, such a parser could be used over and over for other projects.
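As a rough illustration of that idea (a sketch only; parseBitmap is a hypothetical name and this is not the parser from the disassembler linked earlier):

    // Sketch of a tiny CTFE-able parser: turns rows of '.' and 'X'
    // into one byte per row. Hypothetical helper, for illustration.
    ubyte[] parseBitmap(string art)
    {
        ubyte[] rows;
        ubyte current = 0;
        bool sawPixel = false;
        foreach (c; art)
        {
            if (c == 'X' || c == '.')
            {
                current = cast(ubyte)((current << 1) | (c == 'X'));
                sawPixel = true;
            }
            else if (c == '\n' && sawPixel)
            {
                rows ~= current;
                current = 0;
                sawPixel = false;
            }
        }
        if (sawPixel)
            rows ~= current;
        return rows;
    }

    // CTFE builds the table at compile time.
    enum ubyte[] glyphA = parseBitmap("
        .XXX.
        X...X
        XXXXX
        X...X
        X...X
    ");

    enum ubyte[] expectedA = [0b01110, 0b10001, 0b11111, 0b10001, 0b10001];
    static assert(glyphA == expectedA);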
September 10, 2022
On 9/10/2022 8:21 AM, 0xEAB wrote:
> But if we assume one’s working with binary flags or something similar (which probably was the reason to use binary literals in the first place), why would we write them in a different notation?

I use binary flags all the time:

    enum Flag {
      CARRY    =    1,
      SIGN     =    2,
      OVERFLOW =    4,
      PARITY   =    8,
      ZERO     = 0x10,
      ... }

but as mnemonics.


> To give an example:
> I can’t translate hex literals to their binary form in my head (in reasonable time).

I understand. I suppose it's like learning to touch-type. Takes some effort at first, but a lifetime of payoff. There's no way to work with binary data without getting comfortable with hex.

(In 8th grade I took a 2 week summer school course in touch typing. The typewriters were mechanical monsters; you really had to hammer the keys to get them to work, but that helped build the muscle memory. Having a lifetime of payoff from that was soooo worth the few hours.)

Other things worth taking the time to get comfortable with:

1. 2's complement arithmetic
2. how floating point works
3. pointers


> And I never even had to do so – except for an exam or two at school.
> Wanna know how I did it? – I wrote down the `0=0000`…`1=0001`…`F=1111` table…

That's how I learned the multiplication tables. I'd write out the matrix by hand.
September 10, 2022
On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright wrote:
> 
> and you'll be doing a couple hundred of those at least. Wouldn't this be more appealing:
>
> "
>    .XXX.
>    X...X
>    XXXXX
>    X...X
>    X...X
> "
>
> ? Then write a trivial parser, and use CTFE to generate the binary data for the table. Of course, such a parser could be used over and over for other projects.

First, the above strings as shown only work for uint8, while binary literals work for all integer sizes.

Second, why not provide the above "trivial parser" in the standard library (so nobody needs to reinvent the wheel), and let users use it for years to discover unseen problems and give feedback before deprecating / removing binary literals?


September 10, 2022

On 9/10/22 1:43 PM, Walter Bright wrote:
> On 9/10/2022 9:18 AM, Timon Gehr wrote:
>> Binary literals are, e.g., a GNU C extension, and they are in C++14, so clearly people see a use for them.
>
> I implemented them back in the 80s as an extension, and nobody commented on them. I never found a use. As for seeing a use, seeing a use for them and actually using them are different things.

I just used them a couple months ago:

https://github.com/schveiguy/poker/blob/master/source/poker/hand.d#L261

This was so much easier to comprehend than the equivalent hex.

-Steve

September 10, 2022
On Saturday, 10 September 2022 at 18:44:14 UTC, mw wrote:
> On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright
> 
> Second, why not provide the above "trivial parser" in the standard library (so nobody needs to reinvent the wheel), and let users use it for years to discover unseen problems and give feedback before deprecating / removing binary literals?

And in general, can we provide alternative solutions or migration tools before deprecating / removing language features that many users depend on?

E.g. Python provides the command line tool `2to3`.