A signed 1-bit type?
September 19

I don’t know if what I’m going to say is trivial or interesting, but I had this thought only recently, thinking about the design of an integer type system.

In Visual Basic (and probably more Basic dialects), booleans convert to integers as False → 0 and True → −1. You read that correctly, it’s minus one. A weird choice, isn’t it? But could it come from a basic principle?

Yes. If we think of booleans as 1-bit numeric types, it boils down to the question of mere signedness. If we assume 2’s complement, a signed 1-bit type has the values −1 and 0; that’s how 2’s complement works. Of course an unsigned 1-bit type has the values 0 and 1.
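
To make that concrete, here is a small D sketch (the helper names are invented for illustration; D has no built-in 1-bit integer types):

```d
// Hypothetical helpers modeling a single bit under the two signedness
// readings described above.
int signed1(bool bit)   { return bit ? -1 : 0; } // the only bit is the sign bit
int unsigned1(bool bit) { return bit ?  1 : 0; }

unittest
{
    assert(signed1(true)   == -1); // Basic's True
    assert(signed1(false)  ==  0); // Basic's False
    assert(unsigned1(true) ==  1); // D's cast(int) true
}
```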

I cannot answer you why you’d want a signed 1-bit type, though.

D’s booleans, however, are unsigned integer types.

September 20
On 9/19/2023 5:37 AM, Quirin Schroll wrote:
> I cannot answer you why you’d want a signed 1-bit type, though.

Basic doing that is an aberration.

> D’s booleans, however, are unsigned integer types.

Yup. Changing that would break an unknown amount of code.
September 21
On Thursday, 21 September 2023 at 06:17:16 UTC, Walter Bright wrote:
>> D’s booleans, however, are unsigned integer types.
>
> Yup. Changing that would break an unknown amount of code.

Correctness really should come first; it won't be nearly as big a breaking change as safe by default, and we all know how good an idea that is.
September 21

On Thursday, 21 September 2023 at 06:17:16 UTC, Walter Bright wrote:
> On 9/19/2023 5:37 AM, Quirin Schroll wrote:
>> I cannot answer you why you’d want a signed 1-bit type, though.
>
> Basic doing that is an aberration.
>
>> D’s booleans, however, are unsigned integer types.
>
> Yup. Changing that would break an unknown amount of code.

I’m not saying it should be changed, but maybe a signed 1-bit type could be added, as the signed equivalent of our unsigned 1-bit type.

No, not really; but I can’t help but laugh at the idea of a signed 1-bit type and that Basic actually went with it. And I wanted to share this with all of you.

September 21

On Tuesday, 19 September 2023 at 12:37:59 UTC, Quirin Schroll wrote:
> In Visual Basic (and probably more Basic dialects), booleans convert to integers as False → 0 and True → −1. You read that correctly, it’s minus one. A weird choice, isn’t it? But could it come from a basic principle?

I did some searching, and this convention goes back at least as far as 8-bit Microsoft Basic [1], though not all the way to the original 1964 version of Dartmouth Basic [2].

I think a more likely explanation is that -1 was chosen because its binary representation is the bitwise inverse of 0. This allows the language to use the same operator for both bitwise and logical "not". Given how scarce memory was at the time, space-saving tricks like this were probably hard to pass up.
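
The identity is easy to check in D, too; it’s a property of 2’s complement generally, not of Basic:

```d
unittest
{
    assert(~0 == -1);   // bitwise NOT of "false" (0) yields "true" (-1)
    assert(~(-1) == 0); // and back again, so one NOT serves both roles
    assert(~1 == -2);   // whereas with True == 1, ~true would not be false
}
```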

[1] https://archive.org/details/c64-programmer-ref
[2] https://www.dartmouth.edu/basicfifty/basicmanual_1964.pdf

September 21
On Thursday, September 21, 2023 12:55:42 AM MDT monkyyy via Digitalmars-d wrote:
> On Thursday, 21 September 2023 at 06:17:16 UTC, Walter Bright wrote:
> >> D’s booleans, however, are unsigned integer types.
> >
> > Yup. Changing that would break an unknown amount of code.
>
> Correctness really should come first; it won't be nearly as big a breaking change as safe by default, and we all know how good an idea that is.

And what do you expect a 1-bit boolean type would buy us? For most code, the encoding of bool is irrelevant.

IMHO, if there's a problem, it's that bool is treated as an integer type at all, meaning that you can pass a bool to a function that takes an integer type without casting it and that 0 and 1 can be passed to a function that takes bool without casting them (which can become particularly annoying with Value-Range Propagation, because then something like foo(1) will take the bool overload of foo rather than the int one).
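
For illustration, a minimal sketch of that overload surprise (this variant uses a long overload, where the bool overload wins as the more specialized match):

```d
import std.stdio;

void foo(bool b) { writeln("bool overload"); }
void foo(long l) { writeln("long overload"); }

void main()
{
    foo(1); // prints "bool overload": VRP lets the literal 1 convert to bool
    foo(2); // prints "long overload": 2 is out of range for bool
}
```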

However, we've argued about this here in the past, and Walter is basically so used to treating bool as an integer type (which can be useful for code doing bit operations and math) that I'm not sure that he even really understood why many of us objected to the idea of treating bool as an integer type (and even if he fully understood, he did not agree). I guess that it comes from him having a background in low-level C where doing bitwise stuff with bools is normal, whereas someone who's dealt more with languages that treat bool as a non-integer type is much more likely to be very unhappy with the idea of bool being treated like an integer type. VRP does make the problem worse than it would be in other languages though, since it results in more implicit type conversions, and that's what caused a lot of the previous discussion on the matter IIRC.

But either way, the underlying implementation of bool doesn't really affect either approach. Whether it's essentially a byte where 0 is false and all other values are true, whether it's a bit where 0 is false and 1 is true, or whether it's something else entirely with an opaque implementation doesn't matter at all if bool is not treated as an integer type. And if it is treated as an integer type, then whether it's a bit or a byte really doesn't matter much (it'll usually be promoted to int for math anyway), and the current implementation follows what C has, which is good for compatibility.

If we did change how bool worked, it would probably be to simply make it not implicitly convert to and from integer types (as has been discussed in the past), but there wouldn't be any need to change how it's actually implemented for that to work. It would just be making it so that without casting, bool would not convert to and from integer types, which would fix certain classes of bugs but make some code doing bitwise operations more tedious (and potentially more bug-prone). Switching to a bit implementation wouldn't help any of that. But regardless, at this point, I think that it's pretty clear that D's bool is not going to change, because Walter is very happy with how it currently works, and it's highly unlikely that someone is going to come up with an argument good enough to get him to change it and break any existing code that actually wants to treat bool as an integer type. Much as I'd personally like to see bool changed with regards to implicit conversions, I think that that ship has long since sailed.
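
Concretely, these are the conversions that would then require a cast; all of them are accepted by today's D:

```d
unittest
{
    bool b = true;
    int  x = b;       // bool implicitly converts to int (true -> 1)
    bool c = 1;       // VRP: the literals 0 and 1 implicitly convert to bool
    int  sum = b + c; // bools promote to int for arithmetic
    assert(sum == 2);
}
```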

- Jonathan M Davis




September 22
On Friday, 22 September 2023 at 02:54:44 UTC, Jonathan M Davis wrote:
> On Thursday, September 21, 2023 12:55:42 AM MDT monkyyy via Digitalmars-d wrote:
>> On Thursday, 21 September 2023 at 06:17:16 UTC, Walter Bright wrote:
>> >> D’s booleans, however, are unsigned integer types.
>> >
>> > Yup. Changing that would break an unknown amount of code.
>>
>> Correctness really should come first; it won't be nearly as big a breaking change as safe by default, and we all know how good an idea that is.
>
> And what do you expect a 1-bit boolean type would buy us?

Nothing, it was a joke.

Hi, its-a me monkyyy, if you are unfamiliar with my work here is a primer https://run.dlang.io/gist/c3ff7c75fff9064072f99b6150445564?args=-unittest%20-main%20-mixin%3Dmix

My opinions on how to improve type safety are best summarized by this video: https://www.youtube.com/watch?v=HX-Cmi1MkPc


September 22

On Thursday, 21 September 2023 at 18:23:50 UTC, Paul Backus wrote:
> On Tuesday, 19 September 2023 at 12:37:59 UTC, Quirin Schroll wrote:
>> [...]
>
> I did some searching, and this convention goes back at least as far as 8-bit Microsoft Basic [1], though not all the way to the original 1964 version of Dartmouth Basic [2].
>
> I think a more likely explanation is that -1 was chosen because its binary representation is the bitwise inverse of 0. This allows the language to use the same operator for both bitwise and logical "not". Given how scarce memory was at the time, space-saving tricks like this were probably hard to pass up.
>
> [1] https://archive.org/details/c64-programmer-ref
> [2] https://www.dartmouth.edu/basicfifty/basicmanual_1964.pdf

I have that book [1] in the attic 😍

September 22

On Tuesday, 19 September 2023 at 12:37:59 UTC, Quirin Schroll wrote:
> I don’t know if what I’m going to say is trivial or interesting, but I had this thought only recently, thinking about the design of an integer type system.
>
> In Visual Basic (and probably more Basic dialects), booleans convert to integers as False → 0 and True → −1. You read that correctly, it’s minus one. A weird choice, isn’t it? But could it come from a basic principle?
>
> Yes. If we think of booleans as 1-bit numeric types, it boils down to the question of mere signedness. If we assume 2’s complement, a signed 1-bit type has the values −1 and 0; that’s how 2’s complement works. Of course an unsigned 1-bit type has the values 0 and 1.
>
> I cannot answer you why you’d want a signed 1-bit type, though.
>
> D’s booleans, however, are unsigned integer types.


Perhaps arbitrary bit-width integers could be a solution (see the sketch after the list):

u1
u2
u5
u18

i1
i2
i5
i18

etc.
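
A rough sketch of how such types could be emulated in today's D, assuming a library template (the struct name, the aliases, and the representation are invented here):

```d
// Store an N-bit value sign-extended (or zero-extended) into a long.
struct Int(uint bits, bool signed)
{
    static assert(bits >= 1 && bits <= 63);
    long value;

    this(long v)
    {
        static if (signed)
            value = (v << (64 - bits)) >> (64 - bits); // sign-extend
        else
            value = v & ((1UL << bits) - 1);           // zero-extend
    }
}

alias i1 = Int!(1, true);  // the signed 1-bit type from this thread
alias u1 = Int!(1, false);

unittest
{
    assert(i1(1).value == -1); // Basic's True
    assert(u1(1).value ==  1); // D's true
}
```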