August 21, 2004
I think part of the problem here is that the compiler will not point out the distinction for you. It's pretty easy for such things to get completely lost within a sea of C structs, so it makes porting rather more tricky than it might otherwise be ... for this reason alone, I'd tend to agree with the suggestion that bool should really be an alias for ubyte.

But, let's not start that all over again :-)


"Walter" <newshound@digitalmars.com> wrote in message news:cg6j0a$17p6$1@digitaldaemon.com...
>
> "Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cg681o$11c3$1@digitaldaemon.com...
> > I have no idea. But even if it does it "correctly", i.e. the packing of structures corresponds to what C knows a bool to be - and bear in mind that bool in C and bool in D are quite different beasts (esp. since in C its size is implementation dependent!!) - we're still stuck in a bad dream. Now we've got *different*-semantic flavours - one can be addressed, the other cannot - of the *same* type *within* a single D source file!!
> >
> > IMO, it's all just too damn hideous to contemplate. Imagine explaining all this crap to someone who'd chosen Java over C++ but heard that D was C++ without all the scary bits and so wanted to try it out. He'd think we were morons.
> >
> > bit is a wart, and should be abandoned entirely, in favour of bit fields for individual elements and library-based packed containers.
>
> I have no idea why this is a problem - all you have to do to match a C struct is to use the corresponding D type that matches the size of the corresponding C type. That means you'll have to look up what a 'bool' is for your C implementation, and then use the D type that matches. It is no different at all than looking up the size of 'unsigned' in your C implementation and then choosing ushort, uint, or ulong in D to match.


August 21, 2004
"Walter" <newshound@digitalmars.com> wrote in message news:cg6j0a$17p6$1@digitaldaemon.com...
>
> "Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cg681o$11c3$1@digitaldaemon.com...
> > I have no idea. But even if it does it "correctly", i.e. the packing of structures corresponds to what C knows a bool to be - and bear in mind that bool in C and bool in D are quite different beasts (esp. since in C its size is implementation dependent!!) - we're still stuck in a bad dream. Now we've got *different*-semantic flavours - one can be addressed, the other cannot - of the *same* type *within* a single D source file!!
> >
> > IMO, it's all just too damn hideous to contemplate. Imagine explaining all this crap to someone who'd chosen Java over C++ but heard that D was C++ without all the scary bits and so wanted to try it out. He'd think we were morons.
> >
> > bit is a wart, and should be abandoned entirely, in favour of bit fields for individual elements and library-based packed containers.
>
> I have no idea why this is a problem - all you have to do to match a C struct is to use the corresponding D type that matches the size of the corresponding C type. That means you'll have to look up what a 'bool' is for your C implementation, and then use the D type that matches. It is no different at all than looking up the size of 'unsigned' in your C implementation and then choosing ushort, uint, or ulong in D to match.

My example showed the evolution of a struct (with the consequent devolution of the hapless cross-linguist's appreciation of D), wherein the first version _would_ do as you say, but the simple act of turning the single scalar bool in the first version into an array of bool in the second version would result in a complete, yet subtle (one might almost say sly), change in structure.

I can't fathom why no-one else thinks this stinks worse than the nappy that I was presented with this am.

Maybe I'm a fool, or maybe I've misunderstood that the first 9 of the following types all go into a single byte. Please correct me if I'm wrong.

1.  bool     - in one byte. Correct?
2.  bool[1]  - in one byte. Correct?
3.  bool[2]  - in one byte. Correct?
4.  bool[3]  - in one byte. Correct?
5.  bool[4]  - in one byte. Correct?
6.  bool[5]  - in one byte. Correct?
7.  bool[6]  - in one byte. Correct?
8.  bool[7]  - in one byte. Correct?
9.  bool[8]  - in one byte. Correct?
10. bool[9]  - in two bytes. Correct?

If this is correct, then bit stinks, and should be, at minimum, banned from being used in structs and unions. QED.






August 21, 2004
In article <cg6jbf$186k$1@digitaldaemon.com>, antiAlias says...
>
>I think part of the problem here is that the compiler will not point out the distinction for you. It's pretty easy for such things to get completely lost within a sea of C structs, so it makes porting rather more tricky than it might otherwise be ... for this reason alone, I'd tend to agree with the suggestion that bool should really be an alias for ubyte.

Same here.  Or just toss bool altogether and let people define it as needed. But then I have a feeling we're in the minority on this issue :)


Sean


August 21, 2004
"antiAlias" <fu@bar.com> wrote in message news:cg6jbf$186k$1@digitaldaemon.com...
> I think part of the problem here is that the compiler will not point out the distinction for you. It's pretty easy for such things to get completely lost within a sea of C structs, so it makes porting rather more tricky than it might otherwise be ... for this reason alone, I'd tend to agree with the suggestion that bool should really be an alias for ubyte.
>
> But, let's not start that all over again :-)

Well, although we could go there, this actually has very little to do with most of the bool issues. (I've long since given up hope on that one, partly because I can see some of the various sides on it.)

What we're talking about now is pure unadulterated wart. Where's the debate?

Just for folks who (like me) can't be arsed to plough through the tall trees of ng debate, I'll rephrase here, pictorially:

Version 1

    struct X
    {
        bool    b;    // 1-bit, "occupying" 1-byte
        int        x;    // 32-bits
    };

  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
    b   ..............................
   ....................................
   ....................................
   ....................................
   x-----------------------
    -----------------------
    -----------------------
    ----------------------x

When expressed in C, this will look like:

    struct X
    {
        bool    b;    // 1-bit, "occupying" 1-byte
        int        x;    // 32-bits
    };

All fine so far. (As long as C doesn't use any values other than 0 and 1 for its "bool" type.)

Now for the fan-hitting shit.

Version 2

    struct X
    {
        bool    b[2];    // 2-bits, "occupying" 1-byte
        int        x;    // 32-bits
    };

  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
   b0 b1  .........................   // actually b[0] and b[1], but am short on space
   ....................................
   ....................................
   ....................................
   x-----------------------
    -----------------------
    -----------------------
    ----------------------x


When expressed in C, this will look like:

    struct X
    {
        ??????
        int    x;
    };

I've written ?????? since there's _no way_ to express bits as an array in C. Of course, we can use a bitfield, as in

    struct X
    {
        bool    b0  :     1;
        bool    b1  :     1;
        int        x;
    };

That form is correct and compatible, but now we're not dealing with an array. Imagine trying to write code to _use_ such a structure's bit-"array" in C. Nasty.

But while that's hopelessly inconvenient in practice, it's perhaps not quite overwhelming.

Where we go beyond tolerance is the fact that people will reasonably perceive the equivalent structure in C exactly as it is expressed in D, as in:

    struct X
    {
        bool    b[2];    // <Emergency. Emergency. There's an emergency here!>
        int        x;    // 32-bits
    };

All looks peachy, until we look how it's laid out in memory:

  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
   b[0] ............................
   b[1] ............................
   ....................................
   ....................................
   x-----------------------
    -----------------------
    -----------------------
    ----------------------x

Now, someone, anyone, please try and convince me that this is not a terrible wart. I'll eat my hat if you succeed.

Maybe we don't care about C compatibility. Maybe we're not interested in having D be _dramatically_ easier to interface to than .NET and Java. Maybe we're not interested in D righting the wrongs of C and C++. Maybe we're not concerned that, without *massive* corporate backing, D needs to minimise its exposure to being rightly pilloried by people who want to be able to find glaring faults with the briefest exposure. If so, let me know, 'cos I've been working hard (for two years) under a grand delusion and I clearly need to wake up.

Winthrop Overthetop


> "Walter" <newshound@digitalmars.com> wrote in message news:cg6j0a$17p6$1@digitaldaemon.com...
> >
> > "Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cg681o$11c3$1@digitaldaemon.com...
> > > I have no idea. But even if it does it "correctly", i.e. the packing of structures corresponds to what C knows a bool to be - and bear in mind that bool in C and bool in D are quite different beasts (esp. since in C its size is implementation dependent!!) - we're still stuck in a bad dream. Now we've got *different*-semantic flavours - one can be addressed, the other cannot - of the *same* type *within* a single D source file!!
> > >
> > > IMO, it's all just too damn hideous to contemplate. Imagine explaining all this crap to someone who'd chosen Java over C++ but heard that D was C++ without all the scary bits and so wanted to try it out. He'd think we were morons.
> > >
> > > bit is a wart, and should be abandoned entirely, in favour of bit fields for individual elements and library-based packed containers.
> >
> > I have no idea why this is a problem - all you have to do to match a C struct is to use the corresponding D type that matches the size of the corresponding C type. That means you'll have to look up what a 'bool' is for your C implementation, and then use the D type that matches. It is no different at all than looking up the size of 'unsigned' in your C implementation and then choosing ushort, uint, or ulong in D to match.
>


August 21, 2004
Matthew wrote:

> What we're talking about now is pure unadulterated wart. Where's the debate?
> 
> Just for folks who (like me) can't be arsed to plough through the tall trees of ng debate, I'll rephrase here,
> pictorially:
> 
> [Pictoral rephrasing]
> 
> Now, someone, anyone, please try and convince me that this is not a terrible wart. I'll eat my hat if you succeed.
> 
> Maybe we don't care about C compatibility. Maybe we're not interested in having D be _dramatically_ easier to interface
> to than .NET and Java. Maybe we're not interested in D righting the wrongs of C and C++. Maybe we're not concerned that,
> without *massive* corporate backing, D needs to minimise its exposure to being rightly pilloried by people who want to be
> able to find glaring faults with the briefest exposure. If so, let me know, 'cos I've been working hard (for two years)
> under a grand delusion and I clearly need to wake up.

Yes.

It's pretty clear now that bit causes all manner of oddness.  Further, a templatized struct would be very easy to write, and could offer nearly, if not completely identical syntax.

Overloaded operators and templates were missing from the original spec, as I understand it.  From this perspective, a builtin bitset type makes perfect sense.  Now, though, keeping it around doesn't seem to offer any compelling benefit at all.

Why /not/ dump it?

 -- andy

August 21, 2004
"Andy Friesen"  wrote
> It's pretty clear now that bit causes all manner of oddness.  Further, a templatized struct would be very easy to write, and could offer nearly, if not completely identical syntax.
>
> Overloaded operators and templates were missing from the original spec, as I understand it.  From this perspective, a builtin bitset type makes perfect sense.  Now, though, keeping it around doesn't seem to offer any compelling benefit at all.
>
> Why /not/ dump it?

Same could be said for AA, also. It's not unusual for large, long-term developments to do some occasional "house-keeping". IMO, that would be a good thing to do before general release.


August 21, 2004
"antiAlias" <fu@bar.com> wrote in message news:cg6nrq$1bbr$1@digitaldaemon.com...
> "Andy Friesen"  wrote
> > It's pretty clear now that bit causes all manner of oddness.  Further, a templatized struct would be very easy to write, and could offer nearly, if not completely identical syntax.
> >
> > Overloaded operators and templates were missing from the original spec, as I understand it.  From this perspective, a builtin bitset type makes perfect sense.  Now, though, keeping it around doesn't seem to offer any compelling benefit at all.
> >
> > Why /not/ dump it?
>
> Same could be said for AA, also. It's not unusual for large, long-term developments to do some occasional "house-keeping". IMO, that would be a good thing to do before general release.

Agreed. I believe there are some serious questions re AAs.

But for the moment I'm interested to learn the degree to which my views on bit diverge from the D quorum.


August 21, 2004
In article <cg6kbc$191v$1@digitaldaemon.com>, Sean Kelly says...

>>bool should really be an alias for ubyte.
>
>Same here.  Or just toss bool altogether and let people define it as needed. But then I have a feeling we're in the minority on this issue :)

>Sean

A lot of people, myself included, would prefer that bool were a type logically distinct from either bit or any kind of int, not implicitly intercastable with those types. We would prefer that true and false were values of type bool, not aliases for 1 and 0. So yes - the alias of bool to bit is annoying. But Walter wants it that way, and we've done /so/ much arguing about it in the past.

Arcane Jill


August 21, 2004
Matthew wrote:

> 
> "Walter" <newshound@digitalmars.com> wrote in message news:cg6j0a$17p6$1@digitaldaemon.com...
>>
>> "Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cg681o$11c3$1@digitaldaemon.com...
>> > I have no idea. But even if it does it "correctly", i.e. the packing of structures corresponds to what C knows a bool to be - and bear in mind that bool in C and bool in D are quite different beasts (esp. since in C its size is implementation dependent!!) - we're still stuck in a bad dream. Now we've got *different*-semantic flavours - one can be addressed, the other cannot - of the *same* type *within* a single D source file!!
>> >
>> > IMO, it's all just too damn hideous to contemplate. Imagine explaining all this crap to someone who'd chosen Java over C++ but heard that D was C++ without all the scary bits and so wanted to try it out. He'd think we were morons.
>> >
>> > bit is a wart, and should be abandoned entirely, in favour of bit fields for individual elements and library-based packed containers.
>>
>> I have no idea why this is a problem - all you have to do to match a C struct is to use the corresponding D type that matches the size of the corresponding C type. That means you'll have to look up what a 'bool' is for your C implementation, and then use the D type that matches. It is no different at all than looking up the size of 'unsigned' in your C implementation and then choosing ushort, uint, or ulong in D to match.
> 
> My example showed the evolution of a struct (with the consequent devolution of the hapless cross-linguist's appreciation of D), wherein the first version _would_ do as you say, but the simple act of turning the single scalar bool in the first version into an array of bool in the second version would result in a complete, yet subtle (one might almost say sly), change in structure.
> 
> I can't fathom why no-one else thinks this stinks worse than the nappy that I was presented with this am.
> 
> Maybe I'm a fool, or maybe I've misunderstood that the first 9 of the following types all go into a single byte. Please correct me if I'm wrong.
> 
> 1. bool        - in one byte. Correct?
> 2. bool[1]   - in one byte. Correct?
> 3. bool[2]   - in one byte. Correct?
> 4. bool[3]   - in one byte. Correct?
> 5. bool[4]   - in one byte. Correct?
> 6. bool[5]   - in one byte. Correct?
> 7. bool[6]   - in one byte. Correct?
> 8. bool[7]   - in one byte. Correct?
> 9. bool[8]   - in one byte. Correct?
> 10. bool[9]   - in two bytes. Correct?
> 
> If this is correct, then bit stinks, and should be, at minimum, banned from being used in structs and unions. QED.

I like the fact that bit is packed when in arrays. If I want unpacked bytes then I'll declare something as an array of bytes. I don't see why using bit arrays in structs should be any different than using them elsewhere. Is it the fact that D and C++ are different that is bothersome? I don't get the problem.

August 21, 2004
"antiAlias" <fu@bar.com> wrote in message news:cg6jbf$186k$1@digitaldaemon.com...
> I think part of the problem here is that the compiler will not point out the distinction for you. It's pretty easy for such things to get completely lost within a sea of C structs, so it makes porting rather more tricky than it might otherwise be ... for this reason alone, I'd tend to agree with the suggestion that bool should really be an alias for ubyte.
>
> But, let's not start that all over again :-)

Please nooooo <g>

I'll admit, though, that the implementation of bit has turned out to be more problematic than I'd originally anticipated, what with things like pointers to bits inside bit arrays, etc.