August 01, 2012
On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote:
> On 31-Jul-12 22:21, Era Scarecrow wrote:
>> Well curiously it was easier to fix than I thought (a line for a static if, and a modification of the masking)... Was there any other bugs that come to mind? Anything of consequence?
>
> Great to see things moving. Could you please do a separate pull for bitfields? It should get merged more easily, and it seems like a small but important bugfix.

 Guess this means I'll be working on BitArrays a bit later and work on the bitfields code first instead. What fun... :) I thought I already had the bitfields changes separated out, but both sets of changes got thrown in together. Once I sort that out I'll separate them and finish work on the bitfields.
August 01, 2012
On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote:

> Great to see things moving. Could you please do a separate pull for bitfields? It should get merged more easily, and it seems like a small but important bugfix.

https://github.com/rtcvb32/phobos/commit/620ba57cc0a860245a2bf03f7b7f5d6a1bb58312

 I've pushed the next update to my bitfields branch. All unittests pass for me.
August 02, 2012
On Wednesday, 1 August 2012 at 07:24:09 UTC, Era Scarecrow wrote:
> On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote:
>
>> Great to see things moving. Could you please do a separate pull for bitfields? It should get merged more easily, and it seems like a small but important bugfix.
>
> https://github.com/rtcvb32/phobos/commit/620ba57cc0a860245a2bf03f7b7f5d6a1bb58312
>
>  I've pushed the next update to my bitfields branch. All unittests pass for me.

I had an (implementation) question for you:
Does the implementation actually require knowing what the size of the padding is?

eg:
struct A
{
    int a;
    mixin(bitfields!(
        uint,  "x",    2,
        int,   "y",    3,
        ulong,  "",    3 // <- This line right there
    ));
}

Is that highlighted line really mandatory?
I'm fine with having it optional, in case I'd want to have, say, a 59 bit padding, but can't the implementation figure it out on its own?
August 02, 2012
On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
> I had an (implementation) question for you: Does the implementation actually require knowing what the size of the padding is?
>
> eg:
> struct A
> {
>     int a;
>     mixin(bitfields!(
>         uint,  "x",    2,
>         int,   "y",    3,
>         ulong,  "",    3 // <- This line right there
>     ));
> }
>
> Is that highlighted line really mandatory?
> I'm fine with having it optional, in case I'd want to have, say, a 59 bit padding, but can't the implementation figure it out on its own?

 The original code has it set that way, but why? Perhaps so you are aware of exactly where all the bits are assigned (even if you aren't using them); it would be horrible if you accidentally used 33 bits and the field silently extended to 64 without telling you (wouldn't it?).

 However, having it fill in the size and ignore the last x bits wouldn't be too hard to do; I've been wondering whether I should remove it.
August 02, 2012
On Thursday, 2 August 2012 at 09:14:15 UTC, Era Scarecrow wrote:
> On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
>> I had an (implementation) question for you: Does the implementation actually require knowing what the size of the padding is?
>>
>> eg:
>> struct A
>> {
>>    int a;
>>    mixin(bitfields!(
>>        uint,  "x",    2,
>>        int,   "y",    3,
>>        ulong,  "",    3 // <- This line right there
>>    ));
>> }
>>
>> Is that highlighted line really mandatory?
>> I'm fine with having it optional, in case I'd want to have, say, a 59 bit padding, but can't the implementation figure it out on its own?
>
>  The original code has it set that way, but why? Perhaps so you are aware of exactly where all the bits are assigned (even if you aren't using them); it would be horrible if you accidentally used 33 bits and the field silently extended to 64 without telling you (wouldn't it?).
>
>  However, having it fill in the size and ignore the last x bits wouldn't be too hard to do; I've been wondering whether I should remove it.

Well, I was just trying to figure out the rationale: The most obvious one for me being "it is much easier on the implementation".

One of the *big* reasons I'm against having a hand-chosen padding is that the implementation *should* be able to figure out the most efficient padding for the current machine (it could be 32 on some, 64 on others).

That said, something that could fix the above "problem" could be:
*Bitfields are automatically padded if the final field is not a "padding field".
**Padding size is implementation chosen.
*If the final field is a "padding field", then the total size must be 8/16/32/64.

EG:
//Case 1
bitfields!(
    bool, "x",    1,
    uint,  "",    3, //Interfield padding
    bool, "y",    1
)
//Fine, implementation chosen bitfield size

//Case 2
bitfields!(
    bool, "x",    1,
    uint,  "",    3, //Interfield padding
    bool, "y",    1,
    ulong, "",   59, //Pad to 64
)
//Fine, imposed 64 bit

//Case 3
bitfields!(
    bool, "x",    1,
    uint,  "",    3, //Interfield padding
    bool, "y",    1,
    ulong, "",   32, //Pad to 37
)
//ERROR: Padding requests the bitfield to be 37 bits long

But I'd say that's another development anyways, if we ever decide to go this way.
August 02, 2012
On Thursday, 2 August 2012 at 09:26:04 UTC, monarch_dodra wrote:
> Well, I was just trying to figure out the rationale: The most obvious one for me being "it is much easier on the implementation".

 Since the template is recursive and, after counting the bits, would know at the end how much it needed, that doesn't seem right; far more likely it's simply meant to be explicit.

> One of the *big* reasons I'm against having a hand chosen padding, is that the implementation *should* be able to find out what the most efficient padding is on the current machine (could be 32 on some, could be 64 on some)

 If you're using bitfields, then you're going for space and want things as small as reasonably possible. That's especially important for packets of information like compression headers, and for staying compatible with C/C++'s bit packing.

> That said, something that could fix the above "problem" could be:
> *Bitfields are automatically padded if the final field is not a "padding field".

 Workable

> **Padding size is implementation chosen.

 I assume you mean the word size (size_t), meaning always 32/64 bits. In that case many of the applicable use cases would go away, making it useless for them.

> *If the final field is a "padding field", then the total size must be 8/16/32/64.
>
> EG:
> //Case 1
> bitfields!(
>     bool, "x",    1,
>     uint,  "",    3, //Interfield padding
>     bool, "y",    1
> )
> //Fine, implementation chosen bitfield size

  It would go with a ubyte, as is likely obvious, although any of the types would work.

> //Case 2
> bitfields!(
>     bool, "x",    1,
>     uint,  "",    3, //Interfield padding
>     bool, "y",    1,
>     ulong, "",   59, //Pad to 64
> )
> //Fine, imposed 64 bit
>
> //Case 3
> bitfields!(
>     bool, "x",    1,
>     uint,  "",    3, //Interfield padding
>     bool, "y",    1,
>     ulong, "",   32, //Pad to 37
> )
> //ERROR: Padding requests the bitfield to be 37 bits long
>
> But I'd say that's another development anyways, if we ever decide to go this way.

 In the end, either be explicit about the size, or let it round up to the nearest size it can accept. If you let it decide, then the padding would be treated as though it's a variable (but isn't), so...

bitfields!(
    bool, "x",    1,
    uint,  "",    3, //Interfield padding
    bool, "y",    1,
    ulong, "",   5,
)

 the total size is 10 bits, and the padding forces it up to 16 bits afterwards. In cases like this it could easily be abused or left in a confusing state; so likely the padding would have to be either missing (and assumed) or explicit in size. So the previous example would err, while:

bitfields!(
    bool, "x",    1,
    uint,  "",    3, //Interfield padding
    bool, "y",    1,
//  void, "", 4      //implied when missing; if padded and the final size is not a power of two, it would statically assert.
)
August 02, 2012
On 8/2/12 5:14 AM, Era Scarecrow wrote:
> On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
>> I had an (implementation) question for you: Does the implementation
>> actually require knowing what the size of the padding is?
>>
>> eg:
>> struct A
>> {
>>     int a;
>>     mixin(bitfields!(
>>         uint, "x", 2,
>>         int, "y", 3,
>>         ulong, "", 3 // <- This line right there
>>     ));
>> }
>>
>> Is that highlighted line really mandatory?
>> I'm fine with having it optional, in case I'd want to have, say, a 59
>> bit padding, but can't the implementation figure it out on its own?
>
> The original code has it set that way, but why? Perhaps so you are aware
> of exactly where all the bits are assigned (even if you aren't using
> them); it would be horrible if you accidentally used 33 bits and the
> field silently extended to 64 without telling you (wouldn't it?).

Yes, that's the intent. The user must define exactly how an entire ubyte/ushort/uint/ulong is filled; otherwise ambiguities and bugs are soon to arise.

> However, having it fill in the size and ignore the last x bits wouldn't
> be too hard to do; I've been wondering whether I should remove it.

Please don't. The effort on the programmer side is virtually nil, and it keeps things in check. In no case would the use of bitfields() be so intensive that the bloat of one extra line gains any significance.


Andrei
August 02, 2012
On 8/2/12 5:26 AM, monarch_dodra wrote:
> One of the *big* reasons I'm against having a hand chosen padding, is
> that the implementation *should* be able to find out what the most
> efficient padding is on the current machine (could be 32 on some, could
> be 64 on some)

In my neck of the woods they call that "non-portability".

If your code is dependent on the machine's characteristics, you use version() and whatnot.


Andrei
August 02, 2012
On Thursday, 2 August 2012 at 12:35:20 UTC, Andrei Alexandrescu wrote:

> Please don't. The effort on the programmer side is virtually nil, and keeps things in check. In no case would the use of bitfields() be so intensive that the bloat of one line gets any significance.

 If you're using a template or something to fill in the sizes, then having to calculate the remainder could be an annoyance; but those cases would be few in number.

 I'll agree, and it's best leaving it as it is.

 BTW, wasn't there a new/reserved cent/ucent type (128 bits)?
August 02, 2012
On Thursday, 2 August 2012 at 12:38:10 UTC, Andrei Alexandrescu wrote:
> On 8/2/12 5:26 AM, monarch_dodra wrote:
>> One of the *big* reasons I'm against having a hand chosen padding, is
>> that the implementation *should* be able to find out what the most
>> efficient padding is on the current machine (could be 32 on some, could
>> be 64 on some)
>
> In my neck of the woods they call that "non-portability".
>
> If your code is dependent on the machine's characteristics you use version() and whatnot.

Well, isn't that the entire point: Making your code NOT dependent on the machine's characteristics?

By forcing the developer to choose the bitfield size (32 or 64), you ARE forcing him to make a choice dependent on the machine's characteristics. The developer just knows how he wants to pack his bits, not how he wants to pad them. Why should the developer be burdened with figuring out what the optimal size of his bitfield should be?

By leaving the field blank, *that* guarantees portability.