January 22, 2012
On 01/22/2012 10:09 AM, Walter Bright wrote:
> On 1/22/2012 4:40 AM, Marco Leise wrote:
>> Or is
>> this like spaces vs. tabs? 'Cause I'm also a tab user.
>
> I struggled with that for years. Not with my own code, the tabs worked
> fine. The trouble was when collaborating with other people, who insisted
> on using tab stop settings that were the evil spawn of satan. Hence,
> collaborated code was always a mess.
>
> Like newklear combat toe to toe with the roosskies, the only way to win
> is to not play.

The only way to win the whitespace war is to change the rules:

May I propose the following modifications to the D lexer:

'''
White space may consist of:
- A comment between any two tokens.
- A single space between tokens that, if adjoined, would form a single token.

All other white space (including \n \r \t \v, etc.) is forbidden and a lexical error.
'''

With these additions, all valid D code will be so hard to read that nobody will ever attempt to read it without first running a re-formatter over it. Once that is standard practice, everyone will see the code in their own preferred style.
January 22, 2012
On 01/22/2012 01:42 AM, Mail Mantis wrote:
> 2012/1/22 Walter Bright<newshound2@digitalmars.com>:
>> http://news.ycombinator.com/item?id=3495283
>>
>> and getting rid of unsigned types is not the solution to signed/unsigned
>> issues.
>
> Would it be sane to add integer overflow/carry runtime checks in
> -debug builds? This could probably solve such issues, but we'd need
> some means to avoid these checks when necessary.

http://embed.cs.utah.edu/ioc/
January 22, 2012
On Sunday, 22 January 2012 at 20:01:52 UTC, bcs wrote:
> On 01/22/2012 01:31 AM, Marco Leise wrote:
>> Am 22.01.2012, 08:23 Uhr, schrieb bcs <bcs@example.com>:
>>
>>> On 01/21/2012 10:05 PM, Walter Bright wrote:
>>>> http://news.ycombinator.com/item?id=3495283
>>>>
>>>> and getting rid of unsigned types is not the solution to signed/unsigned
>>>> issues.
>>>
>>> A quote from that link:
>>>
>>> "There are many use cases for data types that behave like pure bit
>>> strings with no concept of sign."
>>>
>>> Why not recast the concept of unsigned integers as "bit vectors (that
>>> happen to implement arithmetic)"? I've seen several sources claim that
>>> uint (and friends) should never be used unless you are using it for
>>> low level bit tricks and the like.
>>
>> Those are heretics.
>>
>>> Rename them bits{8,16,32,64} and make the current names aliases.
>>
>> So everyone uses int, and we get messages like: "This program currently
>> uses -1404024 bytes of RAM". I have strong feelings against using signed
>> types for variables that are ever going to only hold positive numbers,
>> especially when it comes to sizes and lengths.
>
> OK, I'll grant that there are an (*extremely* limited) number of cases where you actually need the full range of an unsigned integer type. I'm not suggesting that the actual semantics of the type be modified, and it would still be usable for exactly those sorts of cases. My suggestion is that the naming be modified to avoid suggesting that the *primary* use for the type is for non-negative numbers.
>
> To support that position, if you really expect to encounter and thus need to correctly handle numbers between 2^31 and 2^32 (or 63/64, etc.), then you already need to be doing careful analysis to avoid bugs from overflow. At that point, you are already considering low-level details, and using a "bit vector" type as a number is not much more complicated. The added bonus is that the mismatch between the name and what it's used for is a big red flag saying "be careful, or this is likely to cause bugs".
>
> Getting people to think of it that way is likely to prevent more bugs than it causes.

I think we're looking in the wrong corner for the culprit. While the unsigned types could have had better names (machine-related: byte, word, etc.), IMO the real issue here is *not* with the types themselves but rather with the horrid implicit conversion rules inherited from C. Mixed signed/unsigned expressions really should be compile errors, resolved explicitly by the programmer: foo() + bar() could be any of int/uint/long depending on what the programmer wants to achieve.
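A minimal sketch of the hazard, with foo and bar as made-up stand-ins (not functions from anywhere in this thread):

```d
import std.stdio;

int  foo() { return -1; }
uint bar() { return 2; }

void main()
{
    // C-style promotion: int + uint is computed as uint.
    auto x = foo() + bar();   // typeof(x) is uint
    writeln(x);               // 1 -- happens to come out right here...

    // ...but the same rule makes -1 compare greater than 2:
    writeln(foo() > bar());   // true: -1 is reinterpreted as 4294967295
}
```

Under the rules proposed above, both expressions would be rejected until the programmer casts one side explicitly.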

my 2 cents.

January 22, 2012
On Sunday, 22 January 2012 at 09:31:15 UTC, Marco Leise wrote:
> So everyone uses int, and we get messages like: "This program currently uses -1404024 bytes of RAM". I have strong feelings against using signed types for variables that are ever going to only hold positive numbers, especially when it comes to sizes and lengths.

If you ignore type limits, you're asking for trouble. Imagine you have 2 gigs of RAM and a 3 gig pagefile on a 32-bit OS. What is the total size of available memory?
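Neither representation survives that sum at 32 bits; a quick D sketch with illustrative round numbers:

```d
import std.stdio;

void main()
{
    uint ram      = 2_000_000_000; // ~2 GB, fits in uint
    uint pagefile = 3_000_000_000; // ~3 GB, fits in uint but not in int

    // 5_000_000_000 exceeds uint.max, so the sum wraps around:
    writeln(ram + pagefile);       // 705032704

    // And reinterpreted as a signed byte count, 3 GB is already negative:
    writeln(cast(int) pagefile);   // -1294967296
}
```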
January 22, 2012
"Kagamin" <spam@here.lot> wrote in message news:bhhmhjvgsmlxjvsuwzsb@dfeed.kimsufi.thecybershadow.net...
> On Sunday, 22 January 2012 at 09:31:15 UTC, Marco Leise wrote:
>> So everyone uses int, and we get messages like: "This program currently uses -1404024 bytes of RAM". I have strong feelings against using signed types for variables that are ever going to only hold positive numbers, especially when it comes to sizes and lengths.
>
> If you ignore type limits, you're asking for trouble. Imagine you have 2 gigs of RAM and a 3 gig pagefile on a 32-bit OS. What is the total size of available memory?

One negative gig, obviously. (I think that means it's positronic...)


January 22, 2012
"Walter Bright" <newshound2@digitalmars.com> wrote in message news:jfhj4v$l2b$1@digitalmars.com...
> On 1/22/2012 9:44 AM, equinox@atw.hu wrote:
>> I noticed I cannot use typedef any longer in D2.
>> Why did it go?
>
> typedef turned out to have many difficult issues about when it was a distinct type and when it wasn't.

Fortunately, you should still be able to get the same effect as typedef with a struct and alias this.
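For reference, a minimal sketch of that idiom (Handle and takesInt are made-up names for illustration):

```d
struct Handle
{
    int value;
    alias value this;    // Handle implicitly converts *to* int
}

void takesInt(int) {}

void main()
{
    Handle h = Handle(42);
    takesInt(h);         // fine: alias this supplies the int
    // Handle h2 = 5;    // error: no implicit int -> Handle conversion --
                         // the distinct-type behavior typedef used to give
}
```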


January 22, 2012
Am 22.01.2012, 21:44 Uhr, schrieb Kagamin <spam@here.lot>:

> On Sunday, 22 January 2012 at 09:31:15 UTC, Marco Leise wrote:
>> So everyone uses int, and we get messages like: "This program currently uses -1404024 bytes of RAM". I have strong feelings against using signed types for variables that are ever going to only hold positive numbers, especially when it comes to sizes and lengths.
>
> If you ignore type limits, you're asking for trouble. Imagine you have 2 gigs of RAM and a 3 gig pagefile on a 32-bit OS. What is the total size of available memory?

I can use up to 4 GB of that in the address space of my application - the value range of a uint. QED.
January 22, 2012
On 2012-01-22 14:36, foobar wrote:
> On Sunday, 22 January 2012 at 20:01:52 UTC, bcs wrote:
>> On 01/22/2012 01:31 AM, Marco Leise wrote:
>>> Am 22.01.2012, 08:23 Uhr, schrieb bcs <bcs@example.com>:
>>>
>>>> On 01/21/2012 10:05 PM, Walter Bright wrote:
>>>>> http://news.ycombinator.com/item?id=3495283
>>>>>
>>>>> and getting rid of unsigned types is not the solution to
>>>>> signed/unsigned issues.
>>>>
>>>> A quote from that link:
>>>>
>>>> "There are many use cases for data types that behave like pure
>>>> bit strings with no concept of sign."
>>>>
>>>> Why not recast the concept of unsigned integers as "bit vectors
>>>> (that happen to implement arithmetic)"? I've seen several
>>>> sources claim that uint (and friends) should never be used
>>>> unless you are using it for low level bit tricks and the like.
>>>
>>> Those are heretics.
>>>
>>>> Rename them bits{8,16,32,64} and make the current names
>>>> aliases.
>>>
>>> So everyone uses int, and we get messages like: "This program
>>> currently uses -1404024 bytes of RAM". I have strong feelings
>>> against using signed types for variables that are ever going to
>>> only hold positive numbers, especially when it comes to sizes and
>>> lengths.
>>
>> OK, I'll grant that there are an (*extremely* limited) number of
>> cases where you actually need the full range of an unsigned
>> integer type. I'm not suggesting that the actual semantics of the
>> type be modified, and it would still be usable for exactly those
>> sorts of cases. My suggestion is that the naming be modified to
>> avoid suggesting that the *primary* use for the type is for
>> non-negative numbers.
>>
>> To support that position, if you really expect to encounter and
>> thus need to correctly handle numbers between 2^31 and 2^32 (or
>> 63/64, etc.), then you already need to be doing careful analysis
>> to avoid bugs from overflow. At that point, you are already
>> considering low-level details, and using a "bit vector" type as a
>> number is not much more complicated. The added bonus is that the
>> mismatch between the name and what it's used for is a big red flag
>> saying "be careful, or this is likely to cause bugs".
>>
>> Getting people to think of it that way is likely to prevent more
>> bugs than it causes.
>
> I think we're looking in the wrong corner for the culprit. While
> the unsigned types could have had better names (machine-related:
> byte, word, etc.), IMO the real issue here is *not* with the types
> themselves but rather with the horrid implicit conversion rules
> inherited from C. Mixed signed/unsigned expressions really should
> be compile errors, resolved explicitly by the programmer: foo() +
> bar() could be any of int/uint/long depending on what the
> programmer wants to achieve.
>
> my 2 cents.
>

+1
January 23, 2012
On Sunday, 22 January 2012 at 22:17:10 UTC, Marco Leise wrote:
>> If you ignore type limits, you're asking for trouble. Imagine you have 2 gigs of ram and 3 gig pagefile on 32-bit OS. What is the total size of available memory?
>
> I can use up to 4GB of that in the address space of my application - the value range of a uint, qed

With PAE it's possible to access more than that. AFAIK some web servers do it.
January 23, 2012
Wouldn't it be easier to make typedef a Phobos solution and end this
debate once and for all?
Sure, the definition won't look as pretty as typedef did, but still...
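A sketch of what such a library typedef might look like - the Typedef name and the string tag are assumptions, not an existing Phobos API; the tag is needed so that two wrappers of the same base type stay distinct from each other:

```d
struct Typedef(T, string tag = "")
{
    T payload;
    alias payload this;   // converts to the base type, but not back
}

alias Typedef!(int, "Meters")  Meters;
alias Typedef!(int, "Seconds") Seconds;

void main()
{
    Meters m = Meters(5);
    int i = m;            // ok: implicit conversion to the base type
    // Seconds s = m;     // error: Meters and Seconds are distinct types
}
```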

On Mon, Jan 23, 2012 at 1:22 AM, Nick Sabalausky <a@a.a> wrote:
> "Walter Bright" <newshound2@digitalmars.com> wrote in message news:jfhj4v$l2b$1@digitalmars.com...
>> On 1/22/2012 9:44 AM, equinox@atw.hu wrote:
>>> I noticed I cannot use typedef any longer in D2.
>>> Why did it go?
>>
>> typedef turned out to have many difficult issues about when it was a distinct type and when it wasn't.
>
> Fortunately, you should still be able to get the same effect as typedef with a struct and alias this.
>
>



-- 
Bye,
Gor Gyolchanyan.