November 21, 2014
> D already has them:
>
> https://github.com/D-Programming-Language/druntime/blob/master/src/core/checkedint.d

Druntime's checkedint.d should be modified:

uint subu(uint x, uint y, ref bool overflow)
{
    if (x < y)
        return y - x;
    else
        return x - y;
}

ulong subu(ulong x, ulong y, ref bool overflow)
{
    if (x < y)
        return y - x;
    else
        return x - y;
}
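
For comparison, a minimal sketch of how the existing core.checkedint.subu is used today (the variable names and the main wrapper are just for illustration); the current function returns the wrapped result and raises the flag, rather than returning an absolute difference as proposed above:

import core.checkedint : subu;
import std.stdio;

void main()
{
    bool overflow = false;
    uint a = 3, b = 5;
    // today's subu returns the wrapped value a - b and sets the flag
    // when the subtraction goes below zero; the caller must test it
    uint r = subu(a, b, overflow);
    writeln(r, " overflow=", overflow); // 4294967294 overflow=true
}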


Frank
November 21, 2014
On Friday, 21 November 2014 at 09:43:04 UTC, Kagamin wrote:

> C# doesn't encourage usage of unsigned types and warns that they are not CLS-compliant. You're going against established practices there. And signed types for numbers work wonders in C# without any notable problems and make reasoning about code easier, as you don't have to manually check for unsigned conversion bugs everywhere.
>

I don't want to be CLS compliant! I make far too heavy use of unsafe code, stackalloc, and interop to worry about CLS compliance. Actually, one of the major reasons I am looking at D for production code is so that I don't have to mix and match assembly and C/C++ with C#. I want the best of all worlds in one language/runtime :).

Anyways, I believe the discussion is about using unsigned for array lengths, not unsigned in general. At this point most people seem to have expressed an opinion, including me, and I certainly hope D stays as it is when it comes to the length of an array. I am not convinced in the slightest that signed is the way to go.

November 21, 2014
On Thu, 20 Nov 2014 15:40:39 +0000
Araq via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> Here are some more "opinions": http://critical.eschertech.com/2010/04/07/danger-unsigned-types-used-here/
trying to illustrate something with obviously wrong code is very funny. the whole article then reduces to "hey, i'm writing bad code, and i can teach you to do the same!"

won't buy it.


November 21, 2014
On Fri, 21 Nov 2014 08:10:55 +0000
bearophile via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> > BTW, granted the 0x7FFFFFFF problems exhibit the bugs less often, but paradoxically this can make the bug worse, because then it only gets found much, much later in supposedly tested & robust code.
> 
> Is this true? Do you have some examples of buggy code?
any code which does something like `if (a-b < 0)` is broken. it will work in most cases, but it is broken. you MUST check values before subtracting. and if you must do checks anyway, what is the reason for making length signed?
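
a minimal D sketch of the failure mode above (the variable names are just for illustration):

import std.stdio;

void main()
{
    size_t a = 3, b = 5;
    auto diff = a - b;   // unsigned subtraction wraps: diff == size_t.max - 1
    writeln(diff);       // prints 18446744073709551614 on 64-bit, not -2
    // so `if (a - b < 0)` can never be true; the check has to come first:
    if (a < b)
        writeln("b is bigger, subtracting would underflow");
}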


November 21, 2014
On Fri, 21 Nov 2014 09:23:01 +0000
Kagamin via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> C/C++ programmers disagree: http://critical.eschertech.com/2010/04/07/danger-unsigned-types-used-here/ Why do you think they can't handle signed integers?
being a C programmer, i disagree that the author of the article is a C programmer.


November 21, 2014
On Thu, 20 Nov 2014 13:28:37 -0800
Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 11/20/2014 7:52 AM, H. S. Teoh via Digitalmars-d wrote:
> > What *could* be improved, is the prevention of obvious mistakes in *mixing* signed and unsigned types. Right now, D allows code like the following with no warning:
> >
> > 	uint x;
> > 	int y;
> > 	auto z = x - y;
> >
> > BTW, this one is the same in essence as an actual bug that I fixed in druntime earlier this year, so downplaying it as a mistake people make 'cos they confound computer math with math math is fallacious.
> 
> What about:
> 
>      uint x;
>      auto z = x - 1;
> 
> ?
> 
here z must be `long`. and for `ulong` the compiler must emit an error.
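
for reference, a minimal sketch of what the compiler currently infers for both the quoted mixed case and Walter's example (the suggestion here is that z should become `long` instead):

import std.stdio;

void main()
{
    uint x = 0;
    int  y = 1;
    auto z1 = x - y;   // mixed signed/unsigned: z1 is inferred as uint and wraps
    auto z2 = x - 1;   // the case from the post above: z2 is also uint
    writeln(typeof(z1).stringof, " ", z1);   // uint 4294967295
    writeln(typeof(z2).stringof, " ", z2);   // uint 4294967295
}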


November 21, 2014
On 11/21/14, 5:45 AM, Walter Bright wrote:
> On 11/21/2014 12:10 AM, bearophile wrote:
>> Walter Bright:
>>
>>> All you're doing is trading 0 crossing for 0x7FFFFFFF crossing
>>> issues, and
>>> pretending the problems have gone away.
>>
>> I'm not pretending anything. I am asking which of the two solutions
>> leads to fewer problems/bugs in practical programming. So far I've seen
>> the unsigned solution and I've seen it's highly bug-prone.
>
> I'm suggesting that having a bug and detecting the bug are two different
> things. The 0-crossing bug is easier to detect, but that doesn't mean
> that shifting the problem to 0x7FFFFFFF crossing bugs is making the bug
> count less.
>
>
>>> BTW, granted the 0x7FFFFFFF problems exhibit the bugs less often, but
>>> paradoxically this can make the bug worse, because then it only gets
>>> found
>>> much, much later in supposedly tested & robust code.
>>
>> Is this true? Do you have some examples of buggy code?
>
> http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html
>

"This bug can manifest itself for arrays whose length (in elements) is 2^30 or greater (roughly a billion elements)"

How often does that happen in practice?
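
For context, the bug in the linked article is the classic signed-midpoint overflow in binary search; a minimal sketch (the function names are just for illustration):

int brokenMid(int low, int high)
{
    return (low + high) / 2;        // overflows once low + high exceeds int.max
}

int fixedMid(int low, int high)
{
    return low + (high - low) / 2;  // stays in range for any valid low <= high
}

void main()
{
    int low = 1_073_741_823, high = 1_610_612_735;  // plausible indices in a ~1.5G-element array
    assert(fixedMid(low, high) == 1_342_177_279);
    // brokenMid(low, high) wraps to a negative value here
}
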
November 21, 2014
On Fri, 21 Nov 2014 19:31:23 +1100
Daniel Murphy via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> "bearophile"  wrote in message news:lkcltlokangpzzdzzfjg@forum.dlang.org...
> 
> > From my experience in coding in D they are far more unlikely than sign-related bugs of array lengths.
> 
> Here's a simple program to calculate the relative size of two files, that will not work correctly with unsigned lengths.
> 
> module sizediff;
> 
> import std.file;
> import std.stdio;
> 
> void main(string[] args)
> {
>     assert(args.length == 3, "Usage: sizediff file1 file2");
>     auto l1 = args[1].read().length;
>     auto l2 = args[2].read().length;
>     writeln("Difference: ", l1 - l2);
> }
> 
> The two ways this can fail (that I want to highlight) are:
> 1. If either file is too large to fit in a size_t the result will (probably)
> be wrong
> 2. If file2 is bigger than file1 the result will be wrong
> 
> If length was signed, problem 2 would not exist, and problem 1 would be more likely to occur.  I think it's clear that signed lengths would work for more possible realistic inputs.
no, problem 2 just becomes hidden. while the given code works most of the time, it is still broken.
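
to make that concrete, one way the quoted program could be written so it stays correct with unsigned lengths (a sketch keeping the original structure):

module sizediff;

import std.file;
import std.stdio;

void main(string[] args)
{
    assert(args.length == 3, "Usage: sizediff file1 file2");
    auto l1 = args[1].read().length;
    auto l2 = args[2].read().length;
    // compare before subtracting so the unsigned difference never wraps
    if (l1 >= l2)
        writeln("Difference: ", l1 - l2);
    else
        writeln("Difference: -", l2 - l1);
}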


November 21, 2014
> no, the problem 2 just becomes hidden. while the given code works most
> of the time, it is still broken.

You cannot reliably handle stack overflow or out-of-memory conditions in C, so "fails in extreme edge cases" is true for every piece of software.

"broken" is not a black-white thing. "Works most of the time"
surely is much more useful than "doesn't work". Otherwise you
would throw away your phone the first time you get a busy signal.