August 20, 2016
On 19 August 2016 at 00:50, John Smith via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> Well there are some things I feel could be improved, a lot of the things are really just minor but what is a deal breaker for me mostly is the compilers. The GCC and Clang implementations are really far behind in terms of the version, so they are missing a lot of features. A lot of the features that I'd want to use D for.

This is a constant vicious cycle. Sometimes I wonder how much better it would be for everyone if D were instead defined by a spec, rather than by an implementation that adds small feature changes in an ad hoc manner as it sees fit.
August 20, 2016
On 20/08/16 00:51, Walter Bright wrote:
> On 8/18/2016 7:59 PM, Adam D. Ruppe wrote:
>> Alas, C insisted on making everything int all the time and D followed
>> that :(
>

Actually, Adam's suggestion on how things should work is precisely how C works (except it trails off at int).

a = b + c;

If b and c are both a byte, and a is a byte, the result is unpromoted. If a is a short, the result is promoted. I know the mechanism is completely different from what Adam was suggesting, but the end result is precisely the same.

> One would have to be *really* sure of their ground in coming up with
> allegedly better rules.
>

Would "no narrowing implicit casts" be considered such a better rule? :-)

Again, I'm not saying it's a bad rule, just that it does have consequences. What I'm saying is that we are, already, changing things.

Shachar
August 20, 2016
On 8/20/2016 8:25 AM, Shachar Shemesh wrote:
> Actually, Adam's suggestion on how things should work is precisely how C works

No, it's subtly different. Which is my point that one must be very, very careful when proposing different behavior.

August 21, 2016
On 20/08/16 21:00, Walter Bright wrote:
> On 8/20/2016 8:25 AM, Shachar Shemesh wrote:
>> Actually, Adam's suggestion on how things should work is precisely how
>> C works
>
> No, it's subtly different. Which is my point that one must be very, very
> careful when proposing different behavior.
>

Can you give an example of an expression that would yield different results under the two modes?


To frame the discussion in a constructive way, I'll suggest an algorithmic definition of (my interpretation of) Adam's proposal:

During static analysis, keep both the "most expanded" and the "least expanded" type of the expression parsed so far. "Least expanded" is the largest type actually used in the expression.

Upon use of the value, resolve which type to actually use for it. If the use type requests a type between least and most, use that type for evaluating the entire expression. If the use requests a type outside that range, use the one closest (and, if the use is below the range, complain about narrowing conversion).

If more than one use is possible (i.e. - overloading), use the largest one applicable.


I believe the above solves my problem, without losing compatibility with C (counter examples welcome, so we can continue the discussion in a productive way), and without foregoing erroring out on narrowing conversions.

Shachar
August 21, 2016
On Friday, 19 August 2016 at 02:59:40 UTC, Adam D. Ruppe wrote:
> On Thursday, 18 August 2016 at 22:50:27 UTC, John Smith wrote:
>> Garbage collector is in a few libraries as well. I think the only problem I had with that is that the std.range library has severely reduced functionality when using static arrays.
>
> std.range is one of the libraries that has never used the GC much. Only tiny parts of it ever have.
>
> Moreover, dynamic arrays do not necessarily have to be GC'd. Heck, you can even malloc them if you want to (`(cast(int*)malloc(int.sizeof))[0 .. 1]` gives an int[] of length 1).
>
> This has been a common misconception lately... :(


I never really said that was the case. The restriction was caused by the need to be able to change the length of the array, which you can't do: `(cast(int*)malloc(int.sizeof))[0 .. 1].length = 0`. If that wasn't std.range then it was something else; it's been a while since I used it.
August 21, 2016
On 08/21/2016 07:12 AM, Shachar Shemesh wrote:
> During static analysis, keep both the "most expanded" and the "least
> expanded" type of the expression parsed so far. "Least expanded" is the
> largest type actually used in the expression.

What's "most expanded"?

> Upon use of the value, resolve which type to actually use for it. If the
> use type requests a type between least and most, use that type for
> evaluating the entire expression. If the use requests a type outside
> that range, use the one closest (and, if the use is below the range,
> complain about narrowing conversion).

So when only ubytes are involved, all calculations would be done on ubytes, no promotions, right? There are cases where that would give different results than doing promotions.

Consider `ubyte(255) * ubyte(2) / ubyte(2)`. If the operands are promoted to a larger type, you get 255 as the result. If they are not, you have the equivalent of `ubyte x = 255; x *= 2; x /= 2;` which gives you 127.
August 21, 2016
On 08/20/2016 11:25 AM, Shachar Shemesh wrote:
> On 20/08/16 00:51, Walter Bright wrote:
>> On 8/18/2016 7:59 PM, Adam D. Ruppe wrote:
>>> Alas, C insisted on making everything int all the time and D followed
>>> that :(
>>
>
> Actually, Adam's suggestion on how things should work is precisely how C
> works (except it trails off at int).
>
> a = b + c;
>
> if b and c are both a byte, and a is a byte, the result is unpromoted.
> If a is a short, the result is promoted. I know the mechanism is
> completely different than what Adam was suggesting, but the end result
> is precisely the same.

Consider:

void fun(byte);
void fun(int);
fun(b + c);

Under the new rule, this code (and much more) would silently change behavior. How would that get fixed?

>> One would have to be *really* sure of their ground in coming up with
>> allegedly better rules.
>>
>
> Would "no narrowing implicit casts" be considered such a better rule? :-)
>
> Again, I'm not saying it's a bad rule, just that it does have consequences.
> What I'm saying is that we are, already, changing things.

Again: choose your fights/points and fight/make them well. This is not one worth having.


Andrei

August 21, 2016
On 08/21/2016 01:12 AM, Shachar Shemesh wrote:
> I'll suggest an algorithmic definition of (my interpretation of) Adam's
> proposal:
>
> During static analysis, keep both the "most expanded" and the "least
> expanded" type of the expression parsed so far. "Least expanded" is the
> largest type actually used in the expression.
>
> Upon use of the value, resolve which type to actually use for it. If the
> use type requests a type between least and most, use that type for
> evaluating the entire expression. If the use requests a type outside
> that range, use the one closest (and, if the use is below the range,
> complain about narrowing conversion).
>
> If more than one use is possible (i.e. - overloading), use the largest
> one applicable.

How is this different than VRP? -- Andrei
August 21, 2016
On 8/21/2016 2:47 AM, ag0aep6g wrote:
>> Upon use of the value, resolve which type to actually use for it. If the
>> use type requests a type between least and most, use that type for
>> evaluating the entire expression. If the use requests a type outside
>> that range, use the one closest (and, if the use is below the range,
>> complain about narrowing conversion).
>
> So when only ubytes are involved, all calculations would be done on ubytes, no
> promotions, right? There are cases where that would give different results than
> doing promotions.
>
> Consider `ubyte(255) * ubyte(2) / ubyte(2)`. If the operands are promoted to a
> larger type, you get 255 as the result. If they are not, you have the equivalent
> of `ubyte x = 255; x *= 2; x /= 2;` which gives you 127.

That's right.

The thing is, programmers are so used to C integral promotion rules they often are completely unaware of them and how they work, despite relying on them. This is what makes changing the rules so pernicious and dangerous.

I've had to explain the promotion rules to professionals with 10 years of experience in C/C++, and finally stopped being surprised at that.

D does change the rules, but only in a way that adds compile time errors to them. So no surprises.

(Nobody knows how function overloading works in C++ either, but that's forgivable :-) )
August 21, 2016
On Sunday, 21 August 2016 at 17:26:24 UTC, Walter Bright wrote:
> On 8/21/2016 2:47 AM, ag0aep6g wrote:
>>> Upon use of the value, resolve which type to actually use for it. If the
>>> use type requests a type between least and most, use that type for
>>> evaluating the entire expression. If the use requests a type outside
>>> that range, use the one closest (and, if the use is below the range,
>>> complain about narrowing conversion).
>>
>> So when only ubytes are involved, all calculations would be done on ubytes, no
>> promotions, right? There are cases where that would give different results than
>> doing promotions.
>>
>> Consider `ubyte(255) * ubyte(2) / ubyte(2)`. If the operands are promoted to a
>> larger type, you get 255 as the result. If they are not, you have the equivalent
>> of `ubyte x = 255; x *= 2; x /= 2;` which gives you 127.
>
> That's right.
>
> The thing is, programmers are so used to C integral promotion rules they often are completely unaware of them and how they work, despite relying on them. This is what makes changing the rules so pernicious and dangerous.
>
> I've had to explain the promotion rules to professionals with 10 years of experience in C/C++, and finally stopped being surprised at that.
>
> D does change the rules, but only in a way that adds compile time errors to them. So no surprises.
>
> (Nobody knows how function overloading works in C++ either, but that's forgivable :-) )

I have a speculative answer to the topic of this thread, which I had been wanting to write a comment about even before I saw the last two comments in this thread.

Look in the D Language Reference, the spec. Look for the word "error" there. It occurs many times, in code that is erroneous in the spec.

Also, notice the number of question marks in the code sample near the end of the page on Declarations. Notice that the questions raised are not answered there.

The D spec is full of errors, literally, and sometimes teasers. It seems improbable to me that D will become popular unless that, and the attitude it represents, is first fixed. It's a sign of unsoundness, at least about how to teach a language or how to standardize one, that comes through clearly at a surface level even to someone who doesn't have specific disagreements with the design of D.

Erroneous code should be omitted from a spec, or at least clearly marked such as by a red background.

Mixing lines commented as errors together with later lines commented as "ok" in the same code quote blocks makes it even more confusing. (Someone who doesn't know any D yet might even think that D gives you the results of whatever lines are correct, skipping the errors as if they were empty lines. Programming languages differ enough that that's a possible feature, the sort of feature that might even get a language described as "exciting".)

Anyone feel free to correct me or say I'm all wrong about this. I'm not an expert on teaching programming languages. I'm just a learner, and just as a hobby.