August 19, 2016
On Thursday, 18 August 2016 at 22:50:27 UTC, John Smith wrote:
> Garbage collector is in a few libraries as well. I think the only problem I had with that is that the std.range library has severely reduced functionality when using static arrays.

std.range is one of the libraries that has never used the GC much; only tiny parts of it ever have.

Moreover, dynamic arrays do not necessarily have to be GC'd. Heck, you can even malloc them if you want to (`(cast(int*)malloc(int.sizeof))[0 .. 1]` gives an int[] of length 1).

This has been a common misconception lately... :(
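
For instance, here is a minimal sketch (just an illustration, not library code) of a malloc-backed slice being used with std.range, with no GC allocation involved:

import core.stdc.stdlib : free, malloc;
import std.range : retro;

void main()
{
    // A dynamic-array slice over malloc'd memory -- the GC never sees it.
    int[] a = (cast(int*) malloc(3 * int.sizeof))[0 .. 3];
    a[0] = 1; a[1] = 2; a[2] = 3;

    assert(a.retro.front == 3);   // std.range treats it like any other slice

    free(a.ptr);
}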


> Well you could say the same for int. Why isn't "int + int = long"?

Yeah, if I were designing things, I'd make it promote the smaller size to the larger size in the expression. So byte + byte = byte, and byte + short goes to short, and so on. Moreover, I'd make it check both sides of the equation, so int = byte + byte would convert the bytes to ints first too. It'd be quite nice, and I think it would work in most cases... it might even work the same as C in most cases, though I'm not sure.

(The one thing people might find weird is that auto a = b + c; means a is a byte if b and c are bytes. In C, it would be an int, but who writes that in C anyway?)
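
To make that concrete, a quick sketch contrasting today's behavior with the proposed rule (the commented-out lines are hypothetical):

void main()
{
    byte a = 1, b = 2;

    // Today, C-style promotion makes a + b an int, so narrowing it
    // back down to byte needs an explicit cast:
    byte c = cast(byte)(a + b);

    // Under the rule sketched above, these would just work:
    //   byte d = a + b;   // byte + byte -> byte
    //   auto e = a + b;   // e would be a byte (it's an int in C)
}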


Alas, C insisted on making everything int all the time and D followed that :(

August 19, 2016
Am Thu, 18 Aug 2016 22:50:27 +0000
schrieb John Smith <gyroheli@gmail.com>:

> Well you could say the same for int. Why isn't "int + int = long"? Right now it is following the rule "int + int = int".

I believe in C, int reflects the native machine word that the CPU uses to perform arithmetic. Together with the undefined overflow behavior of signed types (due to allowing different HW implementations), I suppose it was the best bet to at least widen smaller types to that type.

Even on today's amd64 CPUs int/uint remain the most efficient integral type for multiplication and division.

If we hypothetically switched to a ubyte+ubyte=ubyte semantic, then code like this breaks silently:

ubyte a = 210;
ubyte b = 244;
float f = 1.1 * (a + b); // today a + b is an int holding 454; with ubyte
                         // arithmetic it would silently wrap to 198

Otherwise, instead of casting uints to ubytes, you would now start casting ubytes to uints all over the place. What we have in D is the guarantee that the result will reliably be at least 32 bits.

-- 
Marco

August 19, 2016
Am Mon, 15 Aug 2016 10:54:11 -0700
schrieb Ali Çehreli <acehreli@yahoo.com>:

> On 08/14/2016 07:07 AM, Andrei Alexandrescu wrote:
>  > On 08/14/2016 01:18 AM, Shachar Shemesh wrote:
> 
>  >> Also, part of our
>  >> problems with this is that introspection also does not see refs, which
>  >> causes the following two functions to report the same signature:
>  >>
>  >> void func1(int arg);
>  >> void func2(ref int arg);
>  >
>  > I actually think you can do introspection on any argument to figure
>  > whether it has "ref".
> 
> Yes, it exists. However, I tried std.traits.Parameters but that wasn't it. A colleague reminded us that it's std.traits.ParameterStorageClassTuple:
> 
>    http://dlang.org/phobos/std_traits.html#ParameterStorageClassTuple
> 
> Ali
> 

In fact, std.traits.Parameters _retains_ ref as long as you don't unpack the tuple:

void foo(ref int);
void bar(Parameters!foo args);

'bar' is now "void function(ref int)"

You can then access 'args.length' and 'args[n]' to get at each argument or pass 'args' on as is to the next function.
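
A minimal, self-contained sketch (the names 'inc' and 'callInc' are just for illustration):

import std.traits : Parameters;

void inc(ref int x) { ++x; }

// The tuple is never unpacked, so the 'ref' storage class survives.
void callInc(Parameters!inc args)
{
    inc(args);   // forwards the argument, still by ref
}

void main()
{
    int n = 1;
    callInc(n);
    assert(n == 2);   // 'n' was modified through the whole ref chain
}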

You can also create a ref argument for 'T' out of thin air by writing:

alias RefArg = Parameters!((ref T) {});

Just remember that 'RefArg' is again a tuple and you
always need to access args[0] to use it.
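
For instance (a hypothetical 'RefInt'/'bump' pair; the lambda parameter is typed and named here to keep the example unambiguous):

import std.traits : Parameters;

// A one-element tuple whose single member carries the 'ref' storage class.
alias RefInt = Parameters!((ref int _) {});

void bump(RefInt args)   // effectively 'void bump(ref int)'
{
    ++args[0];
}

void main()
{
    int n = 3;
    bump(n);
    assert(n == 4);
}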

-- 
Marco

August 19, 2016
On Friday, 19 August 2016 at 08:31:39 UTC, Marco Leise wrote:
> If we hypothetically switched to a ubyte+ubyte=ubyte semantic, then code like this breaks silently:

However, if it took the entire statement into account, it could handle that... by my rule, it would see there's a float in there and thus automatically cast a and b to float before doing anything.

The compiler would parse

Assign(float, Multiply(float, Parenthetical(Add(a, b))))

In a semantic transformation step, it would see that since the lhs is float, the rhs gets cast to float; it then applies that recursively through the whole tree, so a and b are promoted to float and the bits are never lost.

I really do think that would work.
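
Using Marco's numbers, the hypothetical rewrite would amount to something like:

ubyte a = 210;
ubyte b = 244;
// The float on the lhs propagates down, so the addition is done in
// float and nothing wraps:
float f = 1.1 * (cast(float) a + cast(float) b); // 499.4..., not 1.1 * 198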


Alas, I don't see D ever changing anyway; this is too deep in its C bones.
August 19, 2016
Am Fri, 19 Aug 2016 13:36:05 +0000
schrieb Adam D. Ruppe <destructionator@gmail.com>:

> On Friday, 19 August 2016 at 08:31:39 UTC, Marco Leise wrote:
> > If we hypothetically switched to a ubyte+ubyte=ubyte semantic, then code like this breaks silently:
> 
> However, if it took the entire statement into account, it could handle that... by my rule, it would see there's a float in there and thus automatically cast a and b to float before doing anything.

Float math is slow compared to integer math, and precision loss
occurs when this rule is also applied to (u)int and (u)long,
since only the deprecated (as per the amd64 spec) real type is
large enough to hold 64-bit integers.
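
For example, a minimal sketch of the precision loss when 64-bit integers are funneled through double:

import std.stdio : writeln;

void main()
{
    ulong big = (1UL << 53) + 1;   // 9_007_199_254_740_993
    double d = big;                // double only has a 53-bit mantissa
    writeln(big);                  // 9007199254740993
    writeln(cast(ulong) d);        // 9007199254740992 -- the low bit is lost
}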

-- 
Marco

August 19, 2016
On Friday, 19 August 2016 at 15:01:58 UTC, Marco Leise wrote:
> Float math is slow compared to integer math and precision loss
> occurs when this rule is also applied to (u)int and (u)long
> with only the deprecated (as per amd64 spec) real type being
> large enough for 64-bit integers.

You're working with float anyway, so I believe the price is paid even by today's C rules.

The intermediates might be different though, I'm not sure.
August 19, 2016
On 8/19/2016 8:19 AM, Adam D. Ruppe wrote:
> You're working with float anyway, so I believe the price is paid even by today's
> C rules.

Nope, the operands of integral sub-expressions are not promoted to float.
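
For example (a minimal sketch of what the current rules actually do):

void main()
{
    int a = 2_000_000_000;
    int b = 2_000_000_000;
    // a + b is evaluated as int arithmetic first (wrapping to a negative
    // value); only that int result is converted to double for the multiply.
    double d = 1.1 * (a + b);
    assert(d < 0);
}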

> The intermediates might be different though, I'm not sure.

August 19, 2016
On 8/18/2016 7:59 PM, Adam D. Ruppe wrote:
> Alas, C insisted on making everything int all the time and D followed that :(

One would have to be *really* sure of their ground in coming up with allegedly better rules.

August 20, 2016
On Monday, 1 August 2016 at 15:31:35 UTC, Emre Temelkuran wrote:
> I always ignored D, i prejudiced that D failed, because nobody were talking about it. I decided to check it yesterday, it has excellent documentation, i almost covered all aspects. I think D is much better than the most of the other popular langs. It's clear as JScript, Swift, Julia and PHP, also it's capable enough as C,C++. I think D deserves a bigger community.
>
> Why people need NodeJS, Typescript etc, when there is already better looking lang?
> Everyone talking about how ugly is Golang. So why people are going on it? Performance concerns? Why languages that are not backed up by huge companies are looking like they failed?

    I lurk this forum every so often, since the time when there was this funny guy who ranted like a rude drunkard.
    At work I developed in dBase, Turbo Pascal, Delphi, Visual Basic, ASP, Java, C# and PHP, roughly in that order, but most of my few hobby projects and freelance work have been in Delphi, then Lazarus.
    D seems to have improved a lot, and I even downloaded the dlangide source once to try compiling and running it. I ran into some dub dependency problem and forgot about it.
    I guess bundling the tools, the IDE and a couple of good demos into an easily downloadable installation package (or a package repo, as in Debian) could be of some help. See for example:

https://sourceforge.net/projects/lazarus/files/

http://www.oracle.com/technetwork/java/javase/downloads/jdk-netbeans-jsp-142931.html?ssSourceSiteId=otnes

Hope this helps,

Daniel

August 20, 2016
On Saturday, 20 August 2016 at 00:19:30 UTC, Daniel wrote:
> On Monday, 1 August 2016 at 15:31:35 UTC, Emre Temelkuran wrote:
> Hope this helps,
>
> Daniel

    BTW, I'm not the guy in that Gravatar image... ;-)

Daniel