February 02, 2014
On 2/2/14, 9:55 AM, Timon Gehr wrote:
> On 02/02/2014 05:55 PM, Andrei Alexandrescu wrote:
>>
>> I think of the following foci for the first half of 2014:
>>
>> 1. Add @nullable and provide a -nullable compiler flag to verify it. The
>> attribute is inferred locally and for white-box functions (lambdas,
>> templates), and required as annotation otherwise. References not
>> annotated with @nullable are statically enforced to never be null.
>
> - I assume that also, references annotated with @nullable are statically
> enforced to never be dereferenced?

Correct.

> - how to eg. create an array of nullable references? struct Nullable!T{
> @nullable T t; alias t this; } Nullable!T[] foo; ? I think that would
> cause problems with IFTI.

I don't know. We'd need to figure out a way.
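
For reference, the wrapper in valid syntax would be something like this (hypothetical, since @nullable is only proposed):

    struct Nullable(T)
    {
        @nullable T t;  // the storage class applies to the field
        alias t this;   // let Nullable!T stand in for a possibly-null T
    }

    Nullable!Object[] objs;  // an array of possibly-null references

The IFTI problem, I take it, is that passing objs[0] to a function template deduces the parameter type as Nullable!Object rather than Object, since alias this doesn't participate in deduction.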


Andrei

February 02, 2014
On 2/2/14, 9:30 AM, Namespace wrote:
> Sounds good. But why @nullable instead of C#'s choice of "Type?" ?

Because the @nullable syntax is already valid in the language (it parses as an attribute), whereas Type? would require new grammar.

Andrei

February 02, 2014
On 2/2/14, 11:58 AM, Walter Bright wrote:
> On 2/2/2014 8:55 AM, Andrei Alexandrescu wrote:
>> 1. Add @nullable and provide a -nullable compiler flag to verify it. The
>> attribute is inferred locally and for white-box functions (lambdas,
>> templates),
>> and required as annotation otherwise. References not annotated with
>> @nullable
>> are statically enforced to never be null.
>
> I want to clarify that @nullable would be a storage class, NOT a type
> constructor. This means it will apply to the variable itself, not its type.
>
> For example:
>
> @nullable T* p;
> auto q = p;      // q is a T*, not @nullable
>
> In a way, it would work much like ref, which is also a storage class and
> not a type constructor.

That raises problems with creating arrays of nullable objects, as Timon Gehr mentioned.
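
Concretely (hypothetical syntax), a storage class gives us no way to talk about the element type:

    @nullable Object o;    // fine: this variable may be null
    @nullable Object[] a;  // does this make the array null, or its elements?
    Object[] b;            // the elements here could never be null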

We need to think this through.

Andrei


February 02, 2014
On 2/2/14, 12:05 PM, Walter Bright wrote:
> On 2/2/2014 9:30 AM, Namespace wrote:
>> Sounds good. But why @nullable instead of C#'s choice of "Type?" ?
>
> Let me rephrase that as "why use a storage class rather than a type
> constructor?"
>
> An excellent question.
>
> One of the big problems with a type constructor is we've already got a
> number of them - const, immutable, shared, inout.

This post is confused, sorry. const, immutable, shared, inout are qualifiers (in addition to being type constructors). General type constructors don't have a particular combinatorial problem.

Andrei

February 02, 2014
On 2/2/14, 12:07 PM, Walter Bright wrote:
> On 2/2/2014 9:55 AM, Timon Gehr wrote:
>> - how to eg. create an array of nullable references? struct Nullable!T{
>> @nullable T t; alias t this; } Nullable!T[] foo; ? I think that would
>> cause
>> problems with IFTI.
>
> @nullable being a storage class, this couldn't be done with @nullable
> just as you cannot create an array of 'ref' types.

I don't see how this restriction would work. On the face of it, it turns out one cannot create an array of Objects that may or may not be null, which is a very odd restriction. Such arrays occur in a lot of places.
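
For example, this is unremarkable code today:

    Object[] slots = new Object[](100);  // all 100 slots start out null

Under the proposal there would be no way to write it: Object[] elements could never be null, and a storage class cannot reach the elements.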

Andrei

February 02, 2014
On 2/2/2014 1:18 PM, Andrei Alexandrescu wrote:
> On 2/2/14, 12:05 PM, Walter Bright wrote:
>> One of the big problems with a type constructor is we've already got a
>> number of them - const, immutable, shared, inout.
>
> This post is confused, sorry. const, immutable, shared, inout are qualifiers (in
> addition to being type constructors). General type constructors don't have a
> particular combinatorial problem.

Yes, I used the wrong term.

February 02, 2014
On 2/2/2014 1:16 PM, Andrei Alexandrescu wrote:
> That creates problems with creating arrays of nullable objects, as Timon Gehr
> mentioned.
>
> We need to think this through.

Yup.

February 02, 2014
On 02/02/2014 09:05 PM, Walter Bright wrote:
> On 2/2/2014 9:30 AM, Namespace wrote:
>> Sounds good. But why @nullable instead of C#'s choice of "Type?" ?
>
> Let me rephrase that as "why use a storage class rather than a type
> constructor?"
>
> An excellent question.
>
> One of the big problems with a type constructor is we've already got a
> number of them - const, immutable, shared, inout. These and their
> conversions have taken quite a while for people to get comfortable with,
> and there are still some issues in the language arising from their
> interactions.
> ...

Nullable interacts with those type constructors basically in the same way as the fixed-size array type constructor [1]. Regarding implicit conversion, I think one good form in which to state the implicit conversion rules, for later procedural implementation, is the following:

  A converts to B
 ─────────────────
  A converts to B?

  A converts to B
 ──────────────────
  A? converts to B?

  e converts to B?
 ─────────────────  (flow analysis proves e non-null)
  e converts to B
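
In code terms (hypothetical T? syntax):

    class B {}
    class A : B {}

    A a = new A;
    B? b1 = a;   // first rule: A converts to B, hence also to B?
    A? a2;
    B? b2 = a2;  // second rule: the conversion lifts to A?
    if (a2 !is null) { B b3 = a2; }  // third rule: flow analysis proves a2 non-null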

Where would there be potentially bad interactions?

> Adding another type constructor will double the number of cases. I worry
> that D would collapse under the strain. This applies as well to type
> constructors for "borrowed", etc.
> ...

This sounds as if there were a structural problem in DMD. (Adding the relevant implicit conversion rules would be easy given the current state of my own front-end implementation effort.) How are type qualifiers represented in the AST? How is the implicit conversion checker structured?

> On the other hand, D took the risky leap into the unknown by making
> 'ref' a storage class rather than a type constructor (in C++ it's a type
> constructor). This has turned out to be a satisfying win. I suspect it
> will be for @nullable, too, though we have to work through the cases to
> find out for sure.

These cases have almost nothing in common. (e.g., 'ref' does not influence implicit conversion behaviour.)

February 03, 2014
On 2 February 2014 06:07, Adam Wilson <flyboynw@gmail.com> wrote:

> On Sat, 01 Feb 2014 02:15:42 -0800, Manu <turkeyman@gmail.com> wrote:
>
>>
>> Why is ARC any worse than GC? Why is it even a compromise at the high
>> level?
>> Major players have been moving away from GC to ARC in recent years. It's
>> still a perfectly valid method of garbage collection, and it has the
>> advantage that it's intrinsically real-time compatible.
>>
>>
> Define Major Players? Because I only know about Apple, but they've been doing ARC for almost a decade, and IIRC, like GCs, it's not universally loved there either. Microsoft is desperately trying to get people to move back to C++, but so far the community has spoken with a resounding "You can pry C#/.NET from our cold, dead hands." Java has shown no particular interest in moving away from GCs, probably because their GC is best in class. Even games are starting to bring in GCs (The Witcher 2, for example, and almost all of the mobile/casual games market, which is actually monetarily bigger than the triple-A games market).


You didn't say why ARC is worse than GC.

I have my suspicions that the Witcher guys would have been perfectly happy
(perhaps MORE happy) with ARC, if it were possible.
It's possible to write a GC in C++, but it's impossible to use ARC without
compiler support, which C++ does not and never will have, and as a result,
it wasn't an option for them.
You can't say they picked one over the other when one simply wasn't
available. Their use of a GC shows they appreciate the convenience of
garbage collection, but it doesn't say anything about GC vs ARC.
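
The distinction matters because ARC means the compiler inserts the
retain/release operations itself and optimises redundant pairs away; a
library type can mimic the mechanics but not the automation. A toy,
non-atomic sketch (no weak references, purely illustrative):

    import core.stdc.stdlib : malloc, free;

    struct RC(T)
    {
        private static struct Payload { size_t count; T value; }
        private Payload* p;

        this(T value)  // toy: assumes T is a plain value type
        {
            p = cast(Payload*) malloc(Payload.sizeof);
            p.count = 1;
            p.value = value;
        }
        this(this) { if (p) ++p.count; }               // copy: retain
        ~this() { if (p && --p.count == 0) free(p); }  // scope exit: release
        ref inout(T) get() inout { return p.value; }
    }

With real ARC, every retain/release above is generated (and elided where
provably redundant) by the compiler for plain references, which is exactly
what a C++ library type can't do.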

>> I don't think realtime software is becoming an edge case by any means,
>> maybe 'extreme' realtime is, but that doesn't change the fact that the GC still causes problems for all realtime software.
>>
>>
> Yes, but my point is that there is very little real-time software written as a percentage of all software written, which, by definition, makes it an edge case. Even vehicle control software is no longer completely real-time. [I just happen to know that because that's the industry I work in. Certain aspects are, with the rest scheduled out.] And more to the point, D has made no claim about its suitability for RT software, and I have seen little demand for it outside a very small, very vocal minority that is convinced D would be the dynamic resource management panacea if everyone would just do as they say.


So answer: why is a GC so much preferable to ARC?
GC forces you into one (poorly implemented) paradigm, which is
fundamentally incompatible with many tasks.
ARC might force you to think about weak references... but is that really as
hard as you say?
It's possible an ARC solution might optionally have a GC in the background
to collect cycles (for those who, as you worry, aren't interested in dealing
with the cognitive load of circular references). This is fine, as long as
the aggressive users can turn that off and deal with their circular
reference issues directly. Under an approach like this there are far more
options in how different users can approach the garbage collection in their
apps, and ALL USERS would still have access to phobos.
I don't see them as mutually exclusive. They are quite complementary.

>> I personally believe latency and stuttering is one of the biggest usability
>> hindrances in modern computing, and it will become a specific design focus
>> in software of the future. People are becoming less and less tolerant of
>> latency in all forms; just consider the success of iPhone compared to
>> competition, almost entirely attributable to the silky smooth UI
>> experience. It may also be a telling move that Apple switched to ARC
>> around
>> the same time, but I don't know the details.
>>
>>
> I use .NET every day; seriously, not one day goes by when I haven't touched some C# code. I can happily report that you are *ahem* wrong. Even Visual Studio 2013 doesn't stutter often, and only when I am pulling in some massive third-party module that may or may not be well written. Ironically, it is VisualD, it of D code fame, that most slows down VS for me. I write software in C# every day, and I can happily report that I have yet to have a problem with stuttering in my code that wasn't of my own devise. (If you forget to asynchronously call that web service, it WILL stutter.)
>
> And that's a point. I can write software in C# that works well without having to worry about circular references, or whether my data prematurely falls out of scope, or any of the other details that are associated with ARC. And for my not-effort, I pay an effective cost of 0. Win-win. You're demanding that, to suit your needs, we make a massive philosophical and language change to D that will incur HIGHER cognitive load on programmers, for something that will not measurably improve the general use case? Ahem, that's good for D how?
>

Can you tell me your experience with C# on embedded hardware with
bugger-all memory? I can; it doesn't work (although to be fair, the GC is
not the only issue there).
You also seem to think D's GC experience is comparable to C#'s. It's not:
D has a terrible GC, and there seems to be almost no motion to fix or
improve it.

Can it EVER be anywhere near as good as C# or Java? Where are the proposals
to get it there?
And even then, what is the resource overhead? How much ambient memory
overhead/loss will I suffer as a consequence? What if my platform is only
single-core?

>> I also firmly believe that if D - a native compiled language familiar to
>> virtually all low-level programmers - doesn't have any ambition to service
>> the embedded space in the future, what will? And why not?
>> The GC is the only thing inhibiting D from being a successful match in
>> that
>> context. ARC is far more appropriate, and will see it enter a lot more
>> fields.
>> What's the loss?
>>
>>
> Cognitive load. How many details does the programmer have to worry about per line of code? Ease of use. A GC is easier to use in practice. You can say they should learn to use ARC because it's better (for certain definitions of better), but they won't. They'll just move on. I'd say that's a pretty big loss.
>
> And based on my conversations with Walter, I don't think that D was ever intended to make a play for the embedded space. If D can be made to work there, great, but Walter, as near as I can tell, has no interest in tying the language in knots to make it happen. So that's a non-issue. And let's be honest, the requirements of that space are fairly extreme.


But D ticks all the boxes, except that one... and it's an important field that isn't covered by the rest of the landscape of emerging or trendy languages.

>> I think it's also telling that newcomers constantly raise it as a massive
>> concern, or even a deal-breaker. Would they feel the same about ARC? I
>> seriously doubt it. I wonder if a poll is in order...
>> Conversely, would any of the new-comers who are pro-GC feel any less happy
>> if it were ARC instead? $100 says they probably wouldn't even know, and
>> almost certainly wouldn't care.
>>
>
> I DON'T see a majority of newcomers raising an issue with the GC; I only see it from newcomers with some pretty extreme latency requirements, primarily from the real-time crowd. The majority of newcomers aren't interested in RT work. I think you're falling prey to confirmation bias here.


I never said anything about a majority; re-read my statement. But it does
come up all the time.
Are you surprised that the RT crowd are attracted to D? It's precisely what
they've been waiting for all these years.
There's just this one single stumbling block that seems to make every
single one of them very nervous.

I attended a small graphics/systems programming event yesterday; there were about 40-50 people there. I hadn't caught up with the organisers for years. They asked me what I had been up to, and I mentioned I had been working a lot with D in my spare time. They were interested, and conversations went a lot like this:

[among other comments]
"Oh yeah, I heard about that... we'd love you to come and give a talk about
it some time."
(more people overheard and approached the conversation)
"I was really interested in D, but then I realised that the GC isn't really
optional like the website claims, and stopped"
(more people approach)
"I tried it once, but I don't think it's practical for the same reason I
had to give up C#"
(small crowd, ~10 interested in hearing about it)
"I love C#, if I could use it for work, I would!" (note: for my money, D
actually offers to my kind what C# teases you with, but hangs just out of
reach)
"so, can you actually use D without the GC?"
me: "Erm. Well..."

Etc.

There was really surprising interest, and I ended up drinking beers with 4
guys who were really interested to hear more about it.
Many of them HAD heard about D, but had immediately written it off, like C#.
There is a massive potential audience here. I've been saying for a long
time that D is the language game devs and other realtime programmers want,
but they're very conservative, and this particular issue is an absolute red
flag.


February 03, 2014
On 2/2/2014 3:40 PM, Timon Gehr wrote:
> Nullable interacts with those type constructors basically in the same way as the
> fixed-size array type constructor [1]. Regarding implicit conversion, I think
> one good form to state the implicit conversion rules in, for later procedural
> implementation, is the following:
>
>   A converts to B
> ──────────────────
>   A converts to B?
>
>    A converts to B
> ───────────────────
>   A? converts to B?
>
>   e converts to B?
> ────────────────── (flow analysis proves e non-null)
>   e converts to B
>
> Where would be potentially bad interactions?

For starters, grep through the source code for all the implicit conversions. Then think about how it fits in with match levels, function overloading, template overloading, partial ordering, ?:, covariance, contravariance, name mangling, TypeInfo, default initialization, whether general functions will have to be written twice (once for T, again for T?), auto function returns, and that's just off the top of my head.
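
To pick just one (hypothetical T? syntax):

    void f(Object o);
    void f(Object? o);

    Object o = new Object;
    f(o);     // exact match for the first, implicit conversion for the second
    f(null);  // presumably only the second applies, but at what match level?

Every feature in the list above needs an answer like that, and all the answers have to agree with each other.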

It's not just writing a truth table and throwing it over the wall.