June 07, 2016
On 6/7/2016 1:13 PM, Brad Roberts via Digitalmars-d wrote:
> Yes, it's hard to implement.  Shrug, you signed up for it.

I work on things pretty much on a benefit/cost basis, maximizing the ratio. Working on something that has a clear, tremendous cost and a benefit of merely possibly closing a hole that nobody has run into after many years of use puts it near the bottom of the list of productive things to do.

This is how all engineering projects work.

However, that should not dissuade anyone who believes it is worth their own effort to work on it, as everyone has their own benefit/cost function.

BTW, it is a nice idea to require mathematical proofs of code properties, but real-world programming languages have turned out to be remarkably resistant to the construction of such proofs. As I recall, Java was initially proven memory safe, until someone found a hole in the proof. And so on and so forth for every malware attack vector people find. We plug the problems as we find them.
June 07, 2016
On 6/7/2016 12:01 PM, Adam D. Ruppe wrote:
> the key point: the default behavior means apathy ruins the feature. Use a library
> written by someone who didn't care to write @nogc, but nevertheless didn't
> actually use the gc? too bad.

I don't think it is that bad. Lots of formerly acceptable C coding practice is still perfectly allowed by the C Standard, but has become laughed at in practice, such as:

  #define BEGIN {
  #define END }

The point being that a culture of "best practices" does arise and evolve over time, and professional programmers know it. Such has also arisen for D over things like unittests and Ddoc comments. Would you even want to use a library not written by professionals who have a reasonable awareness of best practices? Would you want to use a library where the maintainers refuse to use @nogc even if they aren't using the gc?
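To make the point concrete, here is a minimal sketch in D of what the "best practice" annotation looks like; the function and its name are made up for illustration:

```d
// A hypothetical library function: since it never touches the GC, the
// author advertises that with @nogc (plus pure/nothrow/@safe), so that
// @nogc client code is free to call it.
@nogc pure nothrow @safe
int sumOfSquares(const(int)[] values)
{
    int total = 0;
    foreach (v; values)
        total += v * v;
    return total;
}

@nogc void caller()
{
    // compiles precisely because the library author marked sumOfSquares @nogc
    static immutable int[] data = [1, 2, 3];
    auto s = sumOfSquares(data);
}
```

An @nogc caller can use the function only because the author bothered to annotate it, which is exactly the culture of best practices being argued for above.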
June 07, 2016
On Tuesday, 7 June 2016 at 08:57:33 UTC, Russel Winder wrote:
> On Mon, 2016-06-06 at 19:57 +0000, poliklosio via Digitalmars-d wrote:
>> […]
>> 
>> I should have been more specific here. I mean I want to eliminate GC in my code. I don't mind if you or anyone else uses GC. Even I use GC languages when writing things like scripts, so I'm not a no-GC-because-I-say-so person.
>
> Indeed. D has a GC by default which can be switched off. This is good. That D needs a better GC is an issue.

For me, much bigger issues are:
- The language is less powerful with the GC switched off.
- Mixing GC and no-GC code is still an experimental practice that only a few experts know how to do properly.

A better GC is not the bigger issue for me, as I'm not going to use it much.
A better GC is of course advantageous for adoption; I just have a strong impression that there are more important things, like nailing easy setup of editors, and providing a guide for programming **without** the GC and for mixing GC and no-GC code.

You have to distinguish switching the GC off (which implies two languages, two communities, and two separate standard libraries, all with some overlap) from being able to mix GC and non-GC code in the same program. The problem is that, AFAIK, the second is not a viable methodology outside a tightly controlled environment, where you select the libraries you use very carefully and limit their number.

>> (...)
> My feeling is there is no point in over-thinking this, or being abstract. C++ can have a GC but doesn't. Rust can have a GC but doesn't. Python has a GC. Go has a GC. Java has a GC. D has a GC that you can turn off. That seems like a USP to me. Whether this is good or bad for traction is down to the marketing and the domain of use.

I'm trying to be as far from abstract as I can. Having a GC is hardly a unique selling point. As for switching it off, see the issues above. Once they are solved to the point where experts can get no-GC work done easily, it will be a USP.

>> D's power is in its native-but-productive approach. This is an improvement in C++ niches, not a simplified language for banging end-user apps.
>
> Productive is way, way more important than native.

For some people, native is necessary. For them, D is the way to get productive. Others would ideally use D as well, but currently there are more productive options for them, like C# or Python.

> […]
>> 
>> Why would they not use D? D is a much better language for them as well. To give some examples, in C++ code there is a ton of boilerplate, while D code hardly has any. Also, the number of bugs in a D program is smaller due to easier unittesting. Also, templates don't cause day-long stop-and-learn sessions as in C++. I don't think those people are a lost market.
>
> Can we drop the unit and just talk about testing. Unit, integration and system testing are all important, focusing always on unit testing is an error.

There's nothing wrong with discussing unit testing on its own. In fact, it is very relevant here, because it's unit testing that D makes easier. More coarse-grained testing can be done just as easily in any other language: you build executables for the various subsystems and variants of your program and run them in test scenarios.
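For the record, this is the mechanism being referred to: D's built-in unittest blocks sit next to the code they test and run with a compiler switch, which is what lowers the barrier compared to setting up a separate harness. `clamp` here is a made-up example:

```d
// clamp is a hypothetical function; the unittest block beside it runs
// whenever the module is compiled with -unittest (e.g. `dmd -unittest -run app.d`)
int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

unittest
{
    assert(clamp(5, 0, 3) == 3);   // clamped to the upper bound
    assert(clamp(-1, 0, 3) == 0);  // clamped to the lower bound
    assert(clamp(2, 0, 3) == 2);   // already in range
}
```

No test framework, registration, or separate build target is needed, which is the point being made.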

> As to why not use D? The usual answer is that "no else is using D so we won't use D" for everyone except early adopters.
>
> D needs to remanufacture an early adopter feel. It is almost there: LDC announcing 1.0.0, Dub getting some activity, new test frameworks (can they all lose the "unit" please in a renaming), rising in the TIOBE table. This can all be used to try and get activity. However, it needs to be activity of an early adopter style, because there are so many obvious problems with the toolchains and developer environments. So let's focus on getting these things improved, so that we can then attract the people who will only come to a language that has a sophisticated developer experience.

As long as those are improvements in getting started fast and in time-to-market for D apps, then yes, and that's probably ten times more important than both the slow GC and the poor no-GC experience.

>> This is a big issue now due to lack of a comprehensive guide, as well as holes in the language and phobos (strings, exceptions, delegates). C++ doesn't have those holes.
>
> Holes in Phobos can be handled by having third-party things in the Dub repository that are superior to what is in Phobos.

I don't think third-party libraries can have the reach of Phobos. Also, things as basic as strings should be in the standard library or the language; otherwise the whole idea of using D looks ridiculous. Having said that, third-party libraries can help.

> Documentation, guides, and tutorials are a problem, but until someone steps up and contributes this is just going to remain a problem. One that will undermine all attempts to get traction for D. So how to get organizations to put resource into doing this?

I think those things can easily be done by individuals as well, as long as pull requests are accepted. But of course, until someone steps up... :)
June 07, 2016
On Tuesday, June 07, 2016 06:47:39 H. S. Teoh via Digitalmars-d wrote:
> I can't seem to find an issue I filed some years ago about @safe needing to be whitelist-based rather than blacklist-based. Did it get closed while I wasn't looking?

Walter closed it a day or two ago on the grounds that it wasn't a specific issue but more of a discussion topic:

https://issues.dlang.org/show_bug.cgi?id=12941

In principle, I think that you're very right that @safe needs to be implemented as a whitelist. Security in general does not work as a blacklist, and I think that @safe has the same problem. The problem is code breakage. Even assuming that the change in implementation were straightforward (and I have no idea whether it is or not), it is pretty much guaranteed that we would break a lot of code marked @safe if we were to switch to a whitelist. Some of that code is not truly @safe and really should be fixed, but just throwing the switch like that is too sudden. We'd probably be forced to have both a whitelist and a blacklist and treat the whitelist results as warnings temporarily before switching fully to the whitelist implementation. That's likely feasible, but it seems like it would be a bit of a mess. So, I don't know whether we can reasonably switch to a whitelist or not. But I think it's clear that ideally we would.
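As a sketch of what "blacklist" means in practice: @safe today permits everything except operations known to be dangerous, rather than permitting only operations proven safe. The function name below is made up:

```d
// @safe as a blacklist: a bounds-checked array access is allowed because
// nothing on the forbidden list is involved.
@safe int readFirst(const(int)[] arr)
{
    return arr[0];   // bounds-checked at runtime, so @safe permits it
}

// Pointer arithmetic IS on the blacklist, so this lambda is rejected
// inside @safe, which is what the static assert below checks.
static assert(!__traits(compiles, (int* p) @safe { p += 1; }));
```

The whitelist argument above is that any operation the blacklist's authors failed to anticipate slips through by default, which is the opposite of how security guarantees should fail.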

- Jonathan M Davis

June 07, 2016
On Tuesday, 7 June 2016 at 20:41:08 UTC, Walter Bright wrote:
> Would you want to use a library where the maintainers refuse to use @nogc even if they aren't using the gc?

yes, i do. i'm actively using Adam's arsd libraries, and they don't have @nogc spam all over the place, even where functions don't use the gc. more than that: i tend to ignore @nogc in my own code too, almost never bothering to put it. it just isn't worth the effort.
June 07, 2016
On Tuesday, June 07, 2016 18:33:01 Dave via Digitalmars-d wrote:
> On Tuesday, 7 June 2016 at 18:24:33 UTC, Walter Bright wrote:
> > On 6/7/2016 11:19 AM, Jack Stouffer wrote:
> >> On Tuesday, 7 June 2016 at 18:15:28 UTC, Walter Bright wrote:
> >>> [...]
> >>
> >> But you can't grep for @system because 99% of the time it's implicit.
> >> This problem becomes harder to find when using templates for everything,
> >> which I do.
> >
> > Add:
> >    @safe:
> > at the top of your D module and you'll find the @system code.
> > The D compiler is the static analysis tool. It's true that
> > @safe should have been the default, but too much code would
> > break if that were changed. Adding one line to the top of a
> > module is very doable for those that are desirous of adding the
> > safety checks.
> >
> > You can also add:
> >    @nogc:
> > at the top, too. It isn't necessary to tediously annotate every
> > function.
>
> Seems fair. But perhaps Phobos should also follow this standard? Which might be why people get the mindset that they have to annotate everything...

IMHO, it's bad practice to mass apply attributes with labels or blocks. It's far too easy to accidentally mark a function with an attribute that you didn't mean to, and it makes it way harder to figure out which attributes actually apply to a function. And when you add templates into the mix, applying attributes en masse doesn't work anyway, because pretty much the only time that you want to mark a template function with an attribute is when the template arguments have nothing to do with whether the attribute is appropriate or not.

So, while mass applying something like @safe temporarily to check stuff makes some sense, I really don't think that it's a good idea to do it in any code that you'd ever commit.
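A small sketch of the template point: D infers attributes per instantiation, so whether @nogc actually applies depends on the template argument, and a module-level label would override that inference wholesale. The function names here are made up:

```d
// duplicate's GC usage depends entirely on T: ~ allocates for strings.
auto duplicate(T)(T x) { return x ~ x; }

// addTwice never allocates, whatever the numeric T.
auto addTwice(T)(T x) { return x + x; }

// duplicate!string allocates via ~, so it is NOT inferred @nogc,
// and an @nogc context cannot call it:
static assert(!__traits(compiles, () @nogc { auto s = duplicate("ab"); }));

// addTwice!int allocates nothing, so inference makes it @nogc automatically:
static assert( __traits(compiles, () @nogc { auto n = addTwice(21); }));
```

Had `@nogc:` been slapped on the top of this module, `duplicate` would simply fail to compile for strings, even for callers who are perfectly happy to use the GC, which is the accidental-marking hazard described above.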

- Jonathan M Davis

June 07, 2016
On Tuesday, 7 June 2016 at 20:48:13 UTC, Jonathan M Davis wrote:
> On Tuesday, June 07, 2016 18:33:01 Dave via Digitalmars-d wrote:
>> [...]
>
> IMHO, it's bad practice to mass apply attributes with labels or blocks. It's far too easy to accidentally mark a function with an attribute that you didn't mean to, and it makes it way harder to figure out which attributes actually apply to a function. And when you add templates into the mix, applying attributes en masse doesn't work anyway, because pretty much the only time that you want to mark a template function with an attribute is when the template arguments have nothing to do with whether the attribute is appropriate or not.
>
> [...]

So we should not follow the advice of Walter?
June 07, 2016
On Tuesday, 7 June 2016 at 19:52:47 UTC, Chris wrote:
> But we agree that templates are a good idea in general, regardless of the actual implementation.

Having access to parametric abstractions is a good idea. How best to use them is not so obvious in real projects, where things change (except for trivial stuff).

> What do you mean by `memory management`? GC/RC built into the compiler?

Everything related to managing ownership and access, making the most out of static analysis: putting things on the stack, or simply not allocating at all when the value is not used.

> What do you mean? Is it a good or a bad thing that the library is detached from the core language?

I mean that the standard library features that are closely related to the core language semantics are more stable than things like HTTP.

> Believe me, features will be requested. Would you have an example of how such a language, or better still did you have time to design and test one? A proof of concept.

C++ with just member functions and wrappers around builtins is pretty close. Yes, I have used minimalistic languages, Beta for instance. I believe gBeta is closer. I guess dynamic languages like Self and even Javascript are there too. Probably also some of the dependently typed languages, but I have never used those.

If you stripped down C++ by taking out everything that can be expressed using another feature, you would have the foundation.

> Having to write the same for loop with slight variations all over again is not my definition of efficient programming. One of D's strengths is that it offers nice abstractions for data representation.

Hm? You can have templated functions in C++.

> Not special but handy. Before Java 8 (?) you had to use inner/anonymous classes to mimic lambdas. Not very efficient. Boiler plate, repetition, the whole lot.

Well, that is a matter of implementation. C++ lambdas are exactly that, function objects, but there is no inherent performance penalty. A "lambda" is mostly syntactic sugar.

> I was not talking about that. Read it again. I said that the D community actively demands features or improvements and uses them.

Are you talking about the language or the standard library? I honestly don't think the latter matters much, except for memory management.

> It goes without saying that existing features have to be improved. It's a question of manpower.

No, it is a matter of being willing to improve the semantics. Many of the improvements needed to best C++ are simple, but slightly breaking, changes.

D could change floats so that interval arithmetic can be implemented, which is difficult to do in clang/gcc. That would be a major selling point. But the prevailing reasoning is that this is not needed, because C/C++ fail to comply with the IEEE standard as well.

If the motivation is merely to trail C/C++, then there is no way to surpass C/C++, and therefore no real motivation to switch.

> There is a learning curve that cannot be made flatter. There are concepts that have to be grasped and understood. Any language (cf. Nim) that allows you to do sophisticated and low-level things is harder to learn than JS or Python.

Not sure what sophisticated things you are referring to.

(JS and Python have complexity issues as well; you just don't need to learn them to make good use of the languages.)

> Go forces you to repeat yourself. The less features you have, the more you have to write the same type of code all over again. Look at all the for loops in a C program.

Loops and writing the same code over and over are not the major hurdle; getting it right is. So having many loops is not bad in itself, but having a less error-prone way to express that you want to iterate from 1 to 10 does matter.

But you can do that in a minimalistic language like Beta, which has only two core entities:

- a pattern (type/function/subclass)
- an instance of a pattern

Just define a pattern that iterates and prefix your body with it, that is the same as subclassing it.

Pseudo-code (not actual Beta syntax, but C-ish syntax for simplicity):

iterate:{
   N:@int;
   i: @int;
   enter N
   do
      i = 0
      while (i < N)
      do
          inner; // insert prefixed do-part here
          i++
}

10 -> iterate{ do i -> print; }


Or you could just do


repeat10:{
   N:<{ n:@int; do 10->n; inner; exit n};
   i:@int;
   do N -> iterate{ do inner; }
}

repeat99:repeat10{
  N:<{ do 99->n; inner; }
}

repeat99{ do i -> print; "bottles of wine" ->print }

etc...
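For comparison, here is a rough D analogue of the `iterate` pattern above, with the prefixed do-part (Beta's `inner`) passed in as a delegate; the names and the mapping are approximate, not a faithful Beta translation:

```d
// iterate plays the role of Beta's `iterate` pattern: the caller supplies
// the body that Beta would splice in at `inner`.
void iterate(int n, scope void delegate(int) inner)
{
    foreach (i; 0 .. n)
        inner(i);
}

// usage: the loop body lives at the call site, as with Beta's prefixing
int sumUpTo(int n)
{
    int sum = 0;
    iterate(n, (i) { sum += i; });
    return sum;
}
```

The difference is that Beta achieves this with subclassing of patterns rather than a higher-order function, but the division of labor (reusable iteration skeleton, injected body) is the same.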


June 07, 2016
On Tuesday, 7 June 2016 at 20:55:12 UTC, Ola Fosheim Grøstad wrote:
> repeat10:{
>    N:<{ n:@int; do 10->n; inner; exit n};
>    i:@int;
>    do N -> iterate{ do inner; }
> }
>
> repeat99:repeat10{
>   N:<{ do 99->n; inner; }
> }
>
> repeat99{ do i -> print; "bottles of wine" ->print }


Adding some comments, as the example was not clear on its own:

// repeat10 is a new pattern (class) inheriting from object by default
repeat10:{
    // N is a virtual pattern (function)
    N:<{ n:@int; do 10->n; inner; exit n};
   do N -> iterate{ do inner; }  // this is a subclass of the previously defined iterate pattern
}

// repeat99 is a new pattern inheriting from repeat10 above
repeat99:repeat10{
    // N is a specialization of N in repeat10
    // N expands to {n:@int; do 10->n; 99->n; inner; exit n}
   N:<{ do 99->n; inner; }
}

// this is a subclass of repeat99
repeat99{ do i -> print; "bottles of wine" ->print }


Give or take; I haven't used Beta in 20 years. Abstraction is not the problem: a very simple language can provide what most programmers need, though perhaps not with a familiar syntax.

June 07, 2016
On Tuesday, June 07, 2016 20:52:15 Dave via Digitalmars-d wrote:
> On Tuesday, 7 June 2016 at 20:48:13 UTC, Jonathan M Davis wrote:
> > On Tuesday, June 07, 2016 18:33:01 Dave via Digitalmars-d wrote:
> >> [...]
> >
> > IMHO, it's bad practice to mass apply attributes with labels or blocks. It's far too easy to accidentally mark a function with an attribute that you didn't mean to, and it makes it way harder to figure out which attributes actually apply to a function. And when you add templates into the mix, applying attributes en masse doesn't work anyway, because pretty much the only time that you want to mark a template function with an attribute is when the template arguments have nothing to do with whether the attribute is appropriate or not.
> >
> > [...]
>
> So we should not follow the advice of Walter?

If he's arguing that you should slap an attribute on the top of your module to apply to everything, then no, I don't think that we should follow his advice. He's a very smart guy, but he's not always right. And in my experience, mass applying attributes is a mistake.

- Jonathan M Davis