June 08, 2016
On Wednesday, 8 June 2016 at 13:43:27 UTC, Timon Gehr wrote:
> On 08.06.2016 01:59, Walter Bright wrote:
>> ...
>>
>> I suspect D has long since passed the point where it is too complicated for
>> the rather limited ability of mathematicians to prove things about it.
>
> The main reason why it is currently impractical to prove things about D is that D is not really a mathematical object. I.e. there is no precise spec.

Besides that, even if a @safe checker is slightly flawed, it only needs to be vetted better than the backend, which is most likely unverified anyway.

This is different from some of the static analysis done on C, which converts the LLVM bitcode or even x86 assembly into a format that can be queried with a solver. That way the proof holds even if the backend is buggy.

June 08, 2016
On 6/8/16 3:43 PM, Timon Gehr wrote:
> On 08.06.2016 01:59, Walter Bright wrote:
>> ...
>>
>> I suspect D has long since passed the point where it is too complicated for
>> the rather limited ability of mathematicians to prove things about it.
>
> The main reason why it is currently impractical to prove things about D
> is that D is not really a mathematical object. I.e. there is no precise
> spec.

Walter and I have spoken about the matter and reached the conclusion that work on a formal spec (be it in legalese, typing trees, small-step semantics, etc.) for a reduced form of D would be very beneficial.

We are very much supportive of such work.


Andrei

June 09, 2016
On Monday, 6 June 2016 at 02:20:52 UTC, Walter Bright wrote:
> Andrei posted this on another thread. I felt it deserved its own thread. It's very important.
> -----------------------------------------------------------------------------
> I go to conferences. Train and consult at large companies. Dozens every year, cumulatively thousands of people. I talk about D and ask people what it would take for them to use the language. Invariably I hear a surprisingly small number of reasons:
>
> * The garbage collector eliminates probably 60% of potential users right off.
>
> * Tooling is immature and of poorer quality compared to the competition.
>
> * Safety has holes and bugs.
>
> * Hiring people who know D is a problem.
>
> * Documentation and tutorials are weak.
>
> * There's no web services framework (by this time many folks know of D, but of those a shockingly small fraction has even heard of vibe.d). I have strongly argued with Sönke to bundle vibe.d with dmd over one year ago, and also in this forum. There wasn't enough interest.
>
> * (On Windows) if it doesn't have a compelling Visual Studio plugin, it doesn't exist.
>
> * Let's wait for the "herd effect" (corporate support) to start.
>
> * Not enough advantages over the competition to make up for the weaknesses above.

Hello,

I have to stress that I am a beginner in programming, mainly interested in number crunching in academia (at least so far). I started to write a small project in D, but had to switch to C for a few reasons:

1) Importance for my CV. I know Python; if I also add C, it looks good and could be useful, since C is popular and could help me with a future job, both in academia and industry, given the many C/C++ projects out there.

2) The libraries - in the scientific world practically everything has already been coded in C, so there are many C libraries. Linking them for use within D code requires some work (see the sketch after this list), and since I am not that confident in my IT skills, I decided that C code calling C libraries is the safer choice.

3) For C there are a lot of tutorials, everything has been explained on Stack Overflow many times, and there is a huge community of people. E.g. if you want to use OpenMP or Open MPI - everything is there, explained many times, etc.

4) The C language is well tested and rock solid stable. However, if you encounter a potential bug in D, I am not sure how long it would take to fix.

5) Garbage collector - it will slow my number crunching down.
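
For reference, here is roughly what point 2 involves: a minimal sketch of calling a C library directly from D, assuming a CBLAS implementation is installed and linked (e.g. dmd app.d -L-lcblas); the prototype below mirrors the standard CBLAS cblas_ddot declaration.

// Declare the C prototype once; D can then call it like any other function.
extern (C) double cblas_ddot(int n, const(double)* x, int incx,
                             const(double)* y, int incy);

void main()
{
    import std.stdio : writeln;

    double[] a = [1.0, 2.0, 3.0];
    double[] b = [4.0, 5.0, 6.0];

    // Pass raw pointers and lengths exactly as the C API expects.
    double dot = cblas_ddot(cast(int) a.length, a.ptr, 1, b.ptr, 1);
    writeln("dot = ", dot); // 32
}

No wrapper layer is strictly required; the main effort is writing (or finding) the prototypes and getting the linker flags right.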

Please do not take this as criticism. I like the D language; I tried it before C and I find it much, much easier and more user friendly. I feel it is more similar to Python. On the other hand, C++ is too complex for me, and D would be the perfect option for the scientific community if the above points were fixed somehow.

Best luck with your work!
June 09, 2016
On Thursday, 9 June 2016 at 16:44:23 UTC, Yura wrote:
> 5) Garbage collector - it will slow my number crunching down.
>

You are a scientist, so try to measure. GC generally improves throughput at the cost of latency.
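
A quick sketch of what "measure" can mean here (illustrative only; a real benchmark needs your actual workload): time an allocation-heavy loop with collections left on and then deferred.

import core.memory : GC;
import core.time : MonoTime;
import std.stdio : writeln;

void work()
{
    // Allocation-heavy toy loop standing in for a real numeric kernel.
    foreach (i; 0 .. 100_000)
    {
        auto buf = new double[](256);
        buf[] = cast(double) i;
    }
}

void main()
{
    auto t0 = MonoTime.currTime;
    work();
    writeln("GC on:       ", MonoTime.currTime - t0);

    GC.disable();   // defer collections for the hot section
    t0 = MonoTime.currTime;
    work();
    GC.enable();
    writeln("GC deferred: ", MonoTime.currTime - t0);
}

If the numbers barely differ, the GC is not your bottleneck; if they do, @nogc code or preallocated buffers are the usual next step.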

June 09, 2016
On Thursday, 9 June 2016 at 18:02:05 UTC, deadalnix wrote:
> You are a scientist, so try to measure. GC generally improves throughput at the cost of latency.

As a side note, I always found it funny that programmers call themselves "computer scientists" while many write a lot of their programs without tests.
June 09, 2016
On Thursday, 9 June 2016 at 20:38:30 UTC, Jack Stouffer wrote:
> On Thursday, 9 June 2016 at 18:02:05 UTC, deadalnix wrote:
>> You are a scientist, so try to measure. GC generally improves throughput at the cost of latency.
>
> As a side note, I always found it funny that programmers call themselves "computer scientists" while many write a lot of their programs without tests.

A ton of computer science research, even the peer-reviewed kind, does not publish code. It's garbage...

And then you look at https://twitter.com/RealPeerReview and conclude it maybe isn't that bad.

June 09, 2016
On 6/9/2016 9:44 AM, Yura wrote:
> 4) The C language is well tested and rock solid stable. However, if you
> encounter a potential bug in D, I am not sure how long would it take to fix.

Thanks for taking the time to post here.

Yes, there are bugs in D. Having dealt with buggy compilers from every vendor for decades, I can speak from experience that almost every bug has workarounds that will keep the project moving.

Also, bugs in D tend to be in the advanced features. But there's a C-ish subset with nearly a 1:1 correspondence to C, and if you are content with C style it'll serve you very well.
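
As a rough illustration (just a sketch, not an endorsement of malloc over D arrays), the following reads almost line-for-line like the equivalent C program:

import core.stdc.stdio : printf;
import core.stdc.stdlib : malloc, free;

struct Vec3 { double x, y, z; }

void main()
{
    // Manual allocation and C I/O: no GC, no exceptions, no templates.
    auto v = cast(Vec3*) malloc(Vec3.sizeof);
    if (v is null) return;
    *v = Vec3(1, 2, 3);
    printf("len^2 = %f\n", v.x * v.x + v.y * v.y + v.z * v.z);
    free(v);
}

(Newer compilers also have a -betterC switch aimed at exactly this style, though it is not required here.)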
June 09, 2016
On 6/9/2016 1:38 PM, Jack Stouffer wrote:
> On Thursday, 9 June 2016 at 18:02:05 UTC, deadalnix wrote:
>> You are a scientist, so try to measure. GC generally improves throughput at
>> the cost of latency.
>
> As a side note, I always found it funny that programmers call themselves
> "computer scientists" while many write a lot of their programs without tests.

A scientist is someone who does research to make discoveries, while an engineer puts scientific discoveries to work.

Programming is a mix of engineering and craft. There are people who do research into programming theory, and those are computer scientists. I'm not one of them. Andrei is.
June 09, 2016
On Thursday, 9 June 2016 at 21:46:28 UTC, Walter Bright wrote:
> Programming is a mix of engineering and craft. There are people who do research into programming theory, and those are computer scientists. I'm not one of them. Andrei is.

Unfortunately, the term "software engineer" is a LOT less popular than "computer scientist".
June 10, 2016
On Thursday, 9 June 2016 at 21:54:05 UTC, Jack Stouffer wrote:
> On Thursday, 9 June 2016 at 21:46:28 UTC, Walter Bright wrote:
>> Programming is a mix of engineering and craft. There are people who do research into programming theory, and those are computer scientists. I'm not one of them. Andrei is.
>
> Unfortunately, the term "software engineer" is a LOT less popular than "computer scientist".

How so? I only hear people use the term "programmer" or "informatics".

Computer Science -> pure math / classification / concepts.

Software Engineering ->  process of developing software.

At my uni we had the term "informatics", which covers comp.sci., software engineering, requirements analysis, human factors, etc. But it IS possible to be a computer scientist and only know math and no actual programming. Not common, but possible.

But yes, sometimes people who have not studied compsci, but have only read stuff on Wikipedia, engage in debates as if they knew the topic, and then the distinction matters. There are things you never have to explain to a person who knows compsci, but that you almost always have trouble explaining to people who don't know it (but think they do, because they are programmers and have seen big-O notation in documentation).

It is like a car engineer listening to a driver claiming that you should pour oil on your brakes if they make noise. Or a mathematician having to explain what infinity entails. At some point it is easier to just make a distinction between those who know how brakes are actually constructed and those who only know how to drive a car.

The core difference as far as debates go is that compsci is mostly objective (proofs) and software engineering is highly subjective (contextual practice).

So, if the topic is compsci then you usually can prove that the other person is wrong in a step-by-step, irrefutable fashion. Which makes a big difference, actually. People who know compsci are usually fine with that, because they like to improve their knowledge and are used to getting things wrong (that's how you learn). People who don't know compsci usually don't like it, because they are being told that they don't know something they like to think they know (but actually don't and probably never will).

That's just the truth... ;-)