March 31, 2021
On Wednesday, 31 March 2021 at 04:49:52 UTC, Andrei Alexandrescu wrote:

> That makes their umbrella claim "Zig is faster than C" quite specious.

The reason, or one of the reasons, why Zig is/can be faster than C is that it uses different default optimization levels. For example, Zig will by default target your native CPU instead of some generic model. This allows it to enable vectorization, SSE/AVX, and so on.

--
/Jacob Carlborg


March 31, 2021
On Tuesday, 30 March 2021 at 06:43:04 UTC, Walter Bright wrote:

>
> Compile-time isn't a run-time performance issue.

Performance is irrelevant to the fact that D frivolously violates basic assumptions about float/double at compile-time.
March 31, 2021
On Wednesday, 31 March 2021 at 09:47:46 UTC, Jacob Carlborg wrote:
> On Wednesday, 31 March 2021 at 04:49:52 UTC, Andrei Alexandrescu wrote:
>
>> That makes their umbrella claim "Zig is faster than C" quite specious.
>
> The reason, or one of the reasons, why Zig is/can be faster than C is that it uses different default optimization levels. For example, Zig will by default target your native CPU instead of some generic model. This allows it to enable vectorization, SSE/AVX, and so on.
>
> --
> /Jacob Carlborg

Specific example? GCC and LLVM are both almost rabid when you turn the vectorizer on.
March 31, 2021
On Wednesday, 31 March 2021 at 11:18:05 UTC, Max Samukha wrote:
> On Tuesday, 30 March 2021 at 06:43:04 UTC, Walter Bright wrote:
>
>>
>> Compile-time isn't a run-time performance issue.
>
> Performance is irrelevant to the fact that D frivolously violates basic assumptions about float/double at compile-time.

Like?
March 31, 2021
On 3/31/21 3:52 AM, Walter Bright wrote:
> On 3/31/2021 12:31 AM, Vladimir Panteleev wrote:
>> - Silicon will keep getting faster and cheaper with time
>>
>> - A 7% or a 14% or even a +100% slowdown is relatively insignificant considering the overall march of progress - Moore's law, but also other factors such as the average size and complexity of programs, which will also keep increasing as people expect software to do more things, which will drown out such "one-time" slowdowns as integer overflow checks
> 
> If you're running a data center, 1% translates to millions of dollars.

Factually true. Millions of dollars a year, that is.

It's all about the clientele. There will always be companies that must get every bit of performance. Weka.IO must be the fastest; if they were merely within 15% of the fastest, they'd be out of business.
March 31, 2021
On 3/31/21 7:46 AM, Max Haughton wrote:
> On Wednesday, 31 March 2021 at 09:47:46 UTC, Jacob Carlborg wrote:
>> On Wednesday, 31 March 2021 at 04:49:52 UTC, Andrei Alexandrescu wrote:
>>
>>> That makes their umbrella claim "Zig is faster than C" quite specious.
>>
>> The reason, or one of the reasons, why Zig is/can be faster than C is that it uses different default optimization levels. For example, Zig will by default target your native CPU instead of some generic model. This allows it to enable vectorization, SSE/AVX, and so on.
>>
>> -- 
>> /Jacob Carlborg
> 
> Specific example? GCC and LLVM are both almost rabid when you turn the vectorizer on.

Even if that's the case, "we choose by default different flags that make the code more specialized, and therefore faster but less portable" can't be a serious basis for a language performance claim.
March 31, 2021
On 3/31/21 3:58 AM, Vladimir Panteleev wrote:
> On Wednesday, 31 March 2021 at 07:52:31 UTC, Walter Bright wrote:
>> On 3/31/2021 12:31 AM, Vladimir Panteleev wrote:
>>> - Silicon will keep getting faster and cheaper with time
>>>
>>> - A 7% or a 14% or even a +100% slowdown is relatively insignificant considering the overall march of progress - Moore's law, but also other factors such as the average size and complexity of programs, which will also keep increasing as people expect software to do more things, which will drown out such "one-time" slowdowns as integer overflow checks
>>
>> If you're running a data center, 1% translates to millions of dollars.
> 
> You would think someone would have told that to all the companies running their services written in Ruby, JavaScript, etc.

Funny how things work out, isn't it :o).

> Unfortunately, that hasn't been the case.

It is. I know because I collaborated with the provisioning team at Facebook.
March 31, 2021
On Wednesday, 31 March 2021 at 12:38:51 UTC, Andrei Alexandrescu wrote:
>> You would think someone would have told that to all the companies running their services written in Ruby, JavaScript, etc.
>
> Funny how things work out, isn't it :o).
>
>> Unfortunately, that hasn't been the case.
>
> It is. I know because I collaborated with the provisioning team at Facebook.

I don't understand what you mean by this.

Do you and Facebook have a plan to forbid the entire world from running Ruby, JavaScript etc. en masse on datacenters?

March 31, 2021
On 3/31/21 8:40 AM, Vladimir Panteleev wrote:
> On Wednesday, 31 March 2021 at 12:38:51 UTC, Andrei Alexandrescu wrote:
>>> You would think someone would have told that to all the companies running their services written in Ruby, JavaScript, etc.
>>
>> Funny how things work out, isn't it :o).
>>
>>> Unfortunately, that hasn't been the case.
>>
>> It is. I know because I collaborated with the provisioning team at Facebook.
> 
> I don't understand what you mean by this.
> 
> Do you and Facebook have a plan to forbid the entire world from running Ruby, JavaScript etc. en masse on datacenters?

Choosing a language has to take important human factors into account, e.g. Facebook could not realistically switch from PHP/Hack to C++ in the front end (though the notion does come up time and again). It is factually true that for a large server farm, performance percentages translate into millions.
March 31, 2021
On Wednesday, 31 March 2021 at 12:36:42 UTC, Andrei Alexandrescu wrote:
> On 3/31/21 7:46 AM, Max Haughton wrote:
>> On Wednesday, 31 March 2021 at 09:47:46 UTC, Jacob Carlborg wrote:
>>> On Wednesday, 31 March 2021 at 04:49:52 UTC, Andrei Alexandrescu wrote:
>>>
>>>> That makes their umbrella claim "Zig is faster than C" quite specious.
>>>
>>> The reason, or one of the reasons, why Zig is/can be faster than C is that it uses different default optimization levels. For example, Zig will by default target your native CPU instead of some generic model. This allows it to enable vectorization, SSE/AVX, and so on.
>>>
>>> --
>>> /Jacob Carlborg
>> 
>> Specific example? GCC and LLVM are both almost rabid when you turn the vectorizer on.
>
> Even if that's the case, "we choose by default different flags that make the code more specialized, and therefore faster but less portable" can't be a serious basis for a language performance claim.

Intel C++ can be a little naughty with the fast-math options, last time I checked, for example - gotta get those SPEC numbers!

I wonder if there is a way to leverage D's type system (or even extend it) to allow a library solution that can hold information the optimizer can use to elide these checks in most cases. It's probably already possible by passing some kind of abstract-interpretation-like data structure as a template parameter, but that is not very ergonomic.

Standardizing some kind of `assume` semantics strikes me as a good long-term hedge for D, even if doing static analysis and formal verification of D code is an unenviable task.