March 28, 2020
On Sat, Mar 28, 2020 at 08:22:14PM +0000, Adam D. Ruppe via Digitalmars-d wrote:
> On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
> > I would love that. This is one of the things that Rust got 100% right.
> > 
> > I know you can make whatever alias you want, but the point is that it's not universally used.  Standardization is important, that's why I'd use the fugly C names in C/C++ even though I could alias them away.
> 
> D did standardize, there's no question as to size in D as it is now.

+1.

All except 'real', that is, and that has turned into a mini-disaster.
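
A minimal sketch of what that buys you (these static asserts hold on every D target; only the real line varies, which is exactly the problem):

    static assert(byte.sizeof   == 1);
    static assert(short.sizeof  == 2);
    static assert(int.sizeof    == 4);
    static assert(long.sizeof   == 8);
    static assert(float.sizeof  == 4);
    static assert(double.sizeof == 8);

    // real is the lone exception: 80-bit x87 storage (padded to 12 or 16
    // bytes) on x86, plain 64-bit on most other targets.
    pragma(msg, "real.sizeof = ", real.sizeof);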


T

-- 
Your inconsistency is the only consistent thing about you! -- KD
March 28, 2020
On Saturday, 28 March 2020 at 20:22:14 UTC, Adam D. Ruppe wrote:
> On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
>> I would love that. This is one of the things that Rust got 100% right.
>>
> I know you can make whatever alias you want, but the point is that it's not universally used.  Standardization is important, that's why I'd use the fugly C names in C/C++ even though I could alias them away.
>
> D did standardize, there's no question as to size in D as it is now.

I'm not disputing that. I'm saying that D standardized the wrong names. Rust hit the bullseye, while C/C++ is somewhere in the middle. And the alias argument is IMO weak because of it being non-standard.
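
The aliases themselves are trivial to write; a sketch like the following compiles fine (the Rust-style names here are purely my own illustration, nothing in the language or Phobos blesses them), but unless something like it ships as the standard spelling, nobody else will read or write code in it:

    alias i8  = byte;    alias u8  = ubyte;
    alias i16 = short;   alias u16 = ushort;
    alias i32 = int;     alias u32 = uint;
    alias i64 = long;    alias u64 = ulong;
    alias f32 = float;   alias f64 = double;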
March 29, 2020
On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin wrote:
> On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:
>
>>
>> Don't design based on imaginings of the future; you will almost always get it wrong.
>
> This is almost already reality, not the future.

I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.


> Just survey your friends/colleagues: what is a byte? Then compare that with the Wikipedia/dictionary/RFC/etc. definition. You will be very surprised.
>
> Already, it is difficult to explain to a beginner why a double is 64 bits. And if it is double of the integer, why is the integer 32? I think there is no need to spend time explaining the whole history of IT.

I'm struggling to understand why anyone would find it either hard to understand or difficult to explain...

float is a 32 bit floating point number
double is a 64 bit floating point number

Let's be honest, if that is causing you problems then you probably need to reconsider your career path.
March 29, 2020
On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
>
> I'm struggling to understand why anyone would find it either hard to understand or difficult to explain...
>
> float is a 32 bit floating point number
> double is a 64 bit floating point number
>
> Let's be honest, if that is causing you problems then you probably need to reconsider your career path.

It's about clarity, ease of use, and not naming things inconsistently.

It's like the ridiculous USB speed names: Low Speed, Full Speed, High Speed, SuperSpeed, SuperSpeed+. Now tell me what throughput each of those names corresponds to, without looking it up on the internet.


March 29, 2020
On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
> On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin wrote:
>> On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:
>>
>>>
>>> Don't design based on imaginings of the future; you will almost always get it wrong.
>>
>> This is almost already reality, not the future.
>
> I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.
>
>
>> Just survey your friends/colleagues: what is a byte? Then compare that with the Wikipedia/dictionary/RFC/etc. definition. You will be very surprised.
>>
>> Already, it is difficult to explain to a beginner why a double is 64 bits. And if it is double of the integer, why is the integer 32? I think there is no need to spend time explaining the whole history of IT.
>
> I'm struggling to understand why anyone would find it either hard to understand or difficult to explain...
>
> float is a 32 bit floating point number
> double is a 64 bit floating point number
>
> Let's be honest, if that is causing you problems then you probably need to reconsider your career path.

It's not hard to understand. It's pointless memorization though, as those names and their binding to sizes are based on implementation details of processors from the *previous millennium*. Programming languages should aim to lower the cognitive load of their programmers, not the opposite.

To paraphrase your argument:
A mile is 1760 yards
A yard is 3 feet
A foot is 12 inches
What's so hard to understand? If that is causing you problems then you probably need to reconsider your career path.
March 29, 2020
On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin wrote:
[...]
> Just survey your friends/colleagues: what is a byte? Then compare that with the Wikipedia/dictionary/RFC/etc. definition. You will be very surprised.
[...]

The Wikipedia article clearly states that definitions of "byte" other than 8 bits are *historical*, and that practically all modern hardware has standardized on the 8-bit byte.  I don't understand why this is even in dispute in the first place.  Frankly, it smells like just a red herring.
March 29, 2020
On Sunday, 29 March 2020 at 00:48:15 UTC, krzaq wrote:
> On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
>> On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin wrote:
>>> On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:
>> float is a 32 bit floating point number
>> double is a 64 bit floating point number
>>
>> Let's be honest, if that is causing you problems then you probably need to reconsider your career path.
>
> It's not hard to understand. It's pointless memorization though, as those names and their binding to sizes are based on implementation details of processors from the *previous millennium*.

Firstly, either way you have to remember something, u16 or short. So there's memorization whichever way you slice it.

Secondly those processors from the last millennium are still the dominant processors of this millennium.


> Programming languages should aim to lower the cognitive load of their programmers, not the opposite.

I agree, but this is so irrelevant it's laughable.


> To paraphrase your argument:
> A mile is 1760 yards
> A yard is 3 feet
> A foot is 12 inches
> What's so hard to understand? If that is causing you problems then you probably need to reconsider your career path.

If your job requires you to work in inches, feet and yards every single day, then yes, you should know that off the top of your head and you shouldn't even have to try.

And if you find it difficult then yes, you should reconsider your career path. If you struggle with basic arithmetic then you shouldn't really be looking at a career in engineering.


March 29, 2020
On Sunday, 29 March 2020 at 00:58:12 UTC, H. S. Teoh wrote:
> On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin wrote:
> [...]
>> Just survey your friends/colleagues: what is a byte? Then compare that with the Wikipedia/dictionary/RFC/etc. definition. You will be very surprised.
> [...]
>
> The Wikipedia article clearly states that definitions of "byte" other than 8 bits are *historical*, and that practically all modern hardware has standardized on the 8-bit byte.  I don't understand why this is even in dispute in the first place.  Frankly, it smells like just a red herring.

smells like a troll to me, best not to feed it
March 29, 2020
On Sunday, 29 March 2020 at 01:21:25 UTC, NaN wrote:
> Firstly, either way you have to remember something, u16 or short. So there's memorization whichever way you slice it.

But you don't have to remember anything other than what you want to use. When you want a 16-bit unsigned integer you don't have to mentally look up the type you want, because you've already spelled it. And if you see a function accepting a long you don't have to think "Is this C? If so, is this Windows or not (i.e. is this LP64)? Or maybe it's D? But what was the size of a long in D? Oh, 64."
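
To put that confusion in concrete terms, a minimal sketch (using c_long from core.stdc.config, which tracks whatever the host C toolchain calls long, while D's own long is pinned to 64 bits everywhere):

    import core.stdc.config : c_long;

    static assert(long.sizeof == 8);                 // D: 64 bits on every target
    pragma(msg, "c_long.sizeof = ", c_long.sizeof);  // 4 on Win64 and 32-bit, 8 on LP64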

If you argued that when you want just an integer then you shouldn't need to provide its size, I'd grant you a point. But if anything, `int` should be an alias to whatever is fast on the current arch, not the other way around.

> Secondly those processors from the last millennium are still the dominant processors of this millennium.

Are they really? I have more ARMs around me than I do x86s. Anyway, they're compatible, but not the same. "Double precision" doesn't really mean much outside of hardcore number crunching, and short is (almost?) never used as an optimization over int, but as a limitation of its domain. And, at least for C and C++, any style guide will tell you to use a type with a meaningful name instead.

>> Programming languages should aim to lower the cognitive load of their programmers, not the opposite.
>
> I agree, but this is so irrelevant it's laughable.
>

It is very relevant. Expecting the programmer to remember that some words mean completely different things than they do anywhere else is not good, and the more of those differences you have, the more difficult the language is to use. And it's not just the type names: learning that you have to use enum instead of immutable or const for true constants was just as mind-boggling to me as learning that inline means anything but inline in C++.
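
For anyone who hasn't hit that one yet, a minimal sketch of the enum/immutable distinction (the names are made up for illustration):

    // enum declares a manifest constant: no storage, just a compile-time value
    enum size_t bufSize = 4096;

    // immutable declares real storage that never changes after initialization
    immutable size_t maxUsers = 1000;

    void example()
    {
        ubyte[bufSize] buffer;   // the value of bufSize is pasted in at each use
        auto p = &maxUsers;      // ok: maxUsers lives in memory, so it has an address
        // auto q = &bufSize;    // error: a manifest constant is not an lvalue
    }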

>
>> To paraphrase your agument:
>> A mile is 1760 yards
>> A yard is 3 feet
>> A foot is 12 inches
>> What's so hard to understand? If that is causing you problems then you probably need to reconsider your career path.
>
> If your job requires you to work in inches, feet and yards every single day, then yes, you should know that off the top of your head and you shouldn't even have to try.
>
> And if you find it difficult then yes, you should reconsider your career path. If you struggle with basic arithmetic then you shouldn't really be looking at a career in engineering.

That's circular reasoning. The whole argument is that your day job shouldn't require rote memorization of silly incantations. As for "basic arithmetic" - there is a reason why the whole world, bar one country, moved to a sane unit system.
March 29, 2020
On Friday, 27 March 2020 at 16:59:47 UTC, Paolo Invernizzi wrote:
> On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
>> There have been a lot of this pattern happening:
>>
>
>> 7. virtual by default
>
> You mean final, by default, right?

"final" attribute does no mean non-virtual. A final method cannot be overridden, which offers oportunities to devirtualize the calls, and that's it.