September 18, 2022
On 18.09.22 03:14, Paul Backus wrote:
> On Saturday, 17 September 2022 at 20:08:26 UTC, Dennis wrote:
>> On Saturday, 17 September 2022 at 07:37:24 UTC, Ogi wrote:
>>> Non-ASCII characters in identifiers. There is no good reason to use this “feature”, unless your goal is bug-ridden and unmaintainable code.
>>
>> I don't use non-ASCII identifiers, but you'll have to explain how e.g. using `π` instead of `pi` results in bugs or maintenance burden.
> 
> I would say not being able to easily type an identifier with the keys on one's keyboard counts as a maintenance burden.

There really is no good excuse for not being able to type such common characters easily, provided one is able to type easily at all. Programmers, of all people, should be able to figure it out.

In any case, not all identifiers are English. As Ali likes to point out, some Turkish characters have no obvious ASCII substitute and substituting visually similar characters may result in curse words in identifiers. In such cases, all of the maintainers know how to type in Turkish.

People are sometimes prone to dismissing the validity of other cultures. I am glad D's lexer is not.
September 18, 2022
On 18.09.22 08:47, Nicholas Wilson wrote:
> On Thursday, 15 September 2022 at 08:40:22 UTC, Dukc wrote:
>> On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright wrote (on the thread about binary literals):
>>> On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
>>>> But it doesn't disprove the fact that *sometimes*, hex digits aren't as clear.
>>>
>>> Does sometimes justify a language feature, when there are other ways?
>>>
>>> People often complain that D has too many features. What features would you say are not worth it?
>>
>> This is a good question, but would quickly derail the original thread from its topic, so I decided to start a new one.
>>
>> What features could be removed from D if it were up to you? Please consider the breakage that would result from the removal; don't just consider what shouldn't have been added in the first place.
> 
> real, specifically x86 80-bit,
> * it's slow, and you can't vectorise it
> * the extra precision isn't much use
> * use is often unintentional, and when it is intentional, the intention is almost always misguided. If you have precision problems, a couple of bits of extra precision won't make up for a horrible convergence rate, and if it does, you probably need an arbitrary-precision type anyway.

As someone who has had to hack 80 bit floating point support into C++ on Windows with inline assembly, I am glad it is supported. Unintentional use should be curbed though. Also, the added precision is more than a couple of bits and sometimes you just need that resolution. 80 bit floats are faster than arbitrary precision floats, so it provides a nice performance boost in that region, before you have to fall back on software solutions. (There is a trick where you use two or more doubles to represent a number with more mantissa bits though and it is possible that with AVX, performance may be competitive with 80 bit floats, but vectorising that code manually is more work.)
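A minimal sketch of the addition half of that trick, assuming strict IEEE double evaluation (an illustration, not the code from that project; extended intermediate precision such as x87 would silently break the error terms):

```d
// Represent a value as an unevaluated sum hi + lo of two doubles.
struct DoubleDouble
{
    double hi;
    double lo;
}

// Knuth's TwoSum: returns s and err such that s + err == a + b exactly.
DoubleDouble twoSum(double a, double b)
{
    double s   = a + b;
    double bv  = s - a;
    double err = (a - (s - bv)) + (b - bv);
    return DoubleDouble(s, err);
}

// Adding a plain double to a double-double value (simplified; the full
// Dekker/Bailey algorithms renormalize more carefully).
DoubleDouble add(DoubleDouble x, double y)
{
    auto s = twoSum(x.hi, y);
    return twoSum(s.hi, s.lo + x.lo);
}
```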
September 18, 2022
On 18/09/2022 7:32 PM, Timon Gehr wrote:
> People are sometimes prone to dismissing the validity of other cultures. I am glad D's lexer is not.

Except it does.

The definitions it uses are an unknown number of years old (potentially 20 years old) and have long since been superseded.

September 18, 2022
On Thursday, 15 September 2022 at 08:40:22 UTC, Dukc wrote:
> What features could be removed from D if it were up to you? Please consider the breakage that would result from the removal; don't just consider what shouldn't have been added in the first place.

std.json

September 18, 2022
On 9/18/2022 12:50 AM, Timon Gehr wrote:
> As someone who has had to hack 80 bit floating point support into C++ on Windows with inline assembly, I am glad it is supported. Unintentional use should be curbed though. Also, the added precision is more than a couple of bits and sometimes you just need that resolution. 80 bit floats are faster than arbitrary precision floats, so it provides a nice performance boost in that region, before you have to fall back on software solutions.

Just like with cars where there is no substitute for horsepower, there's no substitute for extra precision.

Yes, there are known techniques for dealing with roundoff error, but these are complex and tricky and are for experts in this sort of thing. Adding more precision often "just works" for the rest.
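A contrived sketch of that effect (illustrative only), summing a value that has no exact binary representation:

```d
import std.stdio;

void main()
{
    // 0.1 cannot be represented exactly in binary floating point, and the
    // representation error compounds with every addition. Widening the
    // accumulator shrinks the drift without changing the algorithm at all.
    float  f = 0;
    double d = 0;
    real   r = 0;
    foreach (i; 0 .. 10_000_000)
    {
        f += 0.1f;
        d += 0.1;
        r += 0.1L;
    }
    writefln("float: %s, double: %s, real: %s (exact: 1000000)", f, d, r);
}
```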


> (There is a trick where you use two or more doubles to represent a number with more mantissa bits though and it is possible that with AVX, performance may be competitive with 80 bit floats, but vectorising that code manually is more work.)

I didn't know about this. Is there an article on how it works?

September 18, 2022
Enzo Ferrari once said: "The secret to better performance is more horsepower."

A movie line: "Turbos are for wussies. This car has cubic inches."

September 18, 2022
On Sunday, 18 September 2022 at 20:33:03 UTC, Walter Bright wrote:
> I didn't know about this. Is there an article on how it works?

Search for PowerPC DoubleDouble
September 18, 2022
On 9/18/2022 3:34 PM, Nicholas Wilson wrote:
> On Sunday, 18 September 2022 at 20:33:03 UTC, Walter Bright wrote:
>> I didn't know about this. Is there an article on how it works?
> 
> Search for PowerPC DoubleDouble

Yes, thank you. Turned up lots of articles.

September 18, 2022
On Sunday, 18 September 2022 at 20:33:03 UTC, Walter Bright wrote:
> Just like with cars where there is no substitute for horsepower, there's no substitute for extra precision.

Yes there is: use an algorithm with better convergence properties or one that is more numerically stable. If you have a problem with precision, you have other problems; you should fix those first.

> Yes, there are known techniques for dealing with roundoff error,

The cases where roundoff is a problem are usually one of:
* catastrophic loss of precision from (something like) finite-difference derivatives (algorithmic fix: use dual numbers, a.k.a. automatic differentiation; see the sketch after this list),
* massively different scales in the data you are working with. Your problem is poorly conditioned anyway, and you should try to separate the scales of your problem.
* you are doing _so_ many operations that (unacceptable) drift occurs, in which case why are you doing so many ops? Usually it's from something like an alternating series summation with terrible convergence. Algorithmic fix: improve your convergence properties.
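A sketch of the first point (illustrative only, supporting just + and *): a dual number carries the derivative through every operation, so there is no finite-difference step to cancel catastrophically.

```d
import std.stdio;

// Forward-mode automatic differentiation via dual numbers:
// val is f(x), der is f'(x), propagated by the usual calculus rules.
struct Dual
{
    double val;
    double der;

    Dual opBinary(string op)(Dual rhs) const
    {
        static if (op == "+")
            return Dual(val + rhs.val, der + rhs.der);
        else static if (op == "*")
            return Dual(val * rhs.val, der * rhs.val + val * rhs.der);
        else
            static assert(0, "operator " ~ op ~ " not implemented");
    }
}

void main()
{
    // f(x) = x*x + x, so f'(x) = 2x + 1; seed the derivative with 1.
    auto x = Dual(3.0, 1.0);
    auto y = x * x + x;
    writefln("f(3) = %s, f'(3) = %s", y.val, y.der); // 12 and 7
}
```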

> but these are complex and tricky and are for experts in this sort of thing.

Libraries are a thing; mir, for example, has many solutions to these kinds of problems. The only expertise required is to recognise that you have a problem that would be fixed by one of them.

> Adding more precision often "just works" for the rest.

It is a band-aid fix that doesn't address the root cause of your problem.
September 19, 2022
On 18.09.22 22:33, Walter Bright wrote:
> 
>> (There is a trick where you use two or more doubles to represent a number with more mantissa bits though and it is possible that with AVX, performance may be competitive with 80 bit floats, but vectorising that code manually is more work.)
> 
> I didn't know about this. Is there an article on how it works?

I picked it up back in high school for a CPU-based fractal rendering project using vectorization, multithreading and some clever algorithms for avoiding computations. (I should port it to D and publish it; a lot of it is Intel-style x86 inline assembly.) Unfortunately I don't recall all the online resources I found. I only needed addition and multiplication. Addition is rather straightforward, but multiplication required splitting the mantissa with Dekker's trick.
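For reference, the multiplication step looks roughly like this (a sketch assuming strict IEEE double arithmetic, not the actual code from that project):

```d
// Veltkamp's splitting: break a double into high and low halves with
// non-overlapping mantissa bits, so products of the halves are exact.
void split(double a, out double hi, out double lo)
{
    enum double splitter = 134_217_729.0; // 2^27 + 1
    double t = splitter * a;
    hi = t - (t - a);
    lo = a - hi;
}

// Dekker's exact product: p + e == a * b exactly (barring overflow).
// On hardware with fused multiply-add, e can instead be computed as
// fma(a, b, -p).
void twoProduct(double a, double b, out double p, out double e)
{
    p = a * b;
    double aHi, aLo, bHi, bLo;
    split(a, aHi, aLo);
    split(b, bHi, bLo);
    e = ((aHi * bHi - p) + aHi * bLo + aLo * bHi) + aLo * bLo;
}
```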

I think this is the original paper (it also credits several others with similar ideas; e.g. Kahan summation is in Phobos, and the multi-double data types are basically a generalization of that idea to other operations):
https://csclub.uwaterloo.ca/~pbarfuss/dekker1971.pdf
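(The idea it generalizes, in sketch form: compensated summation keeps the lost low-order bits in a running correction term.)

```d
// Kahan's compensated summation: c accumulates the part of each addend
// that was rounded away, and feeds it back into the next addition.
double kahanSum(const double[] xs)
{
    double s = 0, c = 0;
    foreach (x; xs)
    {
        double y = x - c;
        double t = s + y;
        c = (t - s) - y;   // what was lost when y was added to s
        s = t;
    }
    return s;
}
```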

This seems to be a more recent account of the double-double/quad-double approach; maybe it is easier to read:
https://web.mit.edu/tabbott/Public/quaddouble-debian/qd-2.3.4-old/docs/qd.pdf

I also found this Julia library: https://github.com/JuliaMath/DoubleFloats.jl

I used the search terms "dekker double-double floating point"; there might be even better articles out there.

(This is an example of an application where extending precision implicitly for some subset of calculations can give you worse overall results.)