July 05, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Ola Fosheim Grøstad

On 5 July 2014 16:13, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Saturday, 5 July 2014 at 15:09:28 UTC, Iain Buclaw via Digitalmars-d wrote:
>>
>> This is a library problem, not a language problem. In this case std.math uses real everywhere when perhaps it shouldn't.
>
>
> If x/y leads to a division by zero trap when it should not, then it isn't a library problem.
>
Right, it's a quirk of the CPU.
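
[Editor's note: a minimal D sketch of the quirk under discussion, written for this archive rather than taken from the thread. Under full IEEE 754 gradual underflow, dividing by a subnormal value is well defined; with the SSE flush-to-zero / denormals-are-zero control bits set, the subnormal operand is read as zero and the same division signals divide-by-zero instead.]

```d
import std.math : isSubnormal;
import std.stdio : writeln;

void main()
{
    // Smallest positive normal double, pushed down into the subnormal range.
    double tiny = double.min_normal / 4;
    assert(tiny.isSubnormal);

    // Under IEEE 754 gradual underflow (x87, or SSE with FTZ/DAZ clear),
    // this is an ordinary, well-defined division.  With the SSE FTZ/DAZ
    // bits set, 'tiny' is treated as 0.0 and this expression signals
    // divide-by-zero -- the trap being discussed above.
    writeln(1.0 / tiny);
}
```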
July 05, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Iain Buclaw

On Saturday, 5 July 2014 at 16:24:28 UTC, Iain Buclaw via Digitalmars-d wrote:
> Right, it's a quirk of the CPU.

It's a precision quirk of floating point that has to be defined, and different CPUs follow different definitions. Within IEEE 754 it can of course also differ, since it does not prevent higher precision than specified.

http://en.wikipedia.org/wiki/Denormal_number
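
[Editor's note: a short D sketch, added here for illustration, of what the linked article describes. Below double.min_normal, IEEE 754 trades precision for range: each halving of a subnormal costs one bit of significand.]

```d
import std.stdio : writefln;

void main()
{
    // Walk from the smallest normal double down into the subnormal range.
    // Subnormals drop the implicit leading 1 bit, so precision degrades
    // gradually instead of underflowing straight to zero.
    double x = double.min_normal;
    foreach (i; 0 .. 4)
    {
        writefln("%a  subnormal? %s", x, x > 0 && x < double.min_normal);
        x /= 2;
    }
    // Whether a CPU handles these values in silicon, in microcode, or by
    // trapping to software is exactly the CPU-to-CPU difference above.
}
```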
July 06, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Ola Fosheim Grøstad

Ola Fosheim Grøstad:
> Both Delight and FeepingCreature appear to be alive.

I guess that is a good sign. I still like the idea of having in D something like the F# language directive "#light" that switches to a brace-less syntax:
http://msdn.microsoft.com/en-us/library/dd233199.aspx

I think the verbose syntax was at first the standard one in F#, but later the light one became the standard.

Bye,
bearophile
July 06, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to bearophile

On Sunday, 6 July 2014 at 17:19:24 UTC, bearophile wrote:
> I still like the idea of having in D something like the F# language directive "#light" that switches to a brace-less syntax:
I like compact syntax, like the reduction in clutter done with Go.
I've been thinking a lot about how to improve assignments in C-like languages. It seems to me that when it comes to immutable values, it makes sense to deal with them in a functional style: there is no real need to distinguish between a comparison for equality and a defined equality (assignment). Of course, that means boolean contexts have to be made explicit to avoid ambiguity (see the sketch below).
IMO reducing clutter becomes really important when you use the language for describing content that needs to be modified and extended.
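
[Editor's note: a hedged illustration of the ambiguity alluded to above. The executable part is ordinary D; the unified-'=' syntax exists only in the comments and is hypothetical.]

```d
void main()
{
    immutable x = 5;       // today: '=' defines, '==' compares
    bool eq = (x == 5);    // comparison is a separate operator
    assert(eq);

    // The idea above, in hypothetical syntax: for an immutable, a single
    // '=' could serve as both definition and equation,
    //     x = 5
    // but boolean contexts would then need to be explicit, because
    //     if (x = 5) ...
    // must not be ambiguous between a test and a definition.
}
```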
July 08, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright

On Friday, 4 July 2014 at 17:05:16 UTC, Walter Bright wrote:
> On 7/4/2014 3:38 AM, Don wrote:
>> What is "the longest type supported by the native hardware"? I don't know what that means, and I don't think it even makes sense.
>
> Most of the time, it is quite clear.
>
>> For example, Sparc has 128-bit quads, but they only have partial support. Effectively, they are emulated. Why on earth would you want to use an emulated type on some machines, but not on others?
>
> Emulation is not native support.

I think the only difference it makes is performance. But there is not very much difference in performance between double-double and implementations using microcode. E.g. PowerPC double-double operations require fewer clock cycles than x87 operations on the 286.

>> Perhaps the intention was "the largest precision you can get for free, without sacrificing speed", but that's not clearly defined. On x86-32, that was indeed 80 bits. But on other systems it doesn't have an obvious answer. On x86-64 it's not that simple. Nor on PPC or Sparc.
>
> Yes, there is some degree of subjectivity on some platforms. I don't see a good reason for hamstringing the compiler dev with legalese for Platform X that isn't quite the right thing to do for X.

I agree. But I think we can achieve the same outcome while providing more semantic guarantees to the programmer.

> I think the intention of the spec is clear, and the compiler implementor can be relied on to exercise good judgement.

The problem is that the developer cannot write code without knowing the semantics. For example, one of the original motivations for having 80-bit floats on the x87 was that for many functions, they give you correctly-rounded results for 'double' precision. If you don't have 80-bit reals, then you need to use far more complicated algorithms. If your code needs to work on a system with only 64-bit reals, then you have to do the hard work.

Something I've come to realize is that William Kahan's work was done in a world before generic programming. He had constraints that we don't have. Our metaprogramming system gives us great tools to get the highest accuracy and performance out of any processor. We can easily cope with the messy reality of real-world systems; we don't need to follow Java in pretending they are all the same. This is something we're good at!

A 'real' type that has system-dependent semantics is a poor man's generics. Take a look at std.math, and see all the instances of 'static if (real.mant_dig == ...)'. Pretty clearly, 'real' is acting as if it were a template parameter. And my experience is that any code which doesn't treat 'real' as a template parameter is probably incorrect. I think we should acknowledge this.
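
[Editor's note: a sketch, not code from the thread, of the 'static if (real.mant_dig == ...)' dispatch pattern Don cites from std.math. 'kernel' is a made-up placeholder function; the point is that the algorithm is selected by the significand width of T, so the floating type behaves as a template parameter.]

```d
import std.traits : isFloatingPoint;

// Hypothetical accuracy-sensitive routine: each significand width would
// get its own algorithm, exactly as std.math's branches on real.mant_dig.
T kernel(T)(T x) if (isFloatingPoint!T)
{
    static if (T.mant_dig == 64)        // x87 80-bit extended
        return x;                       // placeholder: extra bits are free here
    else static if (T.mant_dig == 113)  // IEEE binary128 (e.g. Sparc quad)
        return x;                       // placeholder
    else static if (T.mant_dig == 53)   // IEEE binary64: plain double, or a
        return x;                       // target where real == double; this is
                                        // where the "hard work" path would go
    else
        static assert(0, "unhandled significand width");
        // e.g. PPC double-double (mant_dig == 106) would need its own branch
}

void main()
{
    auto d = kernel(1.0);   // double: mant_dig == 53
    auto r = kernel(1.0L);  // real: which branch compiles is system-dependent
}
```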