November 25, 2014
On Tuesday, 25 November 2014 at 15:42:13 UTC, Kagamin wrote:
> Correctness is an emergent property: it arises when behavior matches expectation, so overflow has variable correctness in various parts of the code.

I assume you are basically endorsing Walter's view that matching C++ is more important than getting it right, because some people might expect C++ behaviour. Yet Ada chose a different path and is considered a better language with respect to correctness.

I think it is important to get the definitions consistent and sound so they are easy to reason about, both for users and implementors. So one should choose whether the type is primarily monotonic, with incorrect values "truncated into" modulo N, or whether it is primarily modular.

If addition is defined to be primarily monotonic, it means you can optimize "if(x < x+1)…" into "if (true)…". If it is defined to be primarily modular, then you cannot.
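To make the difference concrete, a minimal sketch (the function name is mine):

    bool alwaysLess(int x)
    {
        // Monotonic reading: x + 1 > x always holds, so a compiler
        // may fold this to `return true;`.
        // Modular reading (what D actually defines): false when
        // x == int.max, because int.max + 1 wraps to int.min.
        return x < x + 1;
    }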
November 25, 2014
On Tuesday, 25 November 2014 at 15:52:22 UTC, Ola Fosheim Grøstad wrote:
> I assume you are basically endorsing Walter's view that matching C++ is more important than getting it right, because some people might expect C++ behaviour. Yet Ada chose a different path and is considered a better language with respect to correctness.

The C++ legacy is huge, especially in culture. That said, the true issue is in beliefs (which probably stem from the 16-bit era). I can't judge Ada, having no experience with it, though the examples of Java and .NET show how marginal the importance of unsigned types is.

> I think it is important to get the definitions consistent and sound so they are easy to reason about, both for users and implementors. So one should choose whether the type is primarily monotonic, with incorrect values "truncated into" modulo N, or whether it is primarily modular.

In this light, the examples by Marco Leise become interesting: he tries to avoid wrapping even for unsigned types. So yes, types are primarily monotonic and optimized for small values.

> If addition is defined to be primarily monotonic, it means you can optimize "if(x < x+1)…" into "if (true)…". If it is defined to be primarily modular, then you cannot.

Such optimizations have a bad reputation. If they were more conservative and didn't propagate back through the code flow, the situation would probably be better. Also, isn't (x < x+1) a suspicious expression? Is it a good idea to mess with it?
November 25, 2014
On Tuesday, 25 November 2014 at 18:24:29 UTC, Kagamin wrote:
> The C++ legacy is huge, especially in culture. That said, the true issue is in beliefs (which probably stem from the 16-bit era). I can't judge Ada, having no experience with it, though the examples of Java and .NET show how marginal the importance of unsigned types is.

Unsigned bytes are important, and I personally tend to make just about everything unsigned when dealing with C-like languages, because that makes me aware of the pitfalls and lets me avoid the signedness issue.

The downside is that it takes extra work to get the evaluation order right, and you have to take extra care to make sure loops terminate correctly, being very conscious of ±1 issues when terminating around zero.
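A minimal sketch of the kind of ±1 pitfall I mean (names are mine, assuming ordinary D semantics):

    void sumBackwards(const int[] a)
    {
        // BUG (the naive form): `i >= 0` is always true for the
        // unsigned size_t, and decrementing past 0 wraps to
        // size_t.max, so this never terminates:
        //   for (size_t i = a.length - 1; i >= 0; --i) { ... }
        //
        // One conventional fix: keep the counter one above the index
        // and test before decrementing.
        for (size_t i = a.length; i > 0; --i)
        {
            auto x = a[i - 1]; // the actual element index
            // ... use x ...
        }
    }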

But I don't really think C++ legacy is a good reason to keep implicit coercion no matter what programming style one has. Coercion is generally something I try to avoid, even explicitly, so why would I want the compiler to do it with no warning?

> Such optimizations have a bad reputation. If they were more conservative and didn't propagate back through the code flow, the situation would probably be better. Also, isn't (x < x+1) a suspicious expression? Is it a good idea to mess with it?

It is just an example; it could be the result of substituting aliased values.
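A sketch of how it can arise (invented names, just for illustration):

    bool fitsBefore(int x)
    {
        int next = x + 1;  // alias computed elsewhere in the code
        return x < next;   // after value propagation this is exactly x < x + 1
    }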

Anyway, I think it is important not only to define what happens if you add 1 to 0xffffffff, but also to define whether that result is considered a valid value of the type. If it isn't a correct value for the type, then the programmer cannot assume that optimizations will heed the resulting incorrect value. The only acceptable alternative is to have the language specification explicitly define the type as modular and overflow-free. If not, you end up with weak typing…?
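(For what it's worth, D currently gives the modular answer for unsigned types, so the wrapped result is a correct value of the type; a one-line check, assuming current compiler semantics:)

    static assert(uint.max + 1 == 0); // 0xffffffff + 1 wraps to 0 by definition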

I personally would take the monotonic optimizations and rather have a separate bit-fiddling type that provides a clean built-in swiss-army-knife toolset, giving close-to-direct access to the whole arsenal the CPU instruction set provides (carry, ROL/ROR, bit counts etc.).
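As a rough illustration of what I have in mind (the Bits32 name and its interface are invented, not a proposal):

    import core.bitop : popcnt;

    struct Bits32
    {
        uint raw;

        // Rotations written portably; a compiler can lower these to ROL/ROR.
        Bits32 rotl(uint n) const
        {
            return Bits32((raw << (n & 31)) | (raw >> ((32 - n) & 31)));
        }

        Bits32 rotr(uint n) const
        {
            return Bits32((raw >> (n & 31)) | (raw << ((32 - n) & 31)));
        }

        int ones() const { return popcnt(raw); }

        // Full-width add that exposes the carry instead of silently wrapping.
        Bits32 addCarry(Bits32 rhs, out bool carry) const
        {
            immutable ulong wide = cast(ulong) raw + rhs.raw;
            carry = wide > uint.max;
            return Bits32(cast(uint) wide);
        }
    }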
November 26, 2014
When I migrated the dfl code from x86 to 64-bit, I modified drawing.d and found that 'offset', 'index', point(x,y), rect(x,y....) all stay consistent with the length's type. So instead of changing them all to size_t, I only cast(int) the length to int, and then it was easy to migrate the dfl code to 64-bit.
OK, dfl works on 64-bit now.
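Roughly the pattern, with illustrative names rather than the real dfl declarations:

    void drawItem(int offset, int index) { /* existing int-based API */ }

    void drawAll(const int[] items)
    {
        // items.length is size_t (64 bits on x86-64); narrowing it once
        // at the boundary keeps all the int-based calls compiling unchanged.
        int length = cast(int) items.length;
        foreach (i; 0 .. length)
            drawItem(i * 4, i);
    }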

November 27, 2014
On Tuesday, 25 November 2014 at 22:56:50 UTC, Ola Fosheim Grøstad wrote:
> I personally would take the monotonic optimizations and rather have a separate bit-fiddling type that provides a clean built-in swiss-army-knife toolset, giving close-to-direct access to the whole arsenal the CPU instruction set provides (carry, ROL/ROR, bit counts etc.).

I don't think there's such a clear separation that can be expressed in a type; it lies more in the coding practices used than in the type. You can't change coding practice by introducing a new type.
November 27, 2014
On Thursday, 27 November 2014 at 08:31:24 UTC, Kagamin wrote:
> I don't think there's such a clear separation that can be expressed in a type; it lies more in the coding practices used than in the type. You can't change coding practice by introducing a new type.

You need to separate and define the old types, as well as introduce a clean way to do low-level manipulation. How to do the latter is not as clear, but…

…regular types should be constrained to convey the intent of the programmer. The intent is conveyed to the compiler and to readers of the source-code. So the type definition should be strict on whether the intent is to convey monotonic qualities or circular/modular qualities.

The C practice of casting from void* to char* to float to uint to int in order to do bit manipulation leads to badly structured code. Intrinsics also lead to less readable code. There's got to be a better solution for keeping "bit hacks" separate from regular code. Maybe a register type that maps onto SIMD registers…
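A small sketch of the contrast (both versions compile; the point is structure, not capability):

    uint bitsViaPointerCast(float f)
    {
        return *cast(uint*)&f;  // the C-style punning criticized above
    }

    uint bitsViaUnion(float f)
    {
        union U { float f; uint u; }
        U u;
        u.f = f;
        return u.u;             // the punning is at least localized here
    }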
November 27, 2014
Kagamin:

> You can't change coding practice by introducing a new type.

We can try to change coding practice by introducing new types :-)

Bye,
bearophile