October 19, 2014
On Sunday, 19 October 2014 at 09:04:59 UTC, eles wrote:
> That's complicated, to provide another language for describing the behavior.

I think D needs to unify UDAs, type traits, template constraints and other deductive facts and rules into a deductive database, in order to make it more pleasant and powerful, and also to provide the means to query that database from CTFE code. A commercial compiler could also speed up compilation of large programs with complex compile-time logic (by storing facts in a persistent high-performance database).

There are several languages to learn from when it comes to specifying "reach" in a graph/tree structure, e.g. XQuery.

You can view "@nogc func(){}" as a fact:

nogc('func).

or perhaps:

nogc( ('modulename,'func) ).

then you could list the functions that are nogc in a module using:

nogc( ('modulename,X) )
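For comparison, here is a minimal sketch of how such a per-module query can be approximated with today's compile-time reflection (std.traits and std.meta exist; the module name is a placeholder and the deductive database itself is hypothetical):

import std.meta   : Filter;
import std.traits : functionAttributes, FunctionAttribute;

// List the names of the @nogc functions of a module; roughly the query
// nogc(('modulename, X)) written imperatively.
template nogcFunctions(alias mod)
{
    template isNogc(string name)
    {
        static if (__traits(compiles,
                   functionAttributes!(__traits(getMember, mod, name))))
            enum isNogc = (functionAttributes!(__traits(getMember, mod, name))
                           & FunctionAttribute.nogc) != 0;
        else
            enum isNogc = false;  // not a function, so not a nogc fact
    }

    alias nogcFunctions = Filter!(isNogc, __traits(allMembers, mod));
}

// Usage (mymodule is a placeholder name):
//   pragma(msg, nogcFunctions!mymodule);  // prints matching names at compile time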

Same for type traits.

If you build it into the type system, then you can easily define new type constraints in complex ways.

(You could start with something simple, like specifying if values reachable through a parameter escape the lifetime of the function call.)

> And how? Embedded in the binary library?

The same way you would do it with C/C++ today.

Some binary formats allow extra meta-info, so it is possible… in the long term.

> Another idea would be to simply make the in and out contracts of a function exposed in the corresponding .di file, or at least a part of them (we could use "public" for those).

That's an option. Always good to start with something simple, but with an eye for a more generic/powerful/unified solution in the future.

> Anyway, as far as I can imagine it, it would be like embedding Polyspace inside the compiler and stub functions inside libraries.

Yes, or have a semantic analyser check, provide and collect facts for a deductive database, i.e. (a rough sketch of this lookup follows the list):

1. collect properties that are cheap to derive from source and build the database

2. CTFE: query property X

3. if the database query for X succeeds, return the result

4. collect properties that are more expensive to derive, guided by (2), and inject them into the database

5. return the result
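A rough sketch of that two-phase lookup, with all names hypothetical (this is not an existing compiler API):

struct FactDB
{
    bool[string] facts;  // property name -> truth value; cheap facts preloaded in step 1

    bool query(string prop, lazy bool expensiveAnalysis)
    {
        if (auto p = prop in facts)       // step 3: the cheap facts already answer it
            return *p;
        immutable r = expensiveAnalysis;  // step 4: run the expensive analysis on demand
        facts[prop] = r;                  //         and inject the result
        return r;                         // step 5
    }
}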

> For source code. But for closed-source libraries?

You need annotations. Or, now that you are getting stuff like PNaCl, maybe you can have closed-source libraries in an IR format that can be analysed.

>> 3. Remove int so that you have to specify the range and make typedefs local to the library
>
> Pascal arrays?

subrange variables:

var
  age:  0 .. 150;
  year: 1970 .. 9999;
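A D-side sketch of the same idea, for illustration (a hypothetical type, not an existing library):

struct Ranged(int lo, int hi)
{
    private int value = lo;
    invariant { assert(value >= lo && value <= hi); }

    this(int v) { opAssign(v); }

    void opAssign(int v)
    {
        assert(v >= lo && v <= hi, "value out of range");
        value = v;
    }

    int get() const { return value; }
    alias get this;  // reads back as a plain int
}

// Usage:
//   Ranged!(0, 150)     age  = 42;
//   Ranged!(1970, 9999) year = 2014;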


>> Lots of opportunities for improving "state-of-the-art".
>
> True. But a lot of problems too. And there is not much agreement on what is the state of the art...

Right, and it gets worse the less specific the use scenario is.

What should be created is a modular generic specification for a system programming language, based on what is, what should be, hardware trends and theory. Then you can see the dependencies among the various concepts.

C, C++, D, Rust can form a starting point.

I think D2 is too far along its own trajectory to be modified, so I view D1 and D2 primarily as experiments, which is important too.

But D3 (or some other language) should build on what the existing languages enable and unify concepts so you have something more coherent than C++ and D.

Evolution can only take you so far, then you hit the walls set up by existing features/implementation.
October 19, 2014
On Sunday, 19 October 2014 at 10:22:37 UTC, monarch_dodra wrote:
> Speed: How so?

All kinds of situations where you can prove that "expression1 > expression2" holds, but have no bounds on the variables.
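A small illustration of what goes away when overflow is defined to wrap:

bool alwaysTrue(int x) { return x + 1 > x; }

// With C's undefined overflow the optimiser may fold this to "return true";
// with wrapping semantics it cannot, since x + 1 wraps to int.min when
// x == int.max and the comparison is then false.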

> Portability: One issue to keep in mind is that C works on *tons* of hardware. C allows hardware to follow either two's complement, or one's complement. This means that, at best, signed overflow can be implementation defined, but not defined by spec. Unfortunately, it appears C decided to outright go the undefined way.

I think you might be able to make it defined like this:

1. overflow is illegal and should not limit reasoning about monotonicity

2. after an overflow, accessing a derived result may yield a value where the extra bits of the overflow were either propagated into a wider representation or truncated away

This is slightly different from "undefined".

:-)
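To make the propagated-vs-truncated distinction concrete, here is how it looks with D's integer promotion on ubyte (real behaviour, used only as an illustration):

void example()
{
    ubyte a = 200, b = 100;
    int   wide   = a + b;               // promoted to int: 300, the extra bit is kept
    ubyte narrow = cast(ubyte)(a + b);  // truncated back to 8 bits: 44
}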

> Correctness: IMO, I'm not even sure. Yeah, use int for numbers, but stick to size_t for indexing. I've seen too many bugs on x64 software when data becomes larger than 4G...

Sure, getting C types right and correct is tedious. The type system does not help you a whole lot, and D and C++ do not make it much better. Maybe the implicit conversions are a bad thing.

In machine language there is often no difference between signed and unsigned instructions, which can be handy, but the typedness of multiplication, e.g. "u64 mul(u32 a, u32 b)", is actually better than in C-family languages. Multiplication over int is dangerous!
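The pitfall in the C family, for illustration (D behaves the same way here):

void example()
{
    uint  a = 100_000, b = 100_000;
    ulong r = a * b;  // not 10_000_000_000: the product is computed in 32 bits
                      // and wraps before the widening assignment
}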

Before compilers got good at optimization I viewed C as an annoying assembler. I assumed wrapping behaviour and wanted an easy way to reinterpret_cast between ints and uints (in C it gets rather ugly).

These days I take the view that programmers should be explicit about "bit-crushing" operations. Maybe even for multiplication. If you are forced to explicitly truncate() when the compiler fails to rule out overflow, then the problem areas also become more visible in the source code:

"uint r = a*b/N" might overflow badly even if r is large enough to hold the result.

"uint r = truncate(a*b/N)" makes you aware that you are on thin ice.
October 19, 2014
On Sunday, 19 October 2014 at 10:45:31 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 19 October 2014 at 09:04:59 UTC, eles wrote:

I mostly agree with all that you are saying; still, I am aware that much effort and coordination would be needed. OTOH, this would give D (and, by extension, the future of computing) a non-negligible edge (being able to optimize across libraries).


> Some binary format allow extra meta-info, so it is possible… in the long term.

Debug builds could be re-used for that, with some minor modifications, I think.

>> Another idea would be to simply make the in and out contracts of a function exposed in the corresponding .di file, or at least a part of them (we could use "public" for those).
>
> That's an option. Always good to start with something simple, but with an eye for a more generic/powerful/unified solution in the future.


I think it would not turn out that bad. For the time being, putting the contracts in the .di files would cost almost nothing (except disk space). And, progressively, the compiler could be made to integrate those, when .di files with contracts are available, in order to optimize the builds. It would be plain D code, so very easy to interpret. Basically, the optimizer would have at hand the set of asserts that constrain the behaviour of that function.

Would anybody else like to comment on this?

> But D3

People here traditionally don't like that word, but it has been unleashed several times on the forum. Maybe the need is not that pressing, but I think a somewhat disruptive "clean up, clarify and fix glitches and bad legacy" release of D(2) is more and more needed, and quite accepted as a good thing by the community (which is ready to take the effort to bring code up to date).
October 20, 2014
On 10/19/2014 1:56 AM, Iain Buclaw via Digitalmars-d wrote:
> Good thing that overflow is strictly defined in D then. You can rely on
> overflowing to occur rather than be optimised away.

Yeah, but one has to be careful when using a backend designed for C that it doesn't use the C semantics on that anyway.

(I know the dmd backend does do the D semantics.)

October 20, 2014
On Monday, 20 October 2014 at 06:17:40 UTC, Walter Bright wrote:
> On 10/19/2014 1:56 AM, Iain Buclaw via Digitalmars-d wrote:
>> Good thing that overflow is strictly defined in D then. You can rely on
>> overflowing to occur rather than be optimised away.
>
> Yeah, but one has to be careful when using a backend designed for C that it doesn't use the C semantics on that anyway.


8-I

And here I was hoping that Iain was being ironic!


If you want to support wrapping you could do it like this:

int x = @@wrapcalc( y + DELTA );

And clamping:

int x = @@clampcalc( y + DELTA );

And overflow:

int x = y + DELTA;
if(x.status != 0){
  x.status.carry…
  x.status.overflow…
}

or

if(@@overflowed( x=a+b+c+d )){
    if(@@overflowed( x=cast(somebigint)a+b+c+d )){
        throw …
    }
}


or

int x = @@throw_on_overflow(a+b+c+d)
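For comparison, the "check for overflow" variant can already be written with druntime's core.checkedint; a sketch (the @@... syntax above is hypothetical):

import core.checkedint : adds;

int throwOnOverflow(int a, int b, int c, int d)
{
    bool overflow = false;  // sticky flag: set by adds() on overflow, never cleared
    immutable x = adds(adds(adds(a, b, overflow), c, overflow), d, overflow);
    if (overflow)
        throw new Exception("integer overflow");
    return x;
}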
