October 14, 2002
"Sandor Hojtsy" <hojtsy@index.hu> wrote in message news:aoe1tg$2pe5$1@digitaldaemon.com...
>
> "Mark Evans" <Mark_member@pathlink.com> wrote in message news:aoa2uk$1v2k$1@digitaldaemon.com...
> > Walter wrote:
> >
> > >IMHO : it would be nice to have a language defined by its semantics and not its implementation.
> >
> > Exactly the point I keep trying to make,
>
> I also agree.

Me too

> I still think it would be useful to make a detailed comparison to C++ parameter passing semantics.
>
> 1 - Primitive types
> ------------------
>
> 1.1 - pass by value
> in C++:   void fn(int a);
> in D:       void fn(in int a);
>
> 1.2 - pass by value, immutable
> in C++:   void fn(const int a);
> in D:       ABSENT

I think this is wrong; "in" value parameters should be equivalent to C++'s const value parameters.

1.1 - pass by value
in C++:   void fn(int a);
in D:       ABSENT

1.2 - pass by value, immutable
in C++:   void fn(const int a);
in D:       void fn(in int a);

> 1.3 - pass by reference/address
> in C++:   void fn(int &a);
> in D:       void fn(out int a); / void fn(inout int a);
>
> 1.4 - pass by reference/address, immutable
> in C++:  void fn(const int &a);
> in D:       ABSENT
> This is not the same as 1.2, from a semantic point of view. I can provide an example if needed.

I think we need one of these.  Although

1.4 - pass by reference/address, immutable
in C++:  void fn(const int &a);
in D:       void fn(in int a);

with a type more complex than int, the compiler would be free to pass by reference so long as in parameters are not mutable.

> 1.5 - pass by copy/copyback
> in C++:   ABSENT
> in D:       ABSENT
>
> 2 - Objects
> ------------
>
> 2.1 - pass by value
> in C++:  void fn(Object a);
> in D:      ABSENT
>
> 2.2 - pass by value, immutable
> in C++:  void fn(const Object a);
> in D:   ABSENT
>
> 2.3 - pass by reference/address
> in C++: void fn(Object &a);
> in D:      void fn(in Object a);
>
> 2.4 - pass by reference/address, immutable
> in C++: void fn(const Object &a);
> in D:     ABSENT
>
> 2.5 - pass by copy/copyback
> in C++:  ABSENT
> in D:      ABSENT
>
> 2.6 - pass by reference/address of reference/address
> in C++:   void fn(Object *&a);
> in D:       void fn(out Object a); / void fn(inout Object a)
>
> 2.7 - pass by reference/address of reference/address, immutable
> in C++:   void fn(const Object *&a);
> in D:       ABSENT
>
> 3 - Arrays
> -----------
>
> 3.1 - pass by value of the items (copy)
> in C++:  void fn(vector<int> a);
> in D:       ABSENT
>
> 3.2 - pass by value of the items (copy), immutable
> in C++:  void fn(const vector<int> a);
> in D:      ABSENT
>
> 3.3 - pass by reference/address to the items
> in C++:  void fn(int *a, int len);
> in D:       void fn(in int[] a);
> Neither of them is elegant. You cannot resize the array, but you can modify the items.
>
> 3.4 - pass by reference/address to the items, immutable items
> in C++: void fn(const int *a, int len);
> in D:  ABSENT
>
> 3.5 - pass by copy/copyback
> in C++:  ABSENT
> in D:      ABSENT
>
> 3.6 - pass by reference/address of reference/address of the items
> in C++:  void fn(vector<int> &a);
> in D:      void fn(out int[] a) / void fn(inout int[] a)
>
> 3.7 - pass by reference/address of reference/address of the items, immutable items
> in C++:  void fn(const vector<int> &a);
> in D:      ABSENT
>
>
> Conclusion:
> - Neither language provides copy/copyback.
> - D provides pass-by-value semantics only for primitive types.
> - D does not provide immutable parameter semantics.
> - D extends and separates the concept of pass-by-reference into "out" and
> "inout" parameters.
>
> Sandor

I think immutable parameters are important (you can't pass const data to
anything that's mutable).

You need to be able to ensure that immutability covers the entire object, all its members, what they point to, etc.  Otherwise it makes holes that bugs can sneak out of.

Sean


October 14, 2002
Sean wrote
>You need to be able to ensure that immutability covers the entire object, all its members, what they point to, etc.

Yes I agree 100% -- that's a proper contract definition which will kill many bugs.


Sandor wrote
> > I find this an exceptionally confusing way to define 'in' and not a very useful one
> They are useful. Thousands of Java developers can live with it. But IMHO they can be made better, especially concerning passing arrays.

Any "contract" whose specification is a set of implementation details is useless from a design-by-contract standpoint.  Here's the only specification we currently have for the 'in' contract:  "big objects use references while little objects don't, at the whim of the compiler design."  That's not any kind of contract I care to use.  (The Java analogy is also completely broken.)

We're losing sight of what contracts are meant to do.  The idea of design-by-contract, as a theory, is to codify what non-DBC languages implement as "coding conventions."  Instead of focusing on implementation details, we should first ask what conventions D wishes to embody.  Otherwise we just end up with a new syntax for doing C++ instead of a contract-based language.

My proposal is that this problem can be analyzed along a small handful of orthogonal dimensions.  Mutability, information flow direction, ownership, perhaps a few others.

Not all languages sporting keywords that hint at design-by-contract underpinnings actually have them.  (I'm even a bit worried about D ever becoming a DBC language.)  C++ has 'const' but that's all.  Sandor mentioned Java. Walter has also mentioned IDL as an inspiration for in/out/inout.  None of these languages is design-by-contract.  Perhaps the only real DBC language is Eiffel. Here's some supporting data:

"... fundamental differences between the Java and Eiffel object models [include]
.. design by contract vs. wishful thinking ..."
from URL:
ftp://rtfm.mit.edu/pub/usenet/news.answers/eiffel-faq

"It is regrettable that this lesson [design-by-contract] has not been heeded by such recent designs as Java (which added insult to injury by removing the modest assert instruction of C!), IDL (the Interface Definition Language of CORBA, which is intended to foster large-scale reuse across networks, but fails to provide any semantic specification mechanism), Ada 95 and ActiveX.  For reuse to be effective, Design by Contract is a requirement. Without a precise specification attached to each reusable component -- precondition, postcondition, invariant -- no one can trust a supposedly reusable component." from URL: http://archive.eiffel.com/doc/manuals/technology/contract/ariane/page.html

"Interface Definition Languages as we know them today are doomed."
[Should D then bother with interfaces at all, or use in/out/inout from IDL?]
from a father of design-by-contract at URL:
http://archive.eiffel.com/doc/manuals/technology/bmarticles/sd/contracts.html

Java and IDL require add-on libraries to do proper design-by-contract, e.g.
http://www.javaworld.com/javaworld/jw-02-2001/jw-0216-cooltools.html
http://www.reliable-systems.com/tools/iContract/iContract.htm
http://www.javaworld.com/javaworld/jw-02-2002/jw-0215-dbcproxy.html
http://citeseer.nj.nec.com/40586.html
http://www.cse.iitb.ernet.in/~rkj/COMContracts.ps.gz

I'm no design-by-contract expert but I admire the whole concept and hope that D will indeed live up to it.  I do not see C++, Java, or IDL offering anything truly substantial in this direction.  Eiffel would be a better source of inspiration.  From the Eiffel FAQ:

ftp://rtfm.mit.edu/pub/usenet/news.answers/eiffel-faq

"Eiffel is a pure, statically typed, object-oriented language. Its modularity is based on classes. Its most notable feature is probably design by contract. It brings design and programming closer together. It encourages maintainability and the re-use of software components.

"Eiffel offers classes, multiple inheritance, polymorphism, static typing and dynamic binding, genericity (constrained and unconstrained), a disciplined exception mechanism, systematic use of assertions to promote programming by contract.

"Eiffel has an elegant design and programming style, and is easy to learn.

"An overview is available at http://www.eiffel.com/doc/manuals/language/intro/"

Mark


October 15, 2002
"Sean L. Palmer" <seanpalmer@directvinternet.com> wrote in message news:aoeu4k$nc2$1@digitaldaemon.com...
>
> "Sandor Hojtsy" <hojtsy@index.hu> wrote in message news:aoe1tg$2pe5$1@digitaldaemon.com...
> >
> > "Mark Evans" <Mark_member@pathlink.com> wrote in message news:aoa2uk$1v2k$1@digitaldaemon.com...
> > > Walter wrote:
> > >
> > > >IMHO : it would be nice to have a language defined by its semantics and not its implementation.
> > >
> > > Exactly the point I keep trying to make,
> >
> > I also agree.
>
> Me too
>
> > I still think it would be useful to make a detailed comparison to C++ parameter passing semantics.
> >
> > 1 - Primitive types
> > ------------------
> >
> > 1.1 - pass by value
> > in C++:   void fn(int a);
> > in D:       void fn(in int a);
> >
> > 1.2 - pass by value, immutable
> > in C++:   void fn(const int a);
> > in D:       ABSENT
>
> I think this is wrong; "in" value parameters should be equivalent to C++'s const value parameters.


For primitive types, what use would that have?


> 1.1 - pass by value
> in C++:   void fn(int a);
> in D:       ABSENT
>
> 1.2 - pass by value, immutable
> in C++:   void fn(const int a);
> in D:       void fn(in int a);
>
> > 1.3 - pass by reference/address
> > in C++:   void fn(int &a);
> > in D:       void fn(out int a); / void fn(inout int a);
> >
> > 1.4 - pass by reference/address, immutable
> > in C++:  void fn(const int &a);
> > in D:       ABSENT
> > This is not the same as 1.2, from a semantic point of view. I can provide an example if needed.
>
> I think we need one of these.  Although

A C++ example:

const int *p;

void fn(const int a)
{
  p = &a;    // stores the address of fn's local copy (dangling after return)
}

void fn2(const int &a)
{
  p = &a;    // stores the address of the caller's original object
}

These functions have different results, so 1.4 is not the same as 1.2.


> 1.4 - pass by reference/address, immutable
> in C++:  void fn(const int &a);
> in D:       void fn(in int a);
>
> with a type more complex than int, the compiler would be free to pass by reference so long as in parameters are not mutable.

But that would have different side effects.

> > 1.5 - pass by copy/copyback
> > in C++:   ABSENT
> > in D:       ABSENT
> >
> > 2 - Objects
> > ------------
> >
> > 2.1 - pass by value
> > in C++:  void fn(Object a);
> > in D:      ABSENT
> >
> > 2.2 - pass by value, immutable
> > in C++:  void fn(const Object a);
> > in D:   ABSENT
> >
> > 2.3 - pass by reference/address
> > in C++: void fn(Object &a);
> > in D:      void fn(in Object a);
> >
> > 2.4 - pass by reference/address, immutable
> > in C++: void fn(const Object &a);
> > in D:     ABSENT
> >
> > 2.5 - pass by copy/copyback
> > in C++:  ABSENT
> > in D:      ABSENT
> >
> > 2.6 - pass by reference/address of reference/address
> > in C++:   void fn(Object *&a);
> > in D:       void fn(out Object a); / void fn(inout Object a)
> >
> > 2.7 - pass by reference/address of reference/address, immutable
> > in C++:   void fn(const Object *&a);
> > in D:       ABSENT
> >
> > 3 - Arrays
> > -----------
> >
> > 3.1 - pass by value of the items (copy)
> > in C++:  void fn(vector<int> a);
> > in D:       ABSENT
> >
> > 3.2 - pass by value of the items (copy), immutable
> > in C++:  void fn(const vector<int> a);
> > in D:      ABSENT
> >
> > 3.3 - pass by reference/address to the items
> > in C++:  void fn(int *a, int len);
> > in D:       void fn(in int[] a);
> > Neither of them is elegant. You cannot resize the array, but you can modify the items.
> >
> > 3.4 - pass by reference/address to the items, immutable items
> > in C++: void fn(const int *a, int len);
> > in D:  ABSENT
> >
> > 3.5 - pass by copy/copyback
> > in C++:  ABSENT
> > in D:      ABSENT
> >
> > 3.6 - pass by reference/address of reference/address of the items
> > in C++:  void fn(vector<int> &a);
> > in D:      void fn(out int[] a) / void fn(inout int[] a)
> >
> > 3.7 - pass by reference/address of reference/address of the items, immutable items
> > in C++:  void fn(const vector<int> &a);
> > in D:      ABSENT
> >
> >
> > Conclusion:
> > - Neither language provides copy/copyback.
> > - D provides pass-by-value semantics only for primitive types.
> > - D does not provide immutable parameter semantics.
> > - D extends and separates the concept of pass-by-reference into "out" and
> > "inout" parameters.
> >
> > Sandor
>
> I think immutable parameters are important (you can't pass const data to
> anything that's mutable).

If a copy is passed it is not a problem. But with D's reference-only object passing concept, you can't always pass copies. So there is an even greater need for const parameters.

> You need to be able to ensure that immutability covers the entire object, all its members,

Yes.

> what they point to, etc.

No.

> Otherwise it makes holes that bugs can sneak out of.

Const parameters would be useful not only to ease bug-free coding, but to increase expressive power and help self-documentation.

Sandor



October 15, 2002
What contract semantics do you wish to implement?  If none, then what does interest you?  Making every possible calling convention available in D?  Perhaps C++ has too many calling permutations; have you thought of that?

Walter is right that 'const' was a C++ feature that never worked out.  Not because contracts are bad, but because 'const' is a poor man's version of design-by-contract.  Immutability is important, but to say that we want 'const,' without tying it into contracts, is to mire D in the mistakes of C++.

I would rather define a contract for the call, and have the compiler figure out which calling convention makes sense, under that contract, for that particular data type.  The important thing is semantic consistency of the contract across all data types.  We want a compiler that does more for us than a C++ compiler. (Otherwise we'd just use C++.)

If we think at the lower level of charting C++ call permutations, then we are not really designing by contract.  We're just doing C++ with new syntax, "contract paint" if you will.

To make that remark explicit, I'd like someone to explain how any of these comparisons shed light on my original idea about transfer contracts.  I think we've lost the ball.

Mark



October 16, 2002
"Mark Evans" <Mark_member@pathlink.com> wrote in message news:aohpat$hdb$1@digitaldaemon.com...
> What contract semantics do you wish to implement?

Pass by reference.
Pass by reference to inout value.
Pass by reference to out value.
Pass by reference to immutable.

You think these are implementation details, which could and therefore should be hidden from the user. But I think these are *semantic* concepts, and I also don't care how they are implemented, as long as they provide the semantics. And these semantics could not be hidden from users after all.

> If none, then what does
> interest you?  Making every possible calling convention available in D?

No.

> Perhaps C++ has too many calling permutations; have you thought of that?

I don't think C++ has too many of them. But if you don't like a passing convention, you can avoid it and use only a limited subset - a dumbed-down C++.

> Walter is right that 'const' was a C++ feature that never worked out.  Not because contracts are bad, but because 'const' is a poor man's version of design-by-contract.  Immutability is important, but to say that we want 'const,' without tying it into contracts, is to mire D in the mistakes of C++.

I used 'const' in C++ with success. Can you provide an example where 'const' is misused and/or part of a bad design?
From the contract point of view: 'const' is a contract that the called function will not change the object. Whereas 'in' specifies that changes to the passed/original object will not be incorporated into the original/passed object. They specify distinct semantic details. I don't see the problem. Mark has written "To me 'in' means that data is immutable by the callee". But that is what 'const' means. And you still need to specify *which* data: the reference or the referred?

> I would rather define a contract for the call, and have the compiler figure out which calling convention makes sense, under that contract, for that particular data type.

In some functions you have to, and will,
1) store the address of, or a reference to, the passed object;
2) make use of the *semantic* rule that changes to the object are (or are not) immediately incorporated into the original object, and vice versa.
What would the compiler do in those situations? Undefined behaviour?

> The important thing is semantic consistency of the contract across all data types.  We want a compiler that does more for us than a C++ compiler. (Otherwise we'd just use C++.)

IMHO, in the current parameter passing conventions, it does less than a C++ compiler.

> If we think at the lower level of charting C++ call permutations, then we are not really designing by contract.

C++ call convention syntax was borne out of semantic need. I was trying to demonstrate the semantics that D has no syntax/contract for.

> We're just doing C++ with new syntax, "contract paint" if you will.

C++ syntax is not important, I can leave it behind. D already has semantics that C++ doesn't. But behind the C++ syntax there lies some more semantics that D lacks, and needs.

> To make that remark explicit, I'd like someone to explain how any of these comparisons shed light on my original idea about transfer contracts.  I think we've lost the ball.

Hmm. I started this thread with the subject "Re: passing arrays as "in" parameter with surprising results". With the particular example of array passing as "in", I wanted to point out weak points in the current D parameter passing docs, semantics and syntax. Then you started a sub-thread about ownership transfer through parameter passing. I found some interesting general opinions about parameter passing (such as the semantic meaning of 'in', which is not specified in the docs), and replied detailing my opinion on the subject. I think I still have my ball.

Sandor



November 05, 2002
"Mike Wynn" <mike.wynn@l8night.co.uk> wrote in message news:ao8u15$raq$1@digitaldaemon.com...
> IMHO : it would be nice to have a language defined by its semantics and not its implementation.

Yes, but then a large risk is run of having some nitpick in the semantics putting huge burdens on the resulting code (such as Java's 64 bit floating point precision that just doesn't sit right with the Intel FPU).


November 06, 2002
"Walter" <walter@digitalmars.com> wrote in message news:aq915r$2iln$2@digitaldaemon.com...
>
> "Mike Wynn" <mike.wynn@l8night.co.uk> wrote in message news:ao8u15$raq$1@digitaldaemon.com...
> > IMHO : it would be nice to have a language defined by its semantics and not its implementation.
>
> Yes, but then a large risk is run of having some nitpick in the semantics putting huge burdens on the resulting code (such as Java's 64 bit floating point precision that just doesn't sit right with the Intel FPU).
>
and Java's float/double semantics have changed because of that, but I'm not
convinced that it was the right thing to do, especially with Java, which is
intended as "write once run anywhere" (tm). Once you allow float and double
operations to be performed at precisions greater than float or double, your
code may give different results on different architectures.
In C an int is defined as the most efficient size for the platform, a short
is no longer than an int, and a long is no shorter than an int; many
languages and systems (Java, C#, JavaVM, intent, CLR) have fixed the sizes:
short to 16 bits, int to 32, and long to 64 bits.
intent, like Java, has IEEE 754 32 bit floats and 64 bit doubles (and intent
also has a 16:16 fixed type). (I can't find any info on intermediate values.)
I believe that systems such as intent (www.tao.co.uk) and Java have shown
their worth on embedded systems because they do have such strict semantics.
As a developer you know not only that your code will run the same on a
different system, but that it will run on tomorrow's and not just today's
architectures.
This is also true for desktop systems: PS2s have Linux; will D ever run on
that, or on Cobalt Qubes, Netwinders, PowerMacs or Sun Sparc boxes?

I believe that the "cost" of imposing strict semantics is outweighed by the benefits.

Will D support the AMD x86-64 architecture, which supports IEEE-754 floating point numbers?




November 06, 2002
Somehow, it appears that this thread has gotten away from "Array ownership", so I renamed my post...

I believe that both answers are right.  Even more than that, I believe that if you restrict yourself to either answer alone, it is wrong.  Sometimes it doesn't matter how fast you can calculate the answer, because if you get different answers on different machines then the answer is "wrong".  For this kind of problem, you need rigidly defined types, that follow a specific behavior on all platforms.  Sometimes it doesn't matter how correct your answer is, because if you can't calculate it fast enough then the answer is "wrong" (games are a strong contender here -- the answer only has to be "good enough", but it definitely has to be "fast enough").  For these applications, you need native support.

I think the intrinsic types should contain things like:

fast16 - the fastest type of native integral value that holds at least 16 bits
tight16 - the smallest type of native integral value that holds at least 16 bits
exact16 - an exactly 16 bit integer, even if it requires software munging (useful for HW overlays, file structures, network protocols -- painfully slow, but sometimes you don't care)

To ease compiler implementation, not all compilers would have to implement all versions.  I suspect that it could rapidly become a differentiating feature, once multiple compilers were available for a platform.  I think I would require the 'fast' and 'tight' versions, since all they require is a choice at the time the compiler is written -- no additional code is necessary.  'exact' would require additional coding inside the compiler, so it might be an optional feature.  (Having said all of that, I *HATE* optional features, because somehow they never seem to get implemented...)

I would actually prefer:
fast<16>, tight<16>, and so on.  That way, I can smoothly progress from built in
types to bignum routines, with no change to my syntax except to pick a bigger
number of bits.  And if I have to use numbers that currently require bignums
(say, a fast<200>), but somebody eventually comes out with a 256bit CPU, maybe
my code speeds up by an additional order of magnitude (above the clock and other
architectural speedups), without a rewrite.

For floats, I'm not sure what syntax would be better.  Maybe something like:
float<32,native>
float<64,ieee754>   /* if that is even meaningful -- pardon my ignorance of the 754 spec */
complex<80,native>  /* native if possible, simulated if necessary */

Again, this would allow me to specify any precision I wanted, potentially incurring software routine penalties on some machines, but being able to run natively on others.  If I want native performance more than specific answers (a physics engine for games, versus a physics engine for engineering, for example) then I can specify that I want "native" floating point support.

I'm not trying to pick a particular syntax (although I definitely prefer parameterized types over any finite list of hardcoded type names).  I'm just trying to say that both concerns are valid, and I would *REALLY* like to see a language that could support both needs by giving the programmer final control over how his or her program behaves, for every single variable.

And as for the syntax being readable by C programmers -- C99 already has something similar to this.  They use the "finite list of hardcoded type names" that irritates me, but they have realized that you need both portability and performance, and that the programmer has to make the decision, not the compiler.


Let me clarify that last statement -- the compiler can tell which variables are heavily used, and if it is told that the system should be optimized for speed, then the compiler knows a *LOT* about how to tweak out the platform.  What the compiler does *not* know, however, is the larger context of the programmer's work.  It may be more important for the programmer to get certain precise results -- maybe because the code is going in a homogenous cluster, or maybe because it is having to calculate some type of checksum that relies on oddities of a particular size of integer, or maybe something entirely else.  But the compiler has no way to know that, unless you provide a syntax for specifying the difference between "fast behavior" and "exact behavior".  And once you've done that, "tight behavior" is sort of a middle ground -- not as fast as "fast", and not as exact as "exact", but still useful for systems that are both time *and* space constrained.

To continue with the readability issue : every C/C++ programmer who has worked on large, long-term projects, has ended up either making or using someone else's giant list of #ifdef'd typedefs that constructs a list of INT16, UINT32, etc. And while you *could* do it this way in D, why?  That just ends up with 20 different sets of definitions, which makes it a pain in the behind to bring multiple libraries together.  Each library may define its own "magic" types, and then the integrator is left with code that looks schizophrenic because it can't make up its mind which set of types to use.  If the language gave a clear and flexible typing system, this would not be an issue -- no one would make the typedefs, because they wouldn't be needed.  And if the parameterized types were the only way to get integral types, it would never even occur to anyone to think about typedef'ing types.  fast<16> isn't that much harder to type than INT16. If the <> is annoying, the actual syntax *could* just be fast16, fast32, fast736, etc.  The compiler *could* work it out.  But the <> would make it a bit more obvious that parameterization was taking place.

Just my 2 cents, which always seems to cost several dollars for me to say...
Mac

P.S. Since it's been so long since you saw my original point, let me reiterate it for a bit of focus: "I believe that both answers (fast native types and potentially slow exact semantic types) are right.  Even more than that, I believe that if you restrict yourself to either answer alone, it is wrong."



November 06, 2002
"Mike Wynn" <mike.wynn@l8night.co.uk> wrote in message news:aqbaa0$1sq7$1@digitaldaemon.com...
> and Java's float/double semantics have changed because of that,

I didn't know that.

> I believe that systems such as intent (www.tao.co.uk) and Java have shown their worth on embedded systems because they do have such strict semantics.

I'm not convinced that Java's success on embedded systems is due to strict semantics. I don't have much good information on why it has succeeded in that environment - one possibility is the Java bytecode is very compact, and so one can cram a lot of functionality into add-in modules (given the existence of the VM on the base unit). Java's resistance to crashing also has a lot of appeal in embedded systems.

> As a developer you know not only will your code run the same on a different system, but will run on tomorrow's and not just today's architectures.

I know many Java developers who find the reverse is true - they cannot predict which VM will be running their code, and different VMs have different implementations, behavior, and bugs, and so it becomes an impossible task to write code that actually will run everywhere.

> this is also true for desktop systems, PS2 have Linux, will D ever run on that or Cobalt Qubes, Netwinders, PowerMacs or Sun Sparc boxes ?

D running on those systems is gated by there being a D compiler for them, just as Java is gated by there being a Java VM on them. The Java VM itself requires a lot of work to port. Once D is married to the GCC code generator, it should be far easier to support a new system for D than for Java.

> I believe that the "cost" of imposing strict semantics is outweighed by the benefits.

For many purposes, yes, but D will have bendable semantics so that it can be efficiently implemented on a wide variety of machines. But not quite as bendable as C is (no one's complement!).

> Will D support the AMD x86-64 architecture, which supports IEEE-754 floating point numbers?

There is no barrier to D supporting it other than time & effort to retarget the compiler. That's why the GCC version of D is so important, it opens the door to all those other processors and systems.


November 07, 2002
"Walter" <walter@digitalmars.com> wrote in message news:aqbppn$2djv$1@digitaldaemon.com...
>
> "Mike Wynn" <mike.wynn@l8night.co.uk> wrote in message news:aqbaa0$1sq7$1@digitaldaemon.com...
> > and Java's float/double semantics have changed because of that,
>
> I didn't know that.
>
http://java.sun.com/products/jdk/1.2/previous-changes.html#a0

(basically, it allows a VM to perform intermediate float/double ops at better
than the original precision)

> > I believe that systems such as intent (www.tao.co.uk) and Java have
> > shown their worth on embedded systems because they do have such strict
> > semantics.
>
> I'm not convinced that Java's success on embedded systems is due to strict
> semantics. I don't have much good information on why it has succeeded in
> that environment - one possibility is that Java bytecode is very compact,
> and so one can cram a lot of functionality into add-in modules (given the
> existence of the VM on the base unit). Java's resistance to crashing also
> has a lot of appeal in embedded systems.
>
If you look at the embedded Java VMs they are all different: some are
interpreters, some are just-in-time or dynamic compilers, and some are
ahead-of-time compilers. So compact bytecode is only part of the reason for
their use; class loading and unloading may be another, and the ability to
combine ahead-of-time compiled code with dynamically loaded code yet
another. But all of this is only possible because there are strict rules
imposed on the systems.
Java may be a bad language to compare D with, because many people merge the
Java language and the Java VM together; they are in fact completely separate
entities. The VM has only been changed a little (subtle changes to
invokespecial) whilst the language has been enhanced to allow inner classes
and other features (all syntactic sugar).
My comments were more about Java the language, which imposes strict semantic
behaviour (much of it, it is true, inherited from the VM's behaviour), such
as when class loading and static initialisers are run, the order in which
expressions are evaluated, and the atomicity of operations (although some
actions, such as writing non-volatile doubles, are VM-specific). And the
Java language can be ahead-of-time compiled and still conform to Sun's Java
spec (without going to Java bytecode first).


> > As a developer you know not only will your code run the same on a
> > different system, but will run on tomorrow's and not just today's
> > architectures.
>
> I know many Java developers who find the reverse is true - they cannot predict which VM will be running their code, and different VMs have different implementations, behavior, and bugs, and so it becomes an impossible task to write code that actually will run everywhere.
>
I know Sun's various "Platforms" cause a few headaches (J2ME, the KVM,
etc.), but on desktop Java I've only ever run afoul of deployment problems
and 1.1.8-to-1.2.x compatibility issues when using Sun or MS JDKs.
I would not consider bugs in a VM a valid argument against semantics over
implementation.
As for different behaviours, that is agreeing with my argument for D to be
defined by its semantics and not its implementation.
Isn't the fact that Java, Sun's flagship for cross-platform compatibility,
is not quite what it seems a good reason to make D defined by its semantics,
and show developers that they can have a language that will perform the same
on any supported platform (within the limits of the platform; I can't see an
MP3 player running on a GBA working at full speed)?
I find it odd that different VMs caused problems; for a Java VM to be
allowed to use the Java name it must pass Sun's TCK (a huge Java test
suite):
http://developer.java.sun.com/developer/technicalArticles/JCPtools/
and, not wanting Sun's lawyers to come knocking on my door, I'll just say
it's not perfect :) I have heard many complaints that the Java VM spec was
written FROM the source code and not the other way round; the bytecode
verifier spec reads exactly as if it is a write-up of someone's code.

> > this is also true for desktop systems, the PS2 has Linux, will D ever
> > run on that or Cobalt Qubes, Netwinders, PowerMacs or Sun Sparc boxes?
>
> D running on those systems is gated by there being a D compiler for them,
> just as Java is gated by there being a Java VM on them. The Java VM itself
> requires a lot of work to port. Once D is married to the GCC code
> generator, it should be far easier to support a new system for D than for
> Java.
So D will not offer any better cross-platform support than C or C++?

>
> > I believe that the "cost" of imposing strict semantics is outweighed by
> > the benefits.
>
> For many purposes, yes, but D will have bendable semantics so that it can
> be efficiently implemented on a wide variety of machines. But not quite as
> bendable as C is (no one's complement!).
>

I find it interesting that you oppose D having strict, well-defined
semantics, and yet in the D overview you say it is aimed at "numerical
programmers" and implement lots of floating-point NaN and infinity
behaviours, but are unwilling to fix float and double to defined standards.
Are these behaviours part of the "bendability"?

I've been reading and rereading the D overview to try to gain an
understanding of why you might oppose semantics over implementation, and
apart from low-down-and-dirty programming, which is not that affected (you
do need an integer that can hold a pointer), I see nothing there; in fact,
much of what I read says to me D will be defined by its semantics.
(I assume it's a bit out of date, as it also says all objects are on the
heap, but I thought RAII put objects onto the stack?)

I believe that if you asked the programmers who fit into the "who is D for" list, 95% would prefer D to be the same D on every supported platform, and to be freed from the implementation, even at the expense of some platforms being harder to support than others.