February 28, 2005
Walter wrote:

> Kris mentions slicing, that's just part of it. Slicing, *coupled with
> garbage collection* makes for a big win. Not having to keep track of
> ownership of memory means one can be much more flexible in designing and
> manipulating data structures. The need for copy constructors, move
> constructors, overloaded assignment operators, pretty much goes away. So not
> only is the code easier to write, there's much less that has to be executed.
> Ergo, faster.

Yeah, if only Mango could even be compiled on Mac OS X using GDC, that is...

I'm sure the car's speedy, if only she would start...

> I'll go out on a limb and say that there's no way Matthew is going to be
> able to beat D in the wc benchmark without writing a very ugly, kludgy piece
> of C++ hackery.

Or perhaps by using vector instructions or just different compilers :-)

I don't like C++ much, but last I checked you could still use it for C.

--anders
February 28, 2005

Matthew wrote:
> "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:422311B7.3010701@nospam.org...
> 
>>
>>Matthew wrote:
>>
>>>But the example itself is bogus. Let's look at it again, with a bit more flesh:
>>>
>>>    byte b;
>>>
>>>    b = 255
>>>
>>>    b = b + 1;
>>>
>>>Hmmm .... do we still want that implicit behaviour? I think not!!
>>
>>int big;
>>big = A_BIG_LITERAL_VALUE;
>>big = big + 1; // or any value at all, actually.
>>
>>Where's the difference?
> 
> 
> In principle, none.
> 
> In practice:
>     - the larger a type, the less likely it is to overflow with expressions involving literals
>     - the larger a type, the fewer types there are larger than it
>     - there has to be some line drawn.

So we sweep it under the rug.

>>Heck, any integral type might overflow.
> True. There are limits to all principles, and imperfections to all notions. However, saying all integral=>integral conversions are *by definition* valid is not the pragmatic answer to our theoretical limitations. It's ostrich stuff.

I was suggesting the opposite.

>>(What I'd love is a switch for non-release code to check for overflow!)
> 
> Why non-release?

For practical reasons, and for speed.
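
For what it's worth, here is a minimal C++ sketch (mine, purely illustrative) of the behaviour being argued about, plus the kind of check such a non-release switch might insert for us; the helper name and the switch itself are hypothetical:

    #include <cassert>
    #include <climits>
    #include <cstdio>

    // Debug-only narrowing helper: roughly what a hypothetical
    // "check overflow in non-release builds" switch might insert for us.
    static signed char narrow_to_byte(int value)
    {
        assert(value >= SCHAR_MIN && value <= SCHAR_MAX); // compiled out when NDEBUG is set
        return static_cast<signed char>(value);
    }

    int main()
    {
        signed char b = 127;        // stand-in for D's byte, at its maximum
        b = b + 1;                  // b is promoted to int, 128 is computed, then
                                    // silently narrowed back; on common targets
                                    // b is now -128
        std::printf("unchecked: b = %d\n", b);

        b = 127;
        b = narrow_to_byte(b + 1);  // a debug build aborts here instead of wrapping
        return 0;
    }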
March 01, 2005
"Kris" <Kris_member@pathlink.com> wrote in message news:cvvuhc$hqd$1@digitaldaemon.com...
> In article <cvvp0e$b66$1@digitaldaemon.com>, Walter says...
> >CPUs are designed to execute C semantics efficiently. That pretty much
> >nails down accepting C's operator precedence and default integral
> >promotion rules as is.
> Misleading and invalid.

Correct, but that's because much of the context is omitted. I wrote:

"D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard to not subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is."

The second sentence just adds to the first, it does not stand alone. Where D does break from C/C++ is in things that look different, or that will issue error messages if the C syntax is tried.
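
For readers who have not looked at this in a while, a small C++ illustration (my own example, not taken from any spec) of the promotion and precedence rules in question:

    #include <cstdio>

    int main()
    {
        signed char a = 100, b = 100;

        // Default integral promotion: both operands are widened to int before
        // the addition, so the expression itself is computed at int width.
        int sum = a + b;            // 200, no surprise
        std::printf("a + b into int  = %d\n", sum);

        // Trouble only appears when the int result is narrowed again.
        signed char back = a + b;   // 200 does not fit; typically ends up as -56
        std::printf("a + b into char = %d\n", back);

        // C's precedence, also inherited: + binds tighter than <<.
        std::printf("1 << 2 + 1      = %d\n", 1 << 2 + 1);  // 1 << 3 == 8, not 5
        return 0;
    }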

Many older CPUs had special instructions in them that catered to Pascal compilers. With the decline of Pascal and the rise of C, there was a change to support C better and drop Pascal. No CPU designer expects to stay in business if their new design does not execute compiled C efficiently. Intel added special instructions to support C integral promotion rules, and made darn sure they executed efficiently. If you carefully read their optimization manuals, they don't give much attention to the performance of integer operations that are not normally generated by C compilers.

Changes in compiler code generation techniques led to a complete redesign of the FPU instructions.

Even Java was forced to change its semantics to allow for the way Intel FPUs operate. And if you follow Java, you'll know how resistant Sun is to making any changes in the VM's semantics; it has to be a huge issue.

Would that D were big enough in the market to dictate CPU design, but it isn't. C is, but even Java isn't. As such, it just makes sense to work with what works efficiently. I honestly believe that if D gets a reputation of being less efficient than C/C++, it will be dead in the marketplace. Look at all the drubbing Java gets over this issue, and the enormous resources Sun/IBM/etc. pour into trying to fix it.

I believe Ada does what you wish (been a long time since I looked at it), but it just isn't very successful.


March 01, 2005
On Tue, 1 Mar 2005 07:06:26 +1100, Matthew wrote:


[snip]

> 
> The reasons I choose C++ over D are nothing to do with speed. That was kind of the point of my whole rant. It concerns me a little that you respond relatively voluminously to the perceived slight on speed, but not to my concerns about D's usability and robustness.
> 
> I believe that software engineering has a pretty much linear order of concerns:
> 
>     Correctness     - does the chosen language/libraries/techniques
> help the software actually do the right thing?
>     Robustness      - does the language/libraries/techniques support
> writing software that can cope with erroneous environmental conditions
> _and_ with design violations (CP)?
>     Efficiency      - does the language/libraries/techniques support
> writing software that can perform with all possible speed (in
> circumstances where that's a factor)?
>     Maintainability - does the language/libraries/techniques support
> writing software that is maintainable?
>     Reusability     - does the language/libraries/techniques support
> writing software that may be reused?
> 
> As I said, I think that, in most cases, these concerns are in descending order as shown.
> 
> In other words, Correctness is more important than Robustness. Maintainability is more important than Reusability. Sometimes Efficiency moves about, e.g. it might swap places with Maintainability.

If I may put this from a slightly different point of view, though still from the perspective of a commercial development company (e.g. the one I work for): basically it boils down to the cost of an application over its lifetime.

(1) Cost to purchaser.
This includes 'Correctness', 'Efficiency', 'Maintainability', ...
A correctly working (100%) program is usually more important than one which
runs as fast as possible. To have a really fast program that produces
rubbish is pretty costly to the purchaser. However, having a program that
is mostly correct (95+%) with a trade-off for speed is also often an
acceptable cost to the purchaser. Having a program that is expensive to
upgrade or fix is not a cost that purchasers appreciate.

(2) Cost to developer.
This includes 'Maintainability', 'Reusability', 'Robustness', 'Fun', ...
Having program source that is legible (humans can read it without effort)
is a godsend. Having a program that is cheap to maintain and design is a
blessing. The initial coding effort is tiny when compared to the ongoing
maintenance coding effort. So lots of time needs to be spent on design
(getting it right the first time) and on quality control. The fun factor
influences the cost of training and retaining people. If the source coding
effort is boring or too hard, you will lose people regardless of how good
the quality of the program is, or how fast it runs. And a boring code base
will have more mistakes than an exciting and interesting one.

> Given that list, correct use of C++ scores (I'm using scores out of five since this is entirely subjective and not intended to be a precise scale)

I don't know C++, Java, Ruby, Python, etc... well enough to comment, so I'll limit myself to generalities.

Interpreted, dynamically typed languages are cheap for both purchasers and developers. An initial release of a program may not be 100% correct, but the cost of repairs is reasonable. The speed of almost all programs is not an issue! This is because most programs in commercial environments hit other bottlenecks long before code efficiency comes into play. So long as most GUI apps can keep up with the user's keystroke rate, nobody complains. The speed of database systems, networks, and printers is far more likely to be the source of irritation long before the code is worked hard.

Of course, there are some applications that do need to be AFAP, such as a bank's overnight interest calculation and accruals. And these few apps (less than 10% of all the apps) can be targeted for efficient design and coding.

Compiled, statically typed languages are much more expensive to develop and thus to purchase. However, their forte is applications that *must* be fast to run. These are typically non-interactive apps.

So, generally speaking, there is a valid place in the world for both types
(and other types too (assembler, hybrids, ...) ).

With respect to C++ vs D performance, I don't really care. With respect to their comparative costs, I do care, and it would seem that D can be a winner there.

[snip]
> 
> I'd be interested to hear what others think on all facets of this issue, but I'm particularly keen to hear people's views on how much D presents as a realistic choice for large-scale, commercial developments.
> 
> Thoughts?

D appears to have a *bloody huge* potential to excel in reducing costs to both purchasers and developers. Currently, however, it can't be used to produce cheaper commercial software because it's still a Work-In-Progress, and has many areas that cry out for tidying up. My concern is not whether or not it can be tidied, but how long it will be before that happens. Using the current method of improving D, I can't see a commercially prudent D (or DMD) for some years to come. Either Walter needs to get some serious extra manpower (not a sexist term BTW), or give up control of D. Neither of which I can see happening anytime soon. So this saddens me, but not enough to jump ship yet.

-- 
Derek
Melbourne, Australia
1/03/2005 10:30:59 AM
March 01, 2005
"MicroWizard" <MicroWizard_member@pathlink.com> wrote in message news:d00all$11ak$1@digitaldaemon.com...
> It seems to me a joke...
>
>>I just said that if a program is not correct, its performance is absolutely irrelevant.
>
> Triviality.

Trivial, and obvious. And yet something you disagreed on in your previous post.

Or am I misreading you?

>
>>I have worked on several projects over the last decade where
>>(practical)
>>correctness was of paramount importance. They also had exacting
>>performance requirements. They were written in C/C++. They've all
>>worked
>>fine for months/years without a single failure once they've gone into
>>production. Had they performed well but only been roughly correct,
>>there'd've been losses of $Ms, and the clients wouldn't have cared a
>>toss that the performance was good.
>
> And I have worked on several banking, database, stock-exchange
> data-providing, industrial automation, bookkeeping, bla-bla-bla projects
> (as a programmer and also as a project manager).
> C/C++ was never the _correct_ language, but the "we should try it because
> it is extremely fast" language. The development is painful, few (average)
> programmers understand what it does, and it is easy to make bugs...
> What was correct: Basic, MUMPS, Turbo Pascal, Clipper, FoxPro, LINC,
> DOS batch, nowadays PHP, Java... they did what we wrote (generally :-)
>
> This conversation reminds me of an old Hungarian TV advertisement,
> which is almost a joke. Two small children sit in the sand on a
> playground. They are bragging/talking about what kind of insurance
> their fathers have...
> (Like Tom and the new guy in Mark Twain's Tom Sawyer)
>
> If it hurts you, please forgive me.

Nah, you'll have to try harder. ;)

But I actually don't get your point. The debate has involved, to a significant degree, discussion of the flaws of C/C++, and the related trade-offs of correctness and performance. That's why I mentioned my work. Maybe we work with different kinds of people?

I would say that the worst project I ever worked on - as a Software Quality Manager - involved Java technologies. The programmers there had so little skill - maybe a trait attendant with some Java projects, but I couldn't generalise - that it was an unmitigated disaster. Never experienced anything like it before or since.



March 01, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d008rv$v8k$1@digitaldaemon.com...
> Maybe they don't care about it (which is what I presume you're getting at)?

I'd consider that the most likely.

> Other compiler vendors do.

I just tried it with Comeau with "strict mode" http://www.comeaucomputing.com/tryitout/. No warnings.

> I compile my libraries with various versions of ~10 compiler vendors, including GCC. Every single compiler, even Watcom, has provided me with a warning that I have found to be useful that the others have missed. It irks me that I need to go to such lengths, but I value the facility of doing so.

I'd be curious of the results of that snippet (and the other one I posted) run through each. It'd be fun to see the results. Every compiler has its own personality, and the warnings reflect the particular biases of its developer. Warnings are not part of the language standard, and there are no standard warnings, despite some attempts to put them in.

> Your answer to this tiresome activity of the poor unpaid library writer is to just not worry about it. Mine is to pray for (several) standards-violating D compilers. I think both attitudes are regrettable, and I refuse to believe there's not a sensible, and largely agreeable, answer. If there isn't, then one would have to wonder whether that represented an underlying fundamental problem with D.

Why would it be a problem with D and not with the other languages? Anyone implementing a D compiler is free to add warnings to it for whatever they feel is justifiable, just like they are with C++.


March 01, 2005
Using the Arturius Compiler Multiplexer (coming to a download site near you sometime this year ... don't hold your breath):


H:\Dev\Test\compiler\D_warnings>arcc.debug
D_warnings.cpp -c --warning-level=maximum --announce-tools
arcc: Intel C/C++
Tool: bcc/5.6
D_warnings.cpp(5): Warning W8071 : Conversion may lose significant
digits in function foo(char)
D_warnings.cpp(6): Warning W8071 : Conversion may lose significant
digits in function foo(char)
Tool: cw/7
### mwcc.exe Compiler:
#    File: D_warnings.cpp
# -----------------------
#       3:  {
# Warning:  ^
#   function has no prototype
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:           ^
#   implicit arithmetic conversion from 'int' to 'char'
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:            ^
#   result of function call is not used
### mwcc.exe Compiler:
#       6:      c = c + 1;
# Warning:               ^
#   implicit arithmetic conversion from 'int' to 'char'
Tool: cw/8
### mwcc.exe Compiler:
#    File: D_warnings.cpp
# -----------------------
#       3:  {
# Warning:  ^
#   function has no prototype
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:           ^
#   implicit arithmetic conversion from 'int' to 'char'
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:            ^
#   result of function call is not used
### mwcc.exe Compiler:
#       6:      c = c + 1;
# Warning:               ^
#   implicit arithmetic conversion from 'int' to 'char'
Tool: dm/8.40
Tool: dm/beta-sgi
Tool: dm/beta-stlport
Tool: gcc/2.9.5
Tool: gcc/3.2
Tool: gcc/3.3
Tool: gcc/3.4
Tool: icl/6
Tool: icl/7
D_warnings.cpp(2): remark #1418: external definition with no prior
declaration
Tool: icl/8
D_warnings.cpp(2): remark #1418: external definition with no prior
declaration
Tool: ow/1.2
[OW;Ruby]: D_warnings.cpp(5): Warning! W716: col(9) integral value may
be truncated
D_warnings.cpp(5): Warning! W716: col(9) integral value may be truncated
[OW;Ruby]: D_warnings.cpp(6): Warning! W716: col(11) integral value may
be truncated
D_warnings.cpp(6): Warning! W716: col(11) integral value may be
truncated
[OW;Ruby]: Error: Compiler returned a bad status compiling
'D_warnings.cpp'
Tool: vc/2
D_warnings.cpp(5): warning C4244: 'function' : conversion from 'int' to
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'const int' to
'char', possible loss of data
Tool: vc/4.2
D_warnings.cpp(5): warning C4244: 'function' : conversion from 'int' to
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'const int' to
'char', possible loss of data
Tool: vc/5
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'int' to 'char',
possible loss of data
Tool: vc/6
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'int' to 'char',
possible loss of data
Tool: vc/7
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo'
: recursive on all control paths, function will cause runtime stack
overflow
Tool: vc/7.1
D_warnings.cpp(5): warning C4242: 'argument' : conversion from 'int' to
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo'
: recursive on all control paths, function will cause runtime stack
overflow
Tool: vc/8
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo'
: recursive on all control paths, function will cause runtime stack
overflow

H:\Dev\Test\compiler\D_warnings>
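
For anyone without the test file to hand, a C++ snippet of roughly the shape those line numbers and messages imply (a reconstruction on my part, so treat it as an approximation rather than the actual file) would be:

    // Hypothetical stand-in for the test file, inferred from the warnings
    // above -- the real D_warnings.cpp may well differ. It is meant to be
    // compiled (with -c) for its diagnostics, not linked or run.
    char foo(char c)        // external definition with no prior declaration
    {
        int i = 1000;
        foo(i);             // int -> char conversion; result of the call unused
        c = c + 1;          // c + 1 is computed as int, then narrowed back to char
        return foo(c);      // recursive on all control paths
    }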


"Walter" <newshound@digitalmars.com> wrote in message news:d00cg1$13em$1@digitaldaemon.com...
> [snip]


March 01, 2005
In article <d006j3$s40$1@digitaldaemon.com>, Walter says... [snip]
>I disagree strongly on this. Unless you have something else in mind I'm forgetting, in your posts you've focussed in on two or three issues and have assigned major importance to them. None of them, even if they go awry in the manner you predict, will cause the kinds of erratic, random, awful bugs that C++ holes mentioned above will and do cause. All of them, as I've argued (and you and Kris disagree, fair enough), will open the door to other kinds of correctness bugs if they are fixed per your suggestions.


Suggestions? Fixes? What?

How can one get to the point of remedy when one cannot get you to even tentatively admit there might /actually/ be a problem to begin with? That has to happen before one can even begin to weigh the odds. You know? That thing about getting the Moose onto the Table? This is what's so infuriating. It's called denial, or stonewalling.

I've tried all kinds of ways to elicit some real honesty about the current design, over almost a year, yet you've been consistent in denying the very possibility of an issue (not to mention the measurable quantities of subtle misinformation, which is yet more frustrating). How can you possibly say that I, or anyone else, have managed to even begin to offer a resolution when you simply deny, deny, and deny again? And besides, the point is not necessarily for us to offer resolutions, but hopefully to encourage you to think of one.

Where's the forthright interaction here? I've yet to witness a discourse upon a serious topic with you where it did not feel like I'm corresponding with the ex-Iraqi Information Minister ~ like there's a whole lot of spin and very little truth. I see you using the same approach with certain others, but the vast majority are spared.

Is this in line with your expectations?

This is what troubles me far more than anything, manifest or not, within the D language itself. I mean, it's just another computer language ~ take it or leave it. What I find disturbing is that neither I, nor anyone else, can make a /serious/ critique of the language without a corresponding barrage of denial and something resembling marketing propaganda.

Where's the constructive discourse?

Naturally, you might find that my approach is not to your liking. Perhaps I'm overly brusque for your taste. Perhaps I just get your back up.

Since you've made a point about not replying to my last several posts, if you choose to not answer this one I'll understand the message loud and clear.

- Kris


March 01, 2005
Thanks. It is interesting, and somewhat surprising. I'm a bit bemused by the one about no prototype for foo() <g>.


March 01, 2005

Matthew wrote:
>>short=short+5; // OK - 5 fits into short
> 
> 
> Not necessarily
> 
> 
>>int=byte+1+int; // OK, this is the above example
> 
> 
> Not necessarily

And your point is? That + should be forbidden, because the result may overflow?

If int=int+int+int is deemed completely acceptable (or do you not agree even with this?), I don't see how int=byte+5+int can be unacceptable.
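
To put that concretely: the full-width expression can overflow just as readily; it merely does so without a narrowing step to point a finger at. A quick C++ sketch (mine, for illustration only):

    #include <climits>
    #include <cstdio>

    int main()
    {
        // int = int + int is accepted everywhere, yet it can overflow too.
        // Signed overflow is undefined behaviour in C++, so the well-defined
        // unsigned case is shown; the failure mode is the same silent wrap.
        unsigned int big = UINT_MAX;
        big = big + 1;                      // silently wraps to 0
        std::printf("UINT_MAX + 1   = %u\n", big);

        // int = byte + 5 + int: the byte-sized operand is promoted to int
        // before any arithmetic, so this is exactly the int arithmetic above,
        // just starting from smaller values.
        signed char b = 100;
        int other = 1;
        int result = b + 5 + other;         // 106, computed entirely at int width
        std::printf("byte + 5 + int = %d\n", result);
        return 0;
    }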


xs0