February 28, 2005 Re: Method resolution sucks
Posted in reply to Anders F Björklund

"Anders F Björklund" <afb@algonet.se> wrote in message news:cvv5jq$2m2o$1@digitaldaemon.com...
> Like Walter said: "a string literal begins life with no type", but integer literals (without L) still begin life being of the type: int.

That's right. It's a bit subtle, but a key difference.
February 28, 2005 Re: Method resolution sucks
Posted in reply to Matthew

"Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cvv05b$2gnu$2@digitaldaemon.com...
> But the example itself is bogus. Let's look at it again, with a bit more flesh:
>
>     byte b;
>     b = 255;
>     b = b + 1;
>
> Hmmm .... do we still want that implicit behaviour? I think not!!

Yes, we do want it. Consider the following:

    byte b;
    int a, c;
    ...
    a = b + 1 + c;

Do you really want the subexpression (b + 1) to "wrap around" on byte overflow? No. The notion of the default integral promotions is *deeply* rooted in the C psyche. Breaking this would mean that quite a lot of complicated, debugged, working C expressions will subtly break if transliterated into D. People routinely use the shorter integral types to save memory, and the expressions using them *expect* them to be promoted to int. D will just get thrown out the window by any programmer having to write:

    a = cast(int)b + 1 + c;

The default integral promotion rules are what make the plethora of integral types in C (and D) usable.
February 28, 2005 Re: Method resolution sucks
Posted in reply to Matthew

"Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cvv05a$2gnu$1@digitaldaemon.com...
> Sorry, but that's only required if all integral expressions must be promoted to (at least) int.

Right.

> Who says that's the one true path?

Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.

> Why is int the level at which it's drawn?

Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different from how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.) CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.

Note that when the initial C standard was being drawn up, there was an unfortunate reality: there were two main branches of default integral promotions, the "sign preserving" and "value preserving" camps. They were different in some edge cases. One way had to be picked, the newly wrong compilers had to be fixed, and some old code would break. There was a lot of wailing and gnashing and a minor earthquake about it, but everyone realized it had to be done. That was a *minor* change compared to throwing out default integral promotions.

> Why can there not be a bit more contextual information applied? Is there no smarter way of dealing with it?

Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.

D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard not to subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is.
February 28, 2005 Re: Method resolution sucks
Posted in reply to Georg Wrede

"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:4222F26F.8070509@nospam.org...
> > I'm pretty familiar with warnings. If a compiler has 15 different warnings, then it is compiling 15! (15 factorial) different languages. Warnings tend
>
> Assuming 15 different on-offable compiler warning switches, one would tend to think that it should be 2^15 different languages? Or did I miss something?

It's factorial. Using 2^15 creates duplicates, since the order of the switches doesn't matter.
February 28, 2005 C++ vs D for performance programming
Posted in reply to Matthew

There's a lot there, and I just want to respond to a couple of points.

1) You ask what's the bug risk with having a lot of casts. The point of a cast is to *escape* the typing rules of the language. Consider the extreme case where the compiler inserted a cast wherever there was any type mismatch. Typechecking would pretty much go out the window.

2) You said you use C++ every time you want speed. That implies you feel that C++ is inherently more efficient than D (or anything else). That isn't my experience with D compared with C++. D is faster. DMDScript in D is faster than the C++ version. The string program in D (www.digitalmars.com/d/cppstrings.html) is much faster in D. The dhrystone benchmark is faster in D.

And this is using an optimizer and back end that is designed to efficiently optimize C/C++ code, not D code. What can be done with a D optimizer hasn't even been explored yet.

There's a lot of conventional wisdom that since C++ offers low level control, C++ code therefore executes more efficiently. The emperor has no clothes, Matthew! I can offer detailed explanations of why this is so if you like.

I challenge you to take the C++ program in www.digitalmars.com/d/cppstrings.html and make it faster than the D version. Use any C++ technique, hack, or trick you need to.
February 28, 2005 Re: Method resolution sucks
Posted in reply to Walter

Just a comment from the sidelines - feel free to ignore.

To me it doesn't look like Walter is in any hurry to flag narrowing casts as warnings (or to add any warnings to D at all). And it doesn't look like everyone else is willing to give up without a fight. Why not use this as an opportunity to get started on a D lint program? There are enough skilled people with a stake in this particular issue alone to make it worthwhile.

Walter - how hard, in your opinion, is it to add lint-like checking to the GPL D frontend? Can it be done in a painless way so that future updates to the frontend have low impact on the lint code?

Brad
February 28, 2005 Re: Method resolution sucks
Posted in reply to Derek

"Derek" <derek@psych.ward> wrote in message news:qmpq5b84012d$.1un5lj704bsoa$.dlg@40tude.net...
> And you are still absolutely positive that 12 is only an int?

Yes.

> Could it not have been a uint? If not, why not?

It needs a type assigned at the bottom level, so it gets an "int". The type inference semantics in C, C++ and D are "bottom up".

> And yet, if the compiler had chosen to assume 12 was a uint, then there would have been an exact match.

Right. But there you're arguing for "top down" type inference, which is something very, very different from how C, C++ and D handle expressions.

> However, isn't 'inout int' very much similar in nature to the C++ '*int' pointer concept.

It's more akin to the C++ "int&" concept.

> And our beloved C++ would notice that an 'int' is very much different to a '*int', so why should D see no difference between 'inout int' and 'in int'.

C++ overloading rules with int and int& are complicated and a rich source of bugs with overloading and template partial specialization. Even experts have trouble with it, as Matthew and I were working on just that a couple of days ago. (The problems come from "when are they the same, and when are they different.")

> But aside from that, what would be the consequence of having the compiler take into consideration the storage classes of the signature? And this is a serious question from someone who is trying to find some answers to that very question. It is not a rhetorical one, nor is it trolling.

It's a very good question. Consider the following:

    void foo(int i);
    void foo(inout int i);

Is it really a good idea to pick different overloads based on whether the argument is a literal or a variable, when they are the *same* types?
February 28, 2005 Re: Method resolution sucks
Posted in reply to Walter

"Walter" <newshound@digitalmars.com> wrote in message news:cvvp0e$b66$1@digitaldaemon.com...
> "Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cvv05a$2gnu$1@digitaldaemon.com...
>> Sorry, but that's only required if all integral expressions must be promoted to (at least) int.
>
> Right.
>
>> Who says that's the one true path?
>
> Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
>
>> Why is int the level at which it's drawn?
>
> Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different from how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.) CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.
>
> Note that when the initial C standard was being drawn up, there was an unfortunate reality that there were two main branches of default integral promotions - the "sign preserving" and "value preserving" camps. They were different in some edge cases. One way had to be picked, the newly wrong compilers had to be fixed, and some old code would break. There was a lot of wailing and gnashing and a minor earthquake about it, but everyone realized it had to be done. That was a *minor* change compared to throwing out default integral promotions.
>
>> Why can there not be a bit more contextual information applied? Is there no smarter way of dealing with it?
>
> Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.
>
> D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard not to subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is.

Ok, I'm sold on integral promotion. Then the answer is that we must have narrowing warnings. I can't see a sensible alternative. Sorry.
February 28, 2005 Re: Method resolution sucks
Posted in reply to Georg Wrede

"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42231636.8090805@nospam.org...
> Give a toy, and they're happy for the day. But what to do to get them happy _for the rest of their lives_?

Ultimately, they decide for themselves if they're going to be happy or not. You cannot cause them to be happy. Probably the best you can do is help them realize this, and that if they expect that things or other people will make them happy, they'll be disappointed.
February 28, 2005 Re: C++ vs D for performance programming - furphy!
Posted in reply to Walter

> There's a lot there, and I just want to respond to a couple of points.

Yes. I was faced with a choice: do my final STLSoft tests on Linux, or prate on to an unwilling audience. ...

> 1) You ask what's the bug risk with having a lot of casts. The point of a cast is to *escape* the typing rules of the language. Consider the extreme case where the compiler inserted a cast wherever there was any type mismatch. Typechecking pretty much will just go out the window.

This point is lost on me. Why would the compiler insert casts at every mismatch? Do you mean the user? AFAICS, the compiler already is inserting casts wherever there is an integral type mismatch.

> 2) You said you use C++ every time you want speed. That implies you feel that C++ is inherently more efficient than D (or anything else). That isn't my experience with D compared with C++. D is faster. DMDScript in D is faster than the C++ version. The string program in D www.digitalmars.com/d/cppstrings.html is much faster in D. The dhrystone benchmark is faster in D.
>
> And this is using an optimizer and back end that is designed to efficiently optimize C/C++ code, not D code. What can be done with a D optimizer hasn't even been explored yet.
>
> There's a lot of conventional wisdom that since C++ offers low level control, that therefore C++ code executes more efficiently. The emperor has no clothes, Matthew! I can offer detailed explanations of why this is true if you like.
>
> I challenge you to take the C++ program in www.digitalmars.com/d/cppstrings.html and make it faster than the D version. Use any C++ technique, hack, trick, you need to.

Not so. I choose C/C++ for speed over established languages - Ruby/Python/Java/.NET - for the eminently sensible and persuasive reasons given. (One day I'll take up that challenge, btw. <g>) The reasons I choose C++ over D are nothing to do with speed.
That was kind of the point of my whole rant. It concerns me a little that you respond relatively voluminously to the perceived slight on speed, but not to my concerns about D's usability and robustness.

I believe that software engineering has a pretty much linear order of concerns:

Correctness - does the chosen language/libraries/techniques promote software that actually does the right thing?
Robustness - does the language/libraries/techniques support writing software that can cope with erroneous environmental conditions _and_ with design violations (CP)?
Efficiency - does the language/libraries/techniques support writing software that can perform with all possible speed (in circumstances where that's a factor)?
Maintainability - does the language/libraries/techniques support writing software that is maintainable?
Reusability - does the language/libraries/techniques support writing software that may be reused?

As I said, I think that, in most cases, these concerns are in descending order as shown. In other words, Correctness is more important than Robustness, and Maintainability is more important than Reusability. Sometimes Efficiency moves about, e.g. it might swap places with Maintainability.

Given that list, correct use of C++ scores (out of five, since this is entirely subjective and not intended to be a precise scale):

Correctness - 4/5: very good, but not perfect; you have to know what you're doing
Robustness - 3/5: good, but could be much better
Efficiency - 4/5: very good, but not perfect
Maintainability - 2/5: yes, but it's hard yacka; if you're disciplined, you can get 4/5, but man! that's some rare discipline
Reusability - 2/5: pretty ordinary; takes a lot of effort/skill/experience/luck

I also think there's an ancillary trait, of importance purely to the programmer:

Challenge/Expressiveness/Enjoyment - 4/5

I'd say Ruby scores:

Correctness - 3/5: it's a scripting language, so you need to be able to test all your cases!!
Robustness - 2/5: hard to ensure you're writing correctly; hard to have asserts
Efficiency - 2/5: scripting language
Maintainability - 4/5: pretty easy
Reusability - 4/5: if you write your stuff in modules, it's really quite nice
Challenge/Expressiveness/Enjoyment - 5/5

I'd say Python scores:

Correctness - 4/5
Robustness - 3/5
Efficiency - 2/5
Maintainability - 4/5
Reusability - 5/5
Challenge/Expressiveness/Enjoyment - 2/5

I'd say Java/.NET score:

Correctness - 4/5: it might be like pulling teeth to use them, but you _can_ write very robust software in them
Robustness - 3.5/5
Efficiency - 3/5
Maintainability - 4/5: pretty easy
Reusability - ~5/5: whatever one might think of these horrid things, they have damn impressive libraries
Challenge/Expressiveness/Enjoyment - 0/5

I'd be very interested to hear what people think of D. IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems), but it scores pretty poorly on correctness. Since Correctness is the sine qua non of software - there's precisely zero use for a piece of incorrect software to a client; ask 'em! - it doesn't matter if D programs perform 100x faster than C/C++ programs on all possible architectures. If the language promotes the writing of buggy software, I ain't gonna use it.

D probably scores really highly on Robustness, with unittests, exceptions, etc. But it's pretty damn unmaintainable when we've got things like the 'length' pseudo-keyword, and the switch-default/return thing. I demonstrated only a couple of days ago how easy it was to introduce broken switch/case code with nary a peep from the compiler. All we got from Walter was "I don't want to be nagged when I'm inserting debugging code". That's just never gonna fly in commercial development.

As for reusability, I know it aims to be very good, and I would say from personal experience that it's quite good.
But I've thus far only done small amounts of reuse, using primarily function APIs. I know others have experienced problems with name resolution, and then there's the whole dynamic class loading stuff. However optimistically we might view it, it's not going to be anywhere near the level of Java/.NET/Python, but it absolutely must, and can, be better than C++.

So, I'd give two scores for D. What I think it is now:

Correctness - 2/5
Robustness - 3/5
Efficiency - ~5/5
Maintainability - 1/5
Reusability - 2/5
Challenge/Expressiveness/Enjoyment - 3/5

What I think it can be:

Correctness - 4/5
Robustness - 4/5
Efficiency - ~5/5
Maintainability - 3/5
Reusability - 4/5
Challenge/Expressiveness/Enjoyment - 4/5

What do others think, both "is" and "could be"?

Anyway, that really is going to be my last word on the subject. Unless and until D crawls out of the bottom half in Correctness and Maintainability, I just don't see a future for it as a commercial language. (And as I've said before, I'm _not_ going to stop barracking for that, because I *really* want to use it for such. The pull of the potential of DTL is strong, Luke ...)

I'd be interested to hear what others think on all facets of this issue, but I'm particularly keen to hear people's views on how much D presents as a realistic choice for large-scale, commercial developments.

Thoughts?
Copyright © 1999-2021 by the D Language Foundation