November 14, 2018
On 11/14/2018 9:22 AM, kinke wrote:
> I surely missed some other cool features.

Unicode!

Functional programming support

Arrays of types

No preprocessor
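
Of these, "arrays of types" is probably the least self-explanatory: D can treat a list of types as a first-class compile-time entity. A minimal sketch using Phobos' std.meta.AliasSeq (the IntTypes alias is just for illustration):

  import std.meta : AliasSeq;
  import std.stdio : writefln;

  alias IntTypes = AliasSeq!(byte, short, int, long);

  void main()
  {
      // foreach over a type list is unrolled at compile time,
      // with T bound to each type in turn.
      foreach (T; IntTypes)
          writefln("%s is %s bytes", T.stringof, T.sizeof);
  }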
November 14, 2018
On Wednesday, 14 November 2018 at 15:07:46 UTC, lagfra wrote:
> TL;DR: what will D offer with respect to C++ when almost all key features of D are present in C++20(+)?

1. I personally love D's really clean syntax, semantics, and standard library. I also love most of the type system, especially that almost all types have clearly defined sizes. D's overall great design is the reason why I picked it. I don't think it's possible for C++ to ever attain that, as doing so would practically mean making a whole new language.

2. One thing that I don't think gets enough love in D is the fact that _everything_ is initialized with a default value (unless you ask it not to be). It's frustrating how much time I've wasted debugging code in C++ because I forgot to initialize one variable.
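
A minimal sketch of what that looks like in practice; each type's .init value is deliberately chosen so that accidental use gets noticed:

  import std.stdio : writeln;

  void main()
  {
      int i;        // 0
      double d;     // nan - deliberately "loud" so misuse is visible
      char c;       // 0xFF - an invalid UTF-8 code unit, also loud
      int* p;       // null
      writeln(i, " ", d, " ", cast(uint) c, " ", p);

      int u = void; // the explicit opt-out: u holds garbage
  }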

3. Unless I missed something, C++ still doesn't have anything like inout, which removes the need to duplicate property functions for const and non-const.
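
A minimal sketch of what inout saves you from writing twice (the Container type here is hypothetical):

  struct Container
  {
      private int[] data;

      // inout transfers the qualifier of `this` to the return type,
      // so one accessor serves mutable, const, and immutable uses.
      inout(int)[] items() inout { return data; }
  }

  void main()
  {
      Container m;
      const Container c;
      static assert(is(typeof(m.items()) == int[]));
      static assert(is(typeof(c.items()) == const(int)[]));
  }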

4. I like Dub. While it could use a few improvements, I find it is still _far_ more usable than make or cmake.

The thing with the features you listed being added to C++ is that most of them are already in D and in the libraries written in D. I remember hearing about modules in C++ last year, and they are still not usable. Having to maintain a separate header file is one of the big reasons (though not the only one) I decided against C++.

Even when these features are available, many code bases won't use them for a while because they want to maintain compatibility. This is especially true on Linux where different distros will have different major versions of GCC and Clang.
November 14, 2018
On 11/14/2018 9:22 AM, kinke wrote:
> * Sane basic types. No need to include a header for the definition of `size_t`; no overlapping distinct basic types (C++ long, wchar_t etc.); no 4 distinct types for a byte (C++ `char`, `signed char`, `unsigned char`, and the planned `utf8_t`).

It's amazing how much time goes into dealing with C++'s fluid notions of the sizes of those types. "unsigned long" is just awful, varying from memory model to memory model, from platform to platform, especially when C++ uses it for name mangling.

I've wasted endless hours on this, and I know what I'm doing. I see others wasting time on it, too. Look at the countless C++ .h files that start with something like:

  #include <stdint.h>

  /* ELF file format */

  typedef uint16_t Elf32_Half;
  typedef uint32_t Elf32_Word;
  typedef int32_t  Elf32_Sword;
  typedef uint32_t Elf32_Addr;
  typedef uint32_t Elf32_Off;
  typedef uint32_t elf_u8_f32;

or those windows.h "WORD" declarations. Or in phobos\etc\c\zlib\zconf.h:

  typedef unsigned char  Byte;  /* 8 bits */
  typedef unsigned int   uInt;  /* 16 bits or more */
  typedef unsigned long  uLong; /* 32 bits or more */

Every C++ project reinvents this.

Just not having to deal with "is `char` signed or unsigned? will that affect my existing code? does my code still work if `int` is 64 bits?" is a big win.
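
A minimal sketch of the D side of the comparison: the sizes below are fixed by the language definition, so no typedef preamble is needed:

  // These hold on every platform, so no <stdint.h>-style typedef
  // block has to be reinvented at the top of each header.
  static assert(byte.sizeof  == 1);  // always signed
  static assert(ubyte.sizeof == 1);  // always unsigned
  static assert(short.sizeof == 2);
  static assert(int.sizeof   == 4);  // 32 bits even on 64-bit targets
  static assert(long.sizeof  == 8);  // always 64 bits
  static assert(char.sizeof  == 1);  // a UTF-8 code unit, always unsigned

  void main() {} // compiling is the test: the asserts run at compile time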
November 14, 2018
On 11/14/2018 7:33 AM, rikki cattermole wrote:
> Really butchered. From what I can see they never mentioned D in any of the documents (kinda glad tbh). Those documents even question what it should be doing...

The C++ community insists they invented ranges independently of D. That's obviously not true, but they say the same about every D feature they've implemented (except static if).
November 14, 2018
On 11/14/2018 7:57 AM, jmh530 wrote:
> It's actually quite a bit more than I remembered:
> 
> https://ericniebler.github.io/std/wg21/D4128.html#iterator-operations-are-primitive

The trouble with the iterator-pair approach, not mentioned in that article, is that it cannot be checked for memory safety. The article counts it as a defect that D ranges can only shrink and cannot grow, but that restriction is fundamental to memory safety.
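
A minimal sketch of why the shrink-only discipline is checkable (consume is a hypothetical function): every step either reads inside the slice or narrows it, so no sequence of range operations can reach past the original bounds, whereas an iterator can be incremented past `end` with no way to detect it:

  void consume(int[] r)
  {
      while (r.length != 0)   // empty
      {
          auto x = r[0];      // front (bounds-checked)
          r = r[1 .. $];      // popFront: the slice only ever shrinks
      }
  }

  void main()
  {
      consume([1, 2, 3]);
  }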
November 14, 2018
On Wed, Nov 14, 2018 at 03:09:33PM -0800, Walter Bright via Digitalmars-d wrote:
> On 11/14/2018 7:33 AM, rikki cattermole wrote:
> > Really butchered. From what I can see they never mentioned D in any of the documents (kinda glad tbh). Those documents even question what it should be doing...
> 
> The C++ community insists they invented ranges independently from D.
[...]

That's ridiculous.  Didn't the guy who wrote the C++ range proposal copy the code example from my article on component programming in D?


T

-- 
The early bird gets the worm. Moral: ewww...
November 14, 2018
On 11/14/2018 10:47 AM, Dukc wrote:
> I doubt the distance is shortening. While C++ does advance, and D isn't moving as fast as it was in 2010 (I think), I still believe C++ isn't the faster evolver of the two. When the next C++ standard comes out, D will have improved too. Examples of what might be there by then:

C++ is adding lots of new features. But the trouble is, the old features remain, and people will still use them, and suffer.

Examples:

1. The preprocessor remains. There has never been a concerted effort to find replacements for it, then deprecate it. It's like allowing horse-drawn carts on the road.

2. Strings are still 0-terminated. This is a performance problem, a memory-consumption problem, and fundamentally memory unsafe.

3. Arrays still decay to pointers, losing all bounds information.

4. `char` is still optionally signed. What a lurking disaster that is.

5. What size is an `int`?
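
For contrast, a minimal sketch of how D handles points 2 and 3: strings and arrays are (pointer, length) slices, so the bounds travel with the data:

  void main()
  {
      int[4] fixed = [1, 2, 3, 4];
      int[] slice = fixed[];   // no decay: the length comes along
      assert(slice.length == 4);

      string s = "hello, world";
      assert(s.length == 12);  // length is stored, not found by scanning

      // slice[4] = 0;         // would fail the run-time bounds check
  }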
November 14, 2018
On Wednesday, November 14, 2018 4:25:07 PM MST Walter Bright via Digitalmars-d wrote:
> On 11/14/2018 10:47 AM, Dukc wrote:
> > I doubt the distance is shortening. While C++ does advance, and D
> > isn't moving as fast as it was in 2010 (I think), I still believe
> > C++ isn't the faster evolver of the two. When the next C++ standard
> > comes out, D will have improved too. Examples of what might be there
> > by then:
> C++ is adding lots of new features. But the trouble is, the old features remain, and people will still use them, and suffer.
>
> Examples:
>
> 1. The preprocessor remains. There has never been a concerted effort to find replacements for it, then deprecate it. It's like allowing horse-drawn carts on the road.
>
> 2. Strings are still 0-terminated. This is a performance problem, a memory-consumption problem, and fundamentally memory unsafe.
>
> 3. Arrays still decay to pointers, losing all bounds information.
>
> 4. `char` is still optionally signed. What a lurking disaster that is.

All of those are definitely problems, and they're not going away - though occasionally, the preprocessor does come in handy (as much as I agree that on the whole it's better that it not be there).

> 5. What size is an `int`?

While I agree with the point that you're trying to make, that particular type actually isn't really a problem on modern systems in my experience, since it's always 32 bits. Maybe with ARM it's a problem (thus far I've only seriously programmed on x86 and x86-64 machines), and certainly, if you have to deal with 16-bit machines, it's a problem, but for most applications on modern systems at this point, int is always 32 bits. It's long that shoots you in the foot, because that still varies from system to system, and as such, I've always considered long to be bad practice in any C++ code base I've worked on. On the better teams that I've worked on, int has been fine when the size of a type really didn't matter, but otherwise, an integer type has been one of the int*_t types. But even then, when you're dealing with something like printf, you're screwed, because it doesn't understand the types with fixed sizes. So, you're constantly fighting the language and libraries.

D's approach of fixing the size of most integer and floating point types is vastly superior, and the problems that we do have there are from the few places where we _didn't_ make them fixed; but since that mostly relates to the size of the memory space, I'm not sure that we really had much choice there. The main outlier is real, though most of the controversy there seems to have to do with arguments about the efficiency of the x87 stuff rather than with differences across systems like you get with the integers.
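
A minimal sketch of the contrast being drawn here: in D the integer sizes are fixed by the language, and writefln is type-aware, so there's no printf-style mismatch between format specifier and operand size:

  import std.stdio : writefln;

  void main()
  {
      long big = 1L << 40;   // long is 64 bits on every platform
      int  n   = 42;         // int is 32 bits on every platform
      // %s formats any argument by its actual (statically known) type,
      // so no PRIu64-style macros are needed.
      writefln("%s %s", big, n);
  }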

- Jonathan M Davis



November 14, 2018
On 11/14/2018 3:22 PM, H. S. Teoh wrote:
> That's ridiculous.  Didn't the guy who wrote the C++ range proposal copy
> the code example from my article on component programming in D?

:-)

November 14, 2018
On Wed, Nov 14, 2018 at 03:25:07PM -0800, Walter Bright via Digitalmars-d wrote: [...]
> C++ is adding lots of new features. But the trouble is, the old features remain, and people will still use them, and suffer.

To be fair, the same could be said of D too.  E.g., will autodecoding ever be deprecated?  I seriously doubt it, even though it's a significant performance problem. And it's default behaviour, so newcomers will suffer.  Just like in C++, it's *possible* to write code that avoids the problems of the old features, but newcomers still use them, because they're still there and are usually what you get when you write code in the most straightforward way (i.e., they're the "default behaviour").
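
For readers unfamiliar with the issue, a minimal sketch of autodecoding in action: the range primitives silently decode a string's UTF-8 code units into dchar code points, and std.utf.byCodeUnit is the opt-out:

  import std.range.primitives : walkLength;
  import std.stdio : writefln;
  import std.utf : byCodeUnit;

  void main()
  {
      string s = "résumé";  // 6 code points, 8 UTF-8 code units
      writefln("%s", s.length);                 // 8: raw code units
      writefln("%s", s.walkLength);             // 6: autodecoded dchars
      writefln("%s", s.byCodeUnit.walkLength);  // 8: decoding opted out
  }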

Will C integer promotion rules ever be replaced?  Probably never. Treating 'bool' as a non-integral type has already been rejected, in spite of multiple cases of counterintuitive behaviour when resolving overloads.  Similarly for char/int interconversions, even though the whole point of having separate char/wchar/dchar types is to *not* treat UTF encoding units as mere numerical values.
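
A minimal sketch of the kind of counterintuitive overload resolution alluded to here, which follows from bool being an integral type:

  import std.stdio : writeln;

  void f(bool b) { writeln("bool"); }
  void f(long l) { writeln("long"); }

  void main()
  {
      // 1 fits in bool by value-range propagation, and bool implicitly
      // converts to long but not vice versa, so f(bool) is the more
      // specialized match. This prints "bool".
      f(1);
  }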

Similarly with many other features and language changes that would have benefitted D, but will probably never happen because of the almost paranoid fear of breaking existing code, which isn't all that different from C++'s backward-compatibility mantra.


> Examples:
> 
> 1. The preprocessor remains. There has never been a concerted effort to find replacements for it, then deprecate it. It's like allowing horse-drawn carts on the road.

Has there been a concerted effort to find a way to migrate existing D codebases away from autodecoding so that it can be eventually removed?


> 2. Strings are still 0-terminated. This is a performance problem, memory consumption problem, and is fundamentally memory unsafe.
> 
> 3. Arrays still decay to pointers, losing all bounds information.
> 
> 4. `char` is still optionally signed. What a lurking disaster that is.
> 
> 5. What size is an `int`?

All good points, except for one fly in the ointment: 'real'. :-D  The only D type with a variable size, it's also a hidden performance degrader: it's significantly slower on new hardware, because the x87 hasn't been updated in decades, whereas IEEE-standard floats like float/double have seen continuous improvement in hardware support over the last decade. Worse, code carefully crafted to use only float/double still gets implicitly converted to/from real when calling certain functions in std.math, causing significant performance degradation.
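
A minimal sketch of real's odd status among D's otherwise fixed-size types; the sizes in the comments vary by target, which is exactly the point:

  import std.stdio : writefln;

  void main()
  {
      writefln("float:  %s bytes", float.sizeof);   // 4 everywhere
      writefln("double: %s bytes", double.sizeof);  // 8 everywhere
      writefln("real:   %s bytes", real.sizeof);    // 16, 10, or 8,
                                                    // depending on target
  }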


Not saying that D isn't superior to C++ in many ways (if it weren't, I wouldn't be here), but one ought to be careful not to end up being the proverbial pot calling the kettle black.


T

-- 
It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton