October 04, 2009
Sun, 04 Oct 2009 11:08:52 -0500, Andrei Alexandrescu thusly wrote:

> So, you say D lacks built-in
> first-class sum and product types. Yet Tuple is a product type. In spite
> of appearances, it's a built-in type, just that it has no literal.

Not true. A tuple of tuples, for instance, breaks the property (so you need struct-tuple hacks). The auto-flattening is just harmful. Also, not only does it lack a literal, in many places its use has been disabled outright. Recent versions of the compiler have started to throw errors in those use cases. Previously I expected this to be fixed, but apparently the feature was considered too good to be allowed.
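
To make the flattening concrete, here is a minimal sketch (assuming std.typetuple and std.typecons roughly as current DMD ships them; the exact names may differ):

import std.typetuple : TypeTuple;  // built-in compile-time tuples: these auto-flatten
import std.typecons  : Tuple;      // library struct tuple: nesting is preserved

// The built-in flavour flattens, so (int, (float, char)) collapses to three types:
alias Flat = TypeTuple!(int, TypeTuple!(float, char));
static assert(Flat.length == 3);        // the nesting is gone

// The struct wrapper keeps the inner tuple as a single component:
alias Nested = Tuple!(int, Tuple!(float, char));
static assert(Nested.Types.length == 2);

void main() {}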

> don't see that a deal breaker. Then I fail to find fault for Algebraic (in std.variant) as a sum type. I need to add visitation to it, but other than that I don't think Algebraic is worse than a built-in type.

Ok, might be. I have not used it yet. At least it's too verbose for my taste. Too much verbosity makes a feature impractical to use.
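
For reference, a minimal sketch of what I mean (assuming std.variant.Algebraic and peek behave as documented; as I said, I have not actually exercised this):

import std.variant : Algebraic;

// A sum type of three alternatives; both the declaration and the dispatch are
// wordier than a built-in sum type with pattern matching would be.
alias Value = Algebraic!(int, double, string);

void main()
{
    Value v = 3.14;
    if (auto p = v.peek!double)     // manual "match" on one alternative
        assert(*p == 3.14);
    assert(v.peek!int is null);     // not currently holding an int
}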

>> I remember you also suggested all kinds of macro systems, but the discussion died ages ago.
> 
> It hasn't died. We just concluded that it would take many months to define and implement a decent macro system. We also had a ton of other things to do, so we decided macros have to wait.

Ok.

> I am very familiar with much of Odersky's work and have a lot of respect for it. But then Walter created D and has brought his world view in D, not someone else's. We can't go like, hey, let's wheelbarrow whatever's good in language X into D. That's why I specifically asked "what steps we need to take" hoping for much more detail and aim at integration than "Scala is good".

I agree you don't need to copy every feature. There are just some open problems, and it would be really nice if the language could solve them. Since D is a practical language, you might say that it doesn't need to solve every possible problem (especially not high-level ones), just some low-level, systems-programming-related ones.

> Regarding the Node-Edge subtyping problem, I'd appreciate a link.

http://lampwww.epfl.ch/~odersky/papers/ScalableComponent.html, page 7 in the pdf.
October 04, 2009
Jeremie Pelletier wrote:
> language_fan wrote:
>> I admitted that later. Some of the keywords have a strong justification behind them. Others feel irritatingly unnecessary.
> 
> I would rather have many different specialized keywords than a few keywords with many different meanings. Its *much* easier to remember a large set of simple words than a small set of complex words.

Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.

Also, the complex and imaginary types will be removed at some point and replaced with a library type; there goes 6 keywords.
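
For illustration, roughly what that trade looks like (a sketch only, assuming the six keywords are cfloat, cdouble, creal, ifloat, idouble and ireal, and assuming a std.complex library type with re/im members and a complex() helper along the lines of what Phobos provides):

import std.complex : complex;   // library type in place of the built-in keywords

void main()
{
    // creal z = 1.0 + 2.0i;        // today: built-in type and imaginary literal
    auto z = complex(1.0, 2.0);     // with the library type: ordinary call syntax
    auto w = z * z;                 // (1 + 2i)^2 = -3 + 4i
    assert(w.re == -3.0 && w.im == 4.0);
}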
October 04, 2009
Nick Sabalausky wrote:
> Fair enough. *But*, I really think "elegantly simple" language design is a double-edged sword. In my experience, and I think this is what Jeremie was alluding to, I've found that an "elegantly simple" language, no matter how well-chosen the primitives are, generally results in a problematic lack of expressiveness and a frequent sense of fighting against the language instead of merely using it.

It's a good point. One finds when programming in a simple language that one has to write a lot of rather complex code to make up for it. C is an obvious example - try writing OOP in C. It can and has been done, but it's ugly, verbose, complex, error-prone and inelegant.



> It's like a professional handyman having the smallest possible toolbox with only the barest essentials, versus a big super-toolbox that has all the *right* tools he might need. Just because it's there doesn't mean it has to be used, but if I were a handyman and had to remove a phillips-head screw, I'd want to be able to reach for a forward/reverse drill and an appropriately-sized phillips-head bit, and not have to pry it out with the bare minimum (the back of a hammer, or a sort-of-sized-similarly manual flathead screwdriver), and also not have to put one specialized mini-toolbox back and switch to a differently-specialized mini-toolbox for every different task.

That resonates with me. When I was a kid working on cars, I had nothing but the most basic tools. You could get things done, but the workarounds were unpleasant and difficult, and I often wound up damaging the parts in the process. Now, I just go buy the specialized tool, and get it done quickly and easily, and no damage.

For example, it's so nice to have a drill press and get the hole *straight* <g>.
October 04, 2009
Walter Bright wrote:
> Jeremie Pelletier wrote:
>> language_fan wrote:
>>> I admitted that later. Some of the keywords have a strong justification behind them. Others feel irritatingly unnecessary.
>>
>> I would rather have many different specialized keywords than a few keywords with many different meanings. Its *much* easier to remember a large set of simple words than a small set of complex words.
> 
> Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.

I agree, especially since most libraries redefine these types so they don't have to use "unsigned long" and friends all over the place, and to abstract over compiler differences.

Having standard types in D is one of its best features; it just makes everything much easier.

> Also, the complex and imaginary types will be removed at some point and replaced with a library type; there goes 6 keywords.

Why? What's the rationale behind such a move? These types will always be handled the same no matter what library implements them. They are always tricky to use in C since different compilers implement them differently; why do the same in D?
October 04, 2009
Jeremie Pelletier wrote:
> Walter Bright wrote:
>> Also, the complex and imaginary types will be removed at some point and replaced with a library type; there goes 6 keywords.
> 
> Why? What's the rationale behind such a move? These types will always be handled the same no matter what library implements them. These are always tricky to use in C since different compilers implement them differently, why do the same in D?

Using a standard library type solves the standardization problem. The big reason for moving it to a library type is that the user-defined type capabilities of D have grown to the point where there is no longer much of any advantage to having it built in.

Simplifying the internal logic of the compiler then has a lot of advantages.
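
As a rough illustration of that point (a sketch only, assuming the templated opBinary operator-overloading syntax; this is not the actual future library type):

struct MyComplex
{
    double re, im;

    // Enough operator overloading to make a plain struct feel like a numeric type.
    MyComplex opBinary(string op : "+")(MyComplex rhs) const
    {
        return MyComplex(re + rhs.re, im + rhs.im);
    }

    MyComplex opBinary(string op : "*")(MyComplex rhs) const
    {
        return MyComplex(re * rhs.re - im * rhs.im,
                         re * rhs.im + im * rhs.re);
    }
}

void main()
{
    auto a = MyComplex(1, 2), b = MyComplex(3, -1);
    assert(a + b == MyComplex(4, 1));
    assert(a * b == MyComplex(5, 5));   // (1 + 2i)(3 - i) = 5 + 5i
}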
October 04, 2009
Jeremie Pelletier Wrote:

> Walter Bright wrote:
> > Jeremie Pelletier wrote:
> >> language_fan wrote:
> >>> I admitted that later. Some of the keywords have a strong justification behind them. Others feel irritatingly unnecessary.
> >>
> >> I would rather have many different specialized keywords than a few keywords with many different meanings. Its *much* easier to remember a large set of simple words than a small set of complex words.
> > 
> > Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.
> 
> I agree, especially since most libraries redefine these types to not have to use "unsigned long" and others all over the place and to abstract different compilers.
> 
> Having standard types in D is one of it's best features, just makes everything much easier.
> 
> > Also, the complex and imaginary types will be removed at some point and replaced with a library type; there goes 6 keywords.
> 



> > Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.

Agree 110%.  Not only hard to grep for, but also an example of orthogonality not actually helping. The number of C/C++ programs that I've had to deal with over the years, all defining their own standard library int types just to nail down the bit size, is mind boggling. UINT8, uint8, UInt8, SINT8, int8, Int8, int8_t, uint8_t, ubyte, sbyte ...  and these are just some of the 8-bit variations.

Then to make things worse, someone invents compiler switches to make ints signed or unsigned by default.  What madness.
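
By contrast, a trivial sketch that makes the point: D's integer types have fixed sizes in the language itself, so there is nothing to typedef per project or per compiler.

void main()
{
    static assert(byte.sizeof  == 1 && ubyte.sizeof  == 1);
    static assert(short.sizeof == 2 && ushort.sizeof == 2);
    static assert(int.sizeof   == 4 && uint.sizeof   == 4);
    static assert(long.sizeof  == 8 && ulong.sizeof  == 8);
}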

> Why? What's the rationale behind such a move? These types will always be handled the same no matter what library implements them. These are always tricky to use in C since different compilers implement them differently, why do the same in D?

I'm with Jeremie on this one .. or at least the jury should still be out.

Imaginary numbers have the same right to life as real numbers.  How many scientific, engineering, and applied maths problems have been solved because of the invention or discovery of complex numbers? I like the idea of a language that treats its complex numbers as first-class citizens.

Come to think of it, it was one of the first salient features of D that drew me to the language.

I speak not only with an emotive affection towards complex numbers but with many years of practical experience with DSP (digital signal processing) software: software development at the coalface using GMM (Gaussian mixture models) for speech processing, FFT (Fast Fourier Transform) algorithms in general, and the FFTW ("Fastest Fourier Transform in the West"*) C FFT library (whose authors, by the way, received a prestigious award for their contribution to numerical software**).

* http://www.fftw.org/

** http://www.mcs.anl.gov/research/opportunities/wilkinsonprize/3rd-1999.php

It's a difficult challenge to get high-performance, readable and maintainable code out of complex-number-intensive algorithms.  Use of library types for complex numbers has, in my experience, been problematic. Complex numbers should be first-class value types, I say.

My $0.05

-- Justin Johansson

October 04, 2009
"Justin Johansson" <no@spam.com> wrote in message news:haavf1$2gs7$1@digitalmars.com...
>
> It's a difficult challenge to get high performance, readable and
> maintainable code out of complex number
> intensive algorithms.   Use of library types for complex numbers has, in
> my experience been problematic.
> Complex numbers should be first class value types I say.
>

There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers. (I've also been wondering if it might be a huge benefit for distinguishing between strings that represent a filename vs file content vs file-extension-only vs relative-path+filename, vs absolute-path-only, etc. I've been really wanting a better way to handle that than just a variable naming convention.)
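
(For the curious, a rough sketch of the kind of thing I mean, carrying unit exponents for metres and seconds in the type; every name here is made up for illustration, not from an existing library:)

struct Quantity(int M, int S)   // M, S = exponents of metres and seconds
{
    double value;

    Quantity!(M + M2, S + S2) opBinary(string op : "*", int M2, int S2)(Quantity!(M2, S2) rhs) const
    {
        return typeof(return)(value * rhs.value);
    }

    Quantity!(M - M2, S - S2) opBinary(string op : "/", int M2, int S2)(Quantity!(M2, S2) rhs) const
    {
        return typeof(return)(value / rhs.value);
    }
}

alias Metres          = Quantity!(1, 0);
alias Seconds         = Quantity!(0, 1);
alias MetresPerSecond = Quantity!(1, -1);

void main()
{
    auto d = Metres(100.0);
    auto t = Seconds(9.58);
    MetresPerSecond v = d / t;   // compiles: the dimensions work out to m/s
    // Seconds oops = d / t;     // would be rejected at compile time
}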


October 04, 2009
Walter Bright:

> The big reason for moving it to a library type is the user defined type capabilities of D have grown to the point where there is no longer much of any advantage to having it built in.

If the compiler/language is now flexible enough to allow the creation of a very good complex number, and the compilation time for such library numbers is good enough, and they get compiled efficiently enough, then removing them from the language is a positive. But is the compiler now good enough to allow very good complex numbers to be implemented in the std lib?

One problem is having a good syntax to define and use complex numbers. Some time ago I even suggested keeping the complex syntax in the compiler and moving the implementation into the std lib.

Another problem that I think is still present is the lack of a method like opBool, which gets called in implicit boolean situations like if(somecomplex){...
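
(To show what I mean, a tiny sketch of one way a library type could opt into that, assuming an opCast-to-bool overload is available; I don't know whether the current compiler accepts this everywhere:)

struct Cplx
{
    double re = 0, im = 0;

    // Lets the value be tested directly in boolean contexts, e.g. if (z) { ... }
    bool opCast(T : bool)() const
    {
        return re != 0 || im != 0;
    }
}

void main()
{
    Cplx z;
    assert(cast(bool) z == false);  // zero tests false
    z.re = 1;
    if (z)                          // usable directly in an if condition
        assert(cast(bool) z);
}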

A third and more serious question is whether the library complex type avoids the pitfalls discussed on the page about built-in complex numbers on the digitalmars site.

Bye,
bearophile
October 04, 2009
Nick Sabalausky wrote:
> "Justin Johansson" <no@spam.com> wrote in message news:haavf1$2gs7$1@digitalmars.com...
>> It's a difficult challenge to get high performance, readable and maintainable code out of complex number
>> intensive algorithms.   Use of library types for complex numbers has, in my experience been problematic.
>> Complex numbers should be first class value types I say.
>>
> 
> There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.

It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals. Now please name five remarkable complex literals.

The feature you're referring to is called dimensional analysis.

> (I've also been wondering if it might be a huge benefit for distinguishing between strings that represent a filename vs file content vs file-extention-only vs relative-path+filename, vs absolute-path-only, etc. I've been really wanting a better way to handle that than just a variable naming convention.) 

I don't quite think so. In fact I don't think so at all. Pathnames of various flavors evolve quite a bit in many programs, and having to worry about tracking their type throughout is too much aggravation to be worthwhile. The last thing I'd want when manipulating pathnames would be a stickler of a library slapping my wrist anytime I misuse one of its six dedicated types.


Andrei
October 04, 2009
bearophile wrote:
> Walter Bright:
> 
>> The big reason for moving it to a library type is the user defined
>> type capabilities of D have grown to the point where there is no
>> longer much of any advantage to having it built in.
> 
> If the compiler/language is now flexible enough to allow the creation
> of a very good complex number, and the compilation time for such
> library numbers is good enough, and they get compiled efficiently
> enough, then removing them from the language is positive. But is the
> compiler now good enough to allow to implement very good complex
> numbers in the std lib?

Quoting myself: please name five remarkable complex literals.

> One problem is to have a good syntax to define and use complex
> numbers. Time ago I have even suggested to keep the complex syntax in
> the compiler, and move the implementation in the std lib.
> 
> Another problem that I think is present still is the lack of a method
> like opBool, that gets called in implicit boolean situations like
> if(somecomplex){...

That's not opBool, it's opIf. Testing with if does not mean conversion to bool and then testing the bool.

> A third and more serious question is if the library complex type
> avoids the pitfalls discussed in the page about built-in complex
> numbers in the digitalmars site.

I'd love to hear more about that. I've asked several times about it and never got a clear answer. My feeling is that an obscure mathematician burped during a conference in the 1960s and was overheard and misunderstood by someone who spread the news that complex numbers must be built-in, or else.


Andrei