December 29, 2007
Derek Parnell wrote:
> Will your syntax allow declarations of single values? e.g.
> 
>   enum x = 3;
>   enum y = 4L;

Yes.
December 29, 2007
Lars Ivar Igesund wrote:

>> It'll do.  I'd say it's bad, but not that bad (I think "manifest" looked
>> better, personally -- almost pays to go back in time and start adopting
>> the ideas that worked several decades ago... if they indeed did work).
>>
>> Like everything in D, it's one of:
>>
>> (1) we eventually get used to it
>> (2) it eventually gets rejected and deprecated by the designer(s)
>> (3) or d gets abandoned. :-P
> 
> Considering that none of the "established" bad decisions from 1.0 seem to
> have been fixed (yet) in 2.0, are we currently gaining on the bad end? Even if
> there is a lot of nice stuff in there too.
> 
> FWIW, I think enum is on the same level as foreach_reverse, although that
> one exposed the problem with keywording such a special case. The use case
> implemented with enum is at least valid enough.
> 


True.  Can't argue with that.

-JJR
December 29, 2007
James Dennett wrote:
> No, they don't.

You're right, I should have gone back and checked the spec.
December 29, 2007
Steven Schveighoffer wrote:
> You missed the point of my example :)  I'm not debating anonymous enumerations and whether they should exist or not.  I'm saying that the definition of enum working like an enumeration, or working like a manifest list, based on whether the enumeration is named or anonymous, is confusing. If you look at the example, the type of the value changes from one version to the next, even though no type is specified.

That's a good point.
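
To make that concrete, here is a minimal sketch of the kind of example being discussed (the identifiers are hypothetical, not taken from the original post):

    // With a tag, the member gets the enum's own distinct type; without one,
    // the member is just a constant of the deduced base type (int here),
    // even though neither declaration writes a type explicitly.
    enum Named { X }    // typeof(Named.X) is Named
    enum { Y }          // typeof(Y) is int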

>> Extending it to allow heterogeneous types is not a big step, nor does it break any existing code or usage.
> 
> Just because you can do something that changes the meaning of a keyword, especially to a meaning that is not described well by the English meaning of the word, and still have existing code compile, doesn't mean you should.

It doesn't mean you shouldn't, either. This goes on all the time in programming languages. After all, ! doesn't mean exclamation. The reason we even have to invent programming languages is that English is too imprecise and ambiguous. If anyone tries to learn programming by using Webster's, they're in for some pretty tough sledding :-)

That said, I would still shrink from using an utterly contradictory meaning, like having the keyword "and" actually do an "or", but there isn't that problem here.


> Having heterogeneous types in the same enum braces is a big step, because it fundamentally says 'enum is not an enumeration'.

I don't see any fundamental reason why an enumeration's contents must all be the same type. You could convincingly argue that they all must be somehow related to each other, but that doesn't require they be related by type. Grouping semantically related ones together would be the purview of the programmer.
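
For illustration, the kind of heterogeneous grouping being proposed might read roughly like this (a sketch of the proposal, not existing syntax; each member's type would be deduced from its initializer):

    enum
    {
        Answer   = 42,         // an int
        Pi       = 3.14159,    // a double
        Greeting = "hello"     // a character string
    }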
December 29, 2007
Walter Bright wrote:
> Jérôme M. Berger wrote:
>>     :(
> 
> Yeah, I figure I'll get fricasseed over that one. The most compelling argument is that we already have 3 ways to declare a constant, adding a fourth gets very difficult to justify. As opposed to a minor extension to enums.

This statement really disturbs me, as this is clearly a major change to an enum.
I think the consequences are quite horrible.

The primary function of enum is to create a TYPE based on a GROUP of related INTEGRAL constants. You can abuse the facility to declare integral constants, but that's a secondary feature at best. The problem is that using enum for arbitrary types is a very poor match for the primary feature of enums.

We don't want to create a type; we don't want a grouping; and the values are not integral.

enum : int { A=2, B, C }

Having an enum automatically get the 'next' value is one of the key features of enums, and it relies on the base type being an enumerable type. char [], structs, and floating-point types don't have that behaviour. You're guaranteed that integral types remain as a special case.

In another post, you mentioned that this would become allowed, to reduce the effect of the special case:
enum : float { A=2, B, C }

Which leads to the same nonsense you get with operator ++ on floats; a++ is not necessarily different to a. I would hope that this feature never actually gets used. Does this compile?

enum : cfloat { A=2, B, C }

The important point is that you are now trying to minimise the effect of the special case which has been created. But there's no way to get rid of it, because it is fundamental to the nature of enums.
AFAICT there's also special cases related to the grouping (named vs unnamed enums) and with regard to the typing (name mangling).
December 29, 2007
On 12/28/07, Derek Parnell <derek@psych.ward> wrote:
> On Fri, 28 Dec 2007 22:56:27 +0000, Janice Caron wrote:
>
> > On 12/28/07, Derek Parnell <derek@psych.ward> wrote:
> >> In order to do that, won't the compiler need to have access to all the source code for the application?
> >
> > No.
>
> Why not?
>
> If what we are talking about is having the compiler detect if some code is taking the address of a constant value, doesn't the compiler need to see the code that does that? And if that code is in an object file and not a source file, then how will the compiler find out that the address is being taken?

I was going to answer this, but Walter got there first and answered it before me. Ho hum. Still, I'll paraphrase.

In file_1:

    const int x = 42;

/could/, if we were applying this logic, compile to an object file in which zero bytes were reserved for x. Then, in file_2:

    import file_1;
    const int * px = &x;

the object file would contain storage for x, in a segment all by itself. The clever part is that the /linker/, not the compiler, is able to remove duplicate segments, so if the same segment occurs in file_3.obj or file_4.obj, it will only appear in the final .exe once.
December 29, 2007
On 12/29/07, Don Clugston <dac@nospam.com.au> wrote:
> Does this compile?
>
> enum : cfloat { A=2, B, C }

Even more interestingly, what about

    enum : ifloat { a = 2i, b };

If b has to equal (a+1), then there is no way it can be the same type as a.
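
A rough illustration of that mismatch, assuming D's promotion rules for mixed imaginary and real operands: adding the real constant 1 to an imaginary value yields a complex result, so the 'next' value cannot remain an ifloat.

    ifloat a = 2.0fi;
    auto b = a + 1;    // b picks up a real part, so it is complex, not ifloat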
December 29, 2007
Okay, let's just reintroduce #define! [duck=]

regards, frank
December 29, 2007
Don Clugston wrote:
> Walter Bright wrote:
>> Jérôme M. Berger wrote:
>>>     :(
>>
>> Yeah, I figure I'll get fricasseed over that one. The most compelling argument is that we already have 3 ways to declare a constant, adding a fourth gets very difficult to justify. As opposed to a minor extension to enums.
> 
> This statement really disturbs me, as this is clearly a major change to an enum.

It doesn't change existing usage of enum.

> I think the consequences are quite horrible.
> 
> The primary function of enum is to create a TYPE based on a GROUP of related INTEGRAL constants.

None of the enhancements impair this.

> You can abuse the facility to declare integral constants, but that's a secondary feature at best. The problem is that using enum for arbitrary types is a very poor match for the primary feature of enums.
> 
> We don't want to create a type; we don't want a grouping; and the values are not integral.
> 
> enum : int { A=2, B, C }
> 
> Having an enum automatically get the 'next' value is one of the key features of enums, and it relies on the base type being an enumerable type.

The only thing it relies on is the ability to add 1 to the previous value. The new enums can automatically get the next value for any type with this characteristic, including UDTs that overload opAdd, as long as they can be evaluated at compile time.
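
As a sketch of what that could look like under the proposal (the Step type is hypothetical, and it assumes opAdd can be evaluated at compile time; whether this exact form would compile depends on the final implementation):

    struct Step
    {
        int n;
        Step opAdd(int rhs) { return Step(n + rhs); }
    }

    enum : Step { A = Step(2), B, C }    // B would be Step(3), C would be Step(4)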


> char [], structs, and floating-point types don't have that behaviour.

You're right about char[], but structs can have that behavior, and fp types do. This means that:

    enum Color : string { Red="red", Blue }

would not compile, whereas:

    enum Color : string { Red="red", Blue="blue" }

would.

> You're guaranteed that integral types remain as a special case.
> 
> In another post, you mentioned that this would become allowed, to reduce the effect of the special case:
> enum : float { A=2, B, C }

The above example would result in A being 2.0f, B being 3.0f, and C being 4.0f.

> Which leads to the same nonsense you get with operator ++ on floats; a++ is not necessarily different to a. I would hope that this feature never actually gets used.

You're right that, for NaNs, infinities, and very large values, (a+1)==a. But this is an inescapable reality of fp arithmetic, and we don't disallow fp arithmetic because of it. We could, though, add a check that if the 'next' value doesn't change after being incremented, an error message is produced. There already is an error generated if the value being incremented is equal to the underlying type's .max, which prevents unintended overflows.
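
A quick illustration of the pitfall itself, independent of enums:

    float big = 1.0e30f;
    assert(big + 1.0f == big);    // holds: the added 1 is lost to rounding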

> Does this compile?
> 
> enum : cfloat { A=2, B, C }

Yes. A would be a cfloat with value 2.0+0i, B would be 3.0+0i, C would be 4.0+0i. There isn't a special case; all it does is compute:
    (next value) = (previous value) + 1
and run it through the usual semantic analysis. I don't know of a case where one would want to declare complex constants this way, but the aim here is consistency.


> The important point is that you are now trying to minimise the effect of the special case which has been created. But there's no way to get rid of it, because it is fundamental to the nature of enums.

I don't understand exactly which newly created special case you're referring to. The enhanced enums remove the special case that restricted enums to being integral types.

> AFAICT there's also special cases related to the grouping (named vs unnamed enums) and with regard to the typing (name mangling).

Anonymous enums don't have a type (and didn't in D1.0, either). There's no way to get a grip on such a type anyway, as it has no name and:
	enum { FOO, BAR } x;
style declarations are not allowed in D.

Let me enumerate (!) the enhancements to enum:

1) The enum 'base type' is no longer restricted to being integral types only.
2) Members of anonymous enums can now be of heterogeneous types, the types being deduced from their initializers.
3) .init, .min and .max have no meaning for anonymous enums, and so are computed only for tagged enums.
4) For anonymous enum members, a type can prefix the identifier as a convenience.
5) If there is only one member in an anonymous enum, the { } can be omitted.
6) If .init, .min, .max or 'next' values are not required, then the base type doesn't have to support the operations required to produce those values.

None of these are takeaways.
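
A sketch of how enhancements 4 and 5 above might read (syntax as described in this thread; the details could differ in the final implementation):

    enum { int Count = 10, Name = "widget" }    // (4) a type may prefix a member
    enum Limit = 100;                            // (5) single member, braces omitted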
December 29, 2007
Why not keep enum as it was and use manifest as the enhanced enum?
Seems to be a clean solution.
Bjoern