March 25, 2010
Re: An idea (Re: Implicit enum conversions are a stupid PITA)
"Regan Heath" <regan@netmail.co.nz> wrote in message 
news:hogaop$2kp2$1@digitalmars.com...
> Nick Sabalausky wrote:
>> Here's the low-hanging fruit I see:
>>
>> Step 1: Remove implicit enum->base-type conversions
>> Step 2: Allow '|' (and maybe '&'?) on enums, and consider the result of 
>> the operation be the base type.
>
> I would prefer the result of Step 2 to be the enum type, not the base 
> type(*)
>

Agreed, but to do that correctly, the compiler would have to be able to 
distinguish between flag/bitfield-type enums and other enums, because many 
enums are *not* intended to be combinable and trying to do so should be an 
error. That's why I suggested the above as a low-hanging-fruit compromise.

But yea, if Walter were fine with taking it further and having that proper 
separation of flag/bitfield enums and non-flag/non-bitfield enums, then all 
the better.
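
Just to sketch what I mean by "proper separation" (a rough, untested D2-style sketch; the "Flags" wrapper here is purely hypothetical, not a proposal for actual syntax):

// Only enums that opt in (by being wrapped) get '|', and the result stays
// strongly typed instead of decaying to the base type.
struct Flags(E) if (is(E == enum))
{
    E value;

    Flags opBinary(string op : "|")(Flags rhs) const
    {
        return Flags(cast(E)(value | rhs.value));
    }

    bool has(E flag) const { return (value & flag) != 0; }
}

enum Permission { read = 1, write = 2, exec = 4 }

void main()
{
    auto p = Flags!Permission(Permission.read) | Flags!Permission(Permission.write);
    assert(p.has(Permission.read) && !p.has(Permission.exec));
}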
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
"Walter Bright" <newshound1@digitalmars.com> wrote in message 
news:hofb6v$h5g$1@digitalmars.com...
>
> This kind of misunderstanding comes up all the time, it's why having exact 
> reproducible snippets of code is so important! Often the critical detail 
> is something that was omitted because the person thought it was either 
> obvious or irrelevant.

Yea, normally I'm in the habit of doing that for that very reason. Figures 
it would bite me the one time I don't.

Anyway, this is what I'm doing, and it's giving me a conflict error on the 
call to 'bar' in 'main' with DMD 1.056 (fortunately, however, it seems to 
work fine in 2.042, so I guess the situation's improved in D2):


// -------- separateLibrary.d -------- 
module separateLibrary;
string makeFooBar(string name)
{
   return
   "
   enum "~name~" { foo }
   void bar("~name~" e) {}
   ";
}
mixin(makeFooBar("FooA"));


// -------- appModule.d -------- 
module appModule;
import separateLibrary;
mixin(makeFooBar("FooB"));


// -------- main.d -------- 
module main;
import separateLibrary;
import appModule;

void main()
{
   bar(FooB.foo); // Error! 'bar' conflict
}


Compiled with "rdmd main"
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
On 03/25/2010 02:00 PM, Walter Bright wrote:
>>
>> ------- a.d -------------------
>> enum FooA { fooA };
>> void bar(FooA x) {}
>> ------- test.d ----------------
>> import a;
>> mixin(`
>> enum FooB { fooB };
>> void bar(FooB x) {}
>> `);
>>
>> void test()
>> {
>> bar(FooA.fooA); //error
>> bar(FooB.fooB);
>> }
>> ------------------------------
>
>
> This error is quite correct (and happens with or without the mixin). The
> idea is if you define a function in a local scope, it *completely hides*
> functions with the same name in imported scopes. In order to overload
> local functions with imported ones, an alias is necessary to bring the
> imported functions into the same scope.

I know. It was my understanding that this entire thread was a complaint 
about this behavior.
>
> This allows the user complete control over which functions get
> overloaded when they come from diverse, and possibly completely
> independent, imports.
>

It's a very explicit form of control.

Suppose I have 10 000 enums (I don't have quite that many, but there are 
a lot), each of which is in a different module because each is logically 
connected to something else. Because enums in D are too spartan for my 
tastes, I define a mixin to generate an enum, along with several helper 
functions, like name, fromInt, toInt, etc. (Note that a way to enumerate 
an enum's values would obviate all this, and __traits isn't an option in D1)
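
Roughly this kind of generator, for the record (an untested sketch; the helper signatures here are only a guess at what such a mixin provides):

// Generates an enum plus name/toInt/fromInt helpers from a list of members.
string genEnum(string name, string[] members)
{
    string list;
    foreach (i, m; members)
        list ~= (i ? ", " : "") ~ m;

    string code = "enum " ~ name ~ " { " ~ list ~ " }\n";

    // name(): enum value -> its textual name
    code ~= "string name(" ~ name ~ " e) {\n";
    code ~= "    switch (e) {\n";
    foreach (m; members)
        code ~= "        case " ~ name ~ "." ~ m ~ ": return \"" ~ m ~ "\";\n";
    code ~= "        default: assert(0);\n";
    code ~= "    }\n}\n";

    // toInt()/fromInt(): explicit conversions instead of implicit ones
    code ~= "int toInt(" ~ name ~ " e) { return cast(int) e; }\n";
    code ~= name ~ " fromInt(int i) { return cast(" ~ name ~ ") i; }\n";

    return code;
}

mixin(genEnum("Color", ["red", "green", "blue"]));

void main()
{
    assert(name(Color.green) == "green");
    assert(fromInt(2) == Color.blue);
}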

It's my understanding that enums are strongly enough typed that calling 
such a helper function with an enum (or anything else) not defined for 
it would generate an error. So there shouldn't be any way to hijack it.

But if I have some enum A defined in my current module and I also want 
to use enums modl2.B, modl3.C, etc., then I have to manually add all of those aliases:

alias modl2.name name;

<poke> or else write an IDE tool that automatically generates them for 
me </poke>
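
For illustration, a string-mixin alternative to that tool could look something like this (a hedged sketch; "modl2" and the helper names are just placeholders for whatever the real modules define):

// Expands to the alias declarations needed to pull one module's enum
// helpers into the current overload set.
string bringInEnumHelpers(string mod)
{
    return
        "alias " ~ mod ~ ".name name;\n" ~
        "alias " ~ mod ~ ".toInt toInt;\n" ~
        "alias " ~ mod ~ ".fromInt fromInt;\n";
}

// In the importing module:
//     import modl2;
//     mixin(bringInEnumHelpers("modl2"));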

I guess what I'm trying to say is it doesn't make sense that I can 
implicitly import FooA, an external symbol, but not bar(FooA), an 
external symbol defined on an external symbol which cannot be implicitly 
converted to a local symbol.

>
>
>> ------- a.d -------------------
>> enum FooA { fooA };
>> void bar(FooA x) {}
>> ------- test.d ----------------
>> import a;
>> alias a.bar bar;
>> mixin(`
>> enum FooB { fooB };
>> void bar(FooB x) {}
>> `);
>>
>> void test()
>> {
>> bar(FooA.fooA);
>> bar(FooB.fooB); //error
>> }
>> ------------------------------
>
> I'm not sure why this error is happening, it definitely has something to
> do with the mixin. Let me look into it some more.
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
It seems that on a conceptual level we are in complete agreement. 
The difference seems to be that you want to push onto the user some things that I think the language should provide.

Walter Bright Wrote:

> yigal chripun wrote:
> > Walter Bright Wrote:
> > 
> >> Yigal Chripun wrote:
> >>> Walter Bright Wrote:
> >>>> Pascal has explicit casts. The integer to character one is CHR(i), the 
> >>>> character to integer is ORD(c).
> >>> I meant implicit, sorry about that. The Pascal way is definitely the
> >>> correct way. What's the semantics, in your opinion, of ('f' + 3)? What
> >>> about ('?' + 4)? Making such arithmetic valid is wrong.
> >> Yes, that is exactly the opinion of Pascal. As I said, I've programmed in 
> >> Pascal, suffered as it blasted my kingdom, and I don't wish to do that
> >> again. I see no use in pretending '?' does not have a numerical value that
> >> is very useful to manipulate.
> >> 
> > 
> > '?' indeed does *not* have a single numerical value that identifies it in a
> > unique manner. You can map it to different numeric values based on encoding
> > and even within the same encoding this doesn't always hold. See normalization
> > in Unicode for different encodings for the same character.
> 
> 
> That's true, '?' can have different encodings, such as for EBCDIC and RADIX50. 
> Those formats are dead, however, and ASCII has won. D is specifically a Unicode 
> language (a superset of ASCII) and '?' has a single defined value for it.
> 
> Yes, Unicode has some oddities about it, and the poor programmer using those 
> characters will have to deal with it, but that does not change that quoted 
> character literals are always the same numerical value. '?' is not going to 
> change to another one tomorrow or in any conceivable future incarnation of Unicode.
> 

While it's true that '?' has one Unicode value, it's not true for all sorts of diacritics and combining code points. So your approach is to pass the responsibility for that to the end user, who in 99.9999% of cases will not handle this correctly.
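
For example (a quick illustration of the combining-character point; the two strings below render identically but encode differently):

void main()
{
    dstring precomposed = "\u00E9"d;   // 'e' with acute as a single code point
    dstring combining   = "e\u0301"d;  // 'e' followed by COMBINING ACUTE ACCENT
    assert(precomposed.length == 1);
    assert(combining.length == 2);     // same visible character, different lengths
}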

> >> Naturally, either everyone invents their own aliases (like they do in C
> >> with its indeterminate int sizes), or they are standardized, in which case
> >> we're back to pretty much exactly the same point we are at now. I don't see
> >> where anything was accomplished.
> >> 
> > Not true. Say I'm using my own proprietary hardware and I want to have
> > bits!24. How would I do that in current D?
> 
> You'd be on your own with that. I had a discussion recently with a person who 
> defended C's notion of compiler-defined integer sizes, pointing out that this 
> enabled compliant C compilers to be written for DSPs with 32-bit bytes. That is 
> pedantically correct, compliant C compilers were written for it. Unfortunately, 
> practically no C applications could be ported to it without extensive modification!
> 
> For your 24 bit machine, you will be forced to write all your own custom 
> software, even if the D specification supported it.
> 
I completely agree with you that the C notion isn't good for integral types. It would only make sense for a bits-style type, in the places where you'd see size_t in D (not common in user code).
Of course, any software that depends on a specific size, e.g. bits!32, will need to be extensively modified if it's ported to an arch which requires a different size. But I'm talking about the need to define bits!(T) myself instead of having it in the standard library.

> 
> > what if new hardware adds support
> > for larger vector ops and 512bit registers, will we now need to extend the
> > language with another type?
> 
> D will do something to accommodate it, obviously we don't know what that will be 
> until we see what those types are and what they do. What I don't see is using 
> 512 bit ints for normal use.
> 

There's another issue here: all those types are special cases in the compiler, handled separately from library types.
Had the stdlib provided the templated types, it would have allowed using them in more generic ways instead of special-casing them everywhere.
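
Something along these lines, say (an illustrative sketch, not an actual stdlib type; it just picks the smallest built-in integer that can hold N bits and masks arithmetic to that width):

struct bits(int N) if (N >= 1 && N <= 64)
{
    static if (N <= 8)       alias Storage = ubyte;
    else static if (N <= 16) alias Storage = ushort;
    else static if (N <= 32) alias Storage = uint;
    else                     alias Storage = ulong;

    static if (N == 64) enum Storage mask = Storage.max;
    else                enum Storage mask = cast(Storage)((1UL << N) - 1);

    Storage value;

    this(ulong v) { value = cast(Storage)(v & mask); }

    bits opBinary(string op)(bits rhs) const
        if (op == "+" || op == "-" || op == "&" || op == "|" || op == "^")
    {
        // Results wrap around modulo 2^N, like a native N-bit register would.
        return bits(mixin("value " ~ op ~ " rhs.value"));
    }
}

void main()
{
    auto a = bits!24(0xFF_FFFF);   // the 24-bit case mentioned above
    auto b = a + bits!24(1);       // overflows the 24-bit range...
    assert(b.value == 0);          // ...and wraps to zero
}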

> 
> 
> >>> char and relatives should be for text only per Unicode, (perhaps a better
> >>>  name is code-point).
> >> There have been many proposals to try and hide the fact that UTF-8 is
> >> really a multibyte encoding, but that makes for some pretty inefficient
> >> code in too many cases.
> > 
> > I'm not saying we should hide that; on the contrary, the compiler should
> > enforce Unicode, and other encodings should use a bits type instead. A
> > [w|d]char must always contain a valid Unicode value. Calling char[] a string
> > is wrong since it is actually an array of code units, which is not always a
> > valid encoding. A dchar[] is however a valid string since each individual
> > dchar contains a full code point.
> 
> Conceptually, I agree, it's wrong, but it's not practical to force the issue.

> 
> 
> >>> enum should be an enumeration type. You can find an excellent
> >>> strongly-typed design in Java 5.0
> >> Those enums are far more heavyweight - they are a syntactic sugar around a
> >> class type complete with methods, interfaces, constructors, etc. They
> >> aren't even compile time constants! If you need those in D, it wouldn't be
> >> hard at all to make a library class template that does the same thing.
> >> 
> > 
> > They aren't that heavyweight. Instead of assigning an int to each symbol you
> > assign a pointer address which is the same size.
> 
> No, it's not the same. A compile time constant has many advantages over a 
> runtime one. Java creates an inner class for each enum member, not just the enum 
> itself! It's heavyweight.
> 
> > Regarding the compile-time
> > property: for an int type, "const int a = 5;" is a compile-time constant; the same should
> > apply to enums as well.
> 
> Having a runtime pointer to an inner class for each enum value is far from the 
> advantages of a compile time constant.
> 
I don't understand this point - can't the inner class be put in the data segment? Also, why not use structs instead of classes? 
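
For example, with a struct base type each member stays a plain compile-time constant and can still carry data (illustrative sketch):

struct PlanetData
{
    double massKg;
    double radiusM;
}

enum Planet : PlanetData
{
    earth = PlanetData(5.97e24, 6.37e6),
    mars  = PlanetData(6.42e23, 3.39e6),
}

// No per-member class objects; the values are usable at compile time:
enum PlanetData e = Planet.earth;   // implicitly converts to the base struct
enum PlanetData m = Planet.mars;
static assert(e.massKg > m.massKg);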
> 
> > The problem with the library solution is that it can't provide the syntax
> > sugar for this.
> 
> It can get pretty close. Java has poor abstraction facilities, and so building 
> it into the language was the only solution.

How close can it get? I don't mind having this in the stdlib instead of in the language if it's pleasant enough on the eyes.
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
On 03/25/2010 01:08 PM, Walter Bright wrote:
>
> There are thousands of languages out there. If I did due diligence
> researching them all, I'd never finish, as new languages get created
> faster than anyone could ever study them. At some point, you've gotta
> pick and choose what you're going to look at in depth. Each language
> family is based on a core set of principles that get carried from one
> version to the next. Pascal's core set is unbearably restrictive to me.
> Sure, a lot of people strongly disagree, and that's fair, it is
> subjective, after all.
>
> Furthermore, like I said, anyone can propose features from any language
> they feel would make a good fit for D. None will be automatically
> rejected just because it came from a Pascal family language.

What do you think of Erlang's bit syntax, if you've looked at it? Or 
could you take a look if you haven't?

Beyond the syntax, which is totally incompatible with D, I think it's 
pretty nifty and would be a sweet thing to have as a library template at 
the least.
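
For what it's worth, D2's std.bitmanip already has a bitfields mixin that covers the packing half of this (nothing like Erlang's pattern matching on binaries, though). Roughly:

import std.bitmanip;

struct IpHeaderStart
{
    // Packs the named fields into one 16-bit slot (4 + 4 + 6 + 2 bits).
    mixin(bitfields!(
        ubyte, "version_", 4,
        ubyte, "ihl",      4,
        ubyte, "dscp",     6,
        ubyte, "ecn",      2));
}

void main()
{
    IpHeaderStart h;
    h.version_ = 4;
    h.ihl = 5;
    assert(h.version_ == 4 && h.ihl == 5);
}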
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
Ellery Newcomer wrote:
> I guess what I'm trying to say is it doesn't make sense that I can 
> implicitly import FooA, an external symbol, but not bar(FooA), an 
> external symbol defined on an external symbol which cannot be implicitly 
> converted to a local symbol.

And I believe it makes perfect sense! Everywhere else in the language, when you 
define a local name it *overrides* names in other scopes, it doesn't overload them.
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
On Mar 26, 10 05:46, yigal chripun wrote:
>
> While it's true that '?' has one Unicode value, it's not true for all sorts of diacritics and combining code points. So your approach is to pass the responsibility for that to the end user, who in 99.9999% of cases will not handle this correctly.
>

Non-issue. Since when can a character literal store > 1 code-point?
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
"Walter Bright" <newshound1@digitalmars.com> wrote in message 
news:hogmgm$oco$1@digitalmars.com...
> Ellery Newcomer wrote:
>> I guess what I'm trying to say is it doesn't make sense that I can 
>> implicitly import FooA, an external symbol, but not bar(FooA), an 
>> external symbol defined on an external symbol which cannot be implicitly 
>> converted to a local symbol.
>
> And I believe it makes perfect sense! Everywhere else in the language, 
> when you define a local name it *overrides* names in other scopes, it 
> doesn't overload them.

Well, the result of that is that I'm forced to make my "genEnum" library 
utility generate "enum{name of enum}ToString({name of enum} e)" instead of 
"enumToString({name of enum} e)" or else users won't be able to use it 
without a bunch of odd alias contortions that I'm not sure I can wave away 
by including them in the original mixin. (I would have just called it 
"toString", but at the time, that had been giving me some strange troubles 
so I changed it to "enumToString" instead. In retrospect, it was probably 
giving me trouble because of this very same issue.)

Of course, I can live with this as long as it's fixed in D2 (as seemed to be 
the case from the three-module test code I put in another post). But now 
that you're saying this I'm confused as to why that example I posted 
suddenly worked when I compiled it with D2.
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
Nick Sabalausky wrote:
> But now 
> that you're saying this I'm confused as to why that example I posted 
> suddenly worked when I compiled it with D2.

I haven't looked at that yet, just discussed the principle.
March 25, 2010
Re: Implicit enum conversions are a stupid PITA
bearophile wrote:
> Regarding base type names I have proposed:
> byte => sbyte
> wchar => char16 (or shortchar)
> dchar => char32 (or intchar)


Yes, we can endlessly rename keywords, but in the end, what does that accomplish 
that would compensate for upending every D program in existence?