September 01, 2003
"Walter" <walter@digitalmars.com> ha scritto nel messaggio news:biphsq$1o9r$1@digitaldaemon.com...
> Converting one calling convention to another would require the compiler to generate a function to do it, as in:
>
>     int foo(int a, int b, int c)
>     {
>         return theotherfoo(a, b, c);
>     }
>
> Your item 1 may be the best solution.
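For illustration, a hand-written version of such a conversion function in D
might look like the sketch below (the extern(C) linkage and the function
names are assumptions made for the example, not taken from the post):

    // extern(C) entry point that merely forwards to the implementation
    // using the default D calling convention
    extern (C) int foo(int a, int b, int c)
    {
        return theotherfoo(a, b, c);
    }

    // the "other" function, with D linkage and calling convention
    int theotherfoo(int a, int b, int c)
    {
        return a + b + c;
    }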

What about performance? Would it be better to be able to explicitly enable, or at least disable, calling convention deduction via a compiler flag?

Ric


September 01, 2003
"Riccardo De Agostini" <riccardo.de.agostini@email.it> wrote in message news:biuv93$nt2$1@digitaldaemon.com...
> "Walter" <walter@digitalmars.com> ha scritto nel messaggio news:biphsq$1o9r$1@digitaldaemon.com...
> > Converting one calling convention to another would require the compiler to
> > generate a function to do it, as in:
> >
> >     int foo(int a, int b, int c)
> >     {
> >         return theotherfoo(a, b, c);
> >     }
> >
> > Your item 1 may be the best solution.
>
> What about performance? Would it be better to be able to explicitly enable, or at least disable, calling convention deduction via a compiler flag?

While that's possible, I'm philosophically opposed to having compiler flags change the semantics of the language. There are innumerable such flags on typical C++ compilers, which cause endless problems.


September 01, 2003
I don't want a compiler flag. I want the language to deduce the right behaviour for me.

"Walter" <walter@digitalmars.com> wrote in message news:biuvsu$on5$1@digitaldaemon.com...
>
> "Riccardo De Agostini" <riccardo.de.agostini@email.it> wrote in message news:biuv93$nt2$1@digitaldaemon.com...
> > "Walter" <walter@digitalmars.com> ha scritto nel messaggio news:biphsq$1o9r$1@digitaldaemon.com...
> > > Converting one calling convention to another would require the compiler to
> > > generate a function to do it, as in:
> > >
> > >     int foo(int a, int b, int c)
> > >     {
> > >         return theotherfoo(a, b, c);
> > >     }
> > >
> > > Your item 1 may be the best solution.
> >
> > What about performance? Would it be better to be able to explicitly enable,
> > or at least disable, calling convention deduction via a compiler flag?
>
> While that's possible, I'm philosophically opposed to having compiler flags
> change the semantics of the language. There are innumerable such flags on
> typical C++ compilers, which cause endless problems.
>
>


September 01, 2003
"Walter" <walter@digitalmars.com> ha scritto nel messaggio news:biuvsu$on5$1@digitaldaemon.com...
> While that's possible, I'm philosophically opposed to having compiler flags
> change the semantics of the language. There are innumerable such flags on
> typical C++ compilers, which cause endless problems.

I agree on that. How about defining a finite (and very small) set of
language feature subsets, so that a single switch selects the desired
subset? That way one could, for instance, compile for a slower target
(typically an embedded system) and be sure that what the compiler does
under the hood doesn't cost performance in critical sections of the code.
The need arises, funnily enough, from one of the best features of DMD, i.e.
being free of the virtual keyword, not to mention calling-convention
polymorphism...
OTOH, I realize that defining the subsets would be a good starting point for
holy wars :-) besides perhaps being impractical for other reasons. I'd really
like feedback on this from people more experienced than I am...

Ric


September 02, 2003
"Riccardo De Agostini" <riccardo.de.agostini@email.it> wrote in message news:biv4q0$v71$1@digitaldaemon.com...
> "Walter" <walter@digitalmars.com> ha scritto nel messaggio news:biuvsu$on5$1@digitaldaemon.com...
> > While that's possible, I'm philosophically opposed to having compiler flags
> > change the semantics of the language. There are innumerable such flags on
> > typical C++ compilers, which cause endless problems.
>
> I agree on that. How about defining a finite (and very small) set of
> language feature subsets, so that a single switch selects the desired
> subset? That way one could, for instance, compile for a slower target
> (typically an embedded system) and be sure that what the compiler does
> under the hood doesn't cost performance in critical sections of the code.

You can make a function 'final' and it will be non-virtual.
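For example, a minimal D sketch (the class and method names are hypothetical,
not from this thread):

    class Point
    {
        int x, y;

        // class methods are virtual by default in D
        int sum() { return x + y; }

        // 'final' forbids overriding, so the call needs no vtable lookup
        final int sumDirect() { return x + y; }
    }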

> The need arises, funnily enough, from one of the best features of DMD, i.e.
> being free of the virtual keyword, not to mention calling-convention
> polymorphism...
> OTOH, I realize that defining the subsets would be a good starting point for
> holy wars :-) besides perhaps being impractical for other reasons. I'd really
> like feedback on this from people more experienced than I am...
>
> Ric
>
>


September 03, 2003
"Walter" <walter@digitalmars.com> ha scritto nel messaggio news:bj334s$ecp$1@digitaldaemon.com...

> You can make a function 'final' and it will be non-virtual.

While that may be a solution, or rather a workaround, I don't think that's what "final" functions are meant for. Anyway, for the sake of consistent language semantics, I think I can live with automatic calling convention deduction after all. :)

Ric

