Thread overview
bug?
  Sep 02, 2002  Dario
  Sep 04, 2002  Walter
  Sep 04, 2002  Dario
  Sep 04, 2002  Walter
  Sep 04, 2002  Mac Reiter
proposal regarding intent
  Sep 05, 2002  Sean L. Palmer
  Sep 05, 2002  Sean L. Palmer
  Sep 05, 2002  Alix Pexton
  Sep 05, 2002  Sean L. Palmer
  Sep 05, 2002  Mac Reiter
  Sep 07, 2002  Walter
  Sep 07, 2002  Walter
  Sep 05, 2002  Mac Reiter
  Sep 05, 2002  Sean L. Palmer
September 02, 2002
I'm not sure whether this is a bug.
When you add two ubytes (and not only when you add them), you get an int.
Shouldn't I get a ubyte, since both operands are ubytes?

Here is some code to try it out:

void func(int a)
{ printf("int\n"); }

void func(ubyte a)
{ printf("ubyte\n"); }

int main()
{
    ubyte a = 4, b = 8;
    func(a);     // prints "ubyte"
    func(b);     // prints "ubyte"
    func(a|b);   // prints "int" -- the ubyte operands are promoted
    func(a+b);   // prints "int" -- likewise
    return 0;
}


September 04, 2002
D follows the C rules for operand promotion, which makes the result an int. -Walter

"Dario" <supdar@yahoo.com> wrote in message news:al0c4t$15an$1@digitaldaemon.com...
> [...]


September 04, 2002
> D follows the C rules for operand promotion, which makes the result an int. -Walter

Uh, you're right.
Still, that isn't a very logical rule, and it can be dangerous when used with
function overloading. But I guess you're not going to change the D rules.


September 04, 2002
"Dario" <supdar@yahoo.com> wrote in message news:al4od0$7nh$1@digitaldaemon.com...
> > D follows the C rules for operand promotion, which makes the result an int. -Walter
> Uh, you're right.
> Anyway that's not a logic rule and can be dangerous when using function
> overloading. But I guess you're not going to change D rules.

The rationale for using the C rules is that old C programmers like me <g> are so used to them they are like second nature, for good or ill. Changing the way they work will, I anticipate, cause much grief because without thinking people will expect them to behave like C, and so will find D more frustrating than useful.


September 04, 2002
In article <al5lsn$21iq$1@digitaldaemon.com>, Walter says...
>"Dario" <supdar@yahoo.com> wrote in message news:al4od0$7nh$1@digitaldaemon.com...
>> > D follows the C rules for operand promotion, which makes the result an int. -Walter
>> Uh, you're right.
>> Anyway that's not a logic rule and can be dangerous when using function
>> overloading. But I guess you're not going to change D rules.
>
>The rationale for using the C rules is that old C programmers like me <g> are so used to them they are like second nature, for good or ill. Changing the way they work will, I anticipate, cause much grief because without thinking people will expect them to behave like C, and so will find D more frustrating than useful.

There is also, of course, the pragmatic reasoning that if you add two uchars, the result will not necessarily fit in a uchar any more (150 + 150 = 300, which requires at least a 'short', and ints are faster than shorts).  So you put the result in a bigger type so that the answer is correct.  Then if the programmer actually wanted to get a truncated result (which would be 44, unless I've screwed up somewhere along the way behind the scenes...), then he/she can always mask it off and/or cast it back down to a uchar later.
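
In D terms the difference looks roughly like this (just a sketch, using ubyte since D has no uchar):

ubyte a = 150, b = 150;
int sum = a + b;                       // operands promoted to int: sum == 300
ubyte truncated = cast(ubyte)(a + b);  // forced back down: 300 & 0xFF == 44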

If this seems contrived, consider the following simple averaging code:

unsigned char a, b, c;
cin >> a >> b;    // assume a and b end up holding numeric values, say 150 each
c = (a+b)/2;
cout << c;

Now, arguably, the value of (a+b)/2 can't ever be too large for a uchar. However, the compiler has to do this in stages, so it has to do (a+b) and store that somewhere, and then it has to do the /2 to get the result.  If the "type" of a+b was unsigned char, and a user input 150 and 150 to the preceding code snippet, the result would be 22 (44/2), which would probably surprise them a great deal.  With the staging, a+b converts up to an int, the math occurs, the second stage /2 occurs (also as int), and now the result (150) is held in an internal temporary int.  The assignment system looks at c and truncates/casts down to unsigned char (probably issuing a warning along the way), which is fine, since 150 fits in a uchar.  Everything is fine.

The compiler can't reasonably distinguish between this case and the function call case -- both are intermediate expressions.  Most compilers consider it more important to maintain mathematically correct results than to maintain types, since the programmer can always force the type conversion if necessary, but would not be able to generate the correct result if the compiler discarded it during an intermediate stage.  I used to do some Visual Basic programming, and sometimes they did some of this type enforcement, and it turned some pieces of code that should have been trivial into horribly complicated state machines...

(I apologize if this posting feels offensive to anyone.  I did not mean it to be such.  I am, unfortunately, somewhat rushed at the moment and do not have the time to run my normal "tone" filters over what just rolled out of my fingers...)

Mac


September 05, 2002
This still seems akin to me to forcing a type conversion to long or double in order to add two ints and divide by an int.  And then what's going to go on if you add two longs and divide by a long... it does it internally in extra extra long?  Where does it stop?

Just always seemed too arbitrary to me.  I think personally I'd advocate a system where it does the arithmetic in a value at least as large as the two input values, so long as the result of the intermediate arithmetic would, at the end, fit back into the start type.  But not that it has to do the math in the larger registers.  Imposing a register type conversion forces extra mask operations into the code, or forces the waste of valuable register space on architectures such as Intel that do a fairly good job of dealing with byte registers efficiently.  Math runs easier in SIMD registers also if you don't have to ensure intermediate precision.  It definitely will slow things down to always require temporary arithmetic to be performed in int-sized registers... there's unpacking, then more of the larger ops (which may take more time per op, and take more ops since fewer can be done per cycle), then you have to pack the result for storage.

I'd leave it completely unspecified, or introduce a language mechanism to control the intermediate result (such as casting to the desired size first). Either allow it to be controlled precisely or leave it completely undetermined, but please don't do it half-assed like C did.
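
For instance, spelling out a narrow intermediate with an explicit cast would look like this (a sketch; whether the backend then actually keeps it in a byte register is another question):

ubyte p = 150, q = 150;
int wide   = (p + q) / 2;             // promoted to int: 300/2 == 150
int narrow = cast(ubyte)(p + q) / 2;  // byte-sized intermediate: (300 & 0xFF)/2 == 22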

One idea that came up in a previous discussion was a way to decorate a code block with attributes describing the desired goal of the optimizer... to optimize for speed, or space, or precision, or some combination.  (There may be other valid goals as well, some interoperable with the above, some competing with the above.)  I'd love to standardize something like this, since it's always been a matter of using different pragmas or language extensions on different compilers, along with a combination of compiler switches and voodoo, and wrapping the whole thing in #ifdef THISCOMPILER >= THISVERSION crap.  At least we have the version statement, but there's nothing specifically to direct the compiler yet, is there?  Some syntax to give the compiler direct commands regarding parsing or code generation of the following or enclosed block.

It could be an attribute like final.  Something like

fast           // by fastest load/store and operation
small          // by smallest size (parm is how small it has to be)
precise        // by range or fractional accuracy or both
exact          // in number of bits sorted into categories (sign, exponent, integer, mantissa, fraction)
wrap
clamp
bounded
boundschecked
unsafe

But that litters the global namespace.  Something more like

pragma fast(1), precise(1e64, 1e-5)
{
    for (;;);
}

Where the parameters are optional and default to something reasonable.

The compiler can then tailor how it handles the intermediates according to the high-level goals for the section;  if the goal is speed, avoid unnecessary conversions especially to slower types at the expense of precision.  If the goal is safety, do bounds checking and throw lots of exceptions.  If the goal is precision, do all arithmetic in precise intermediate format (extended maybe! or maybe just twice as big a register as the original types)

or possibly:

type numeric(fast(1), precise(1e64, 1e-5)) mynumerictype;
type numeric(fast(1), precise(1, 1e-2))[4] small mycolortype;

mynumerictype a,b,c;
for (a = 1.0; (a -= 0.0125) >= 0; )
    plot(a, a^2, mycolortype{a^3, a^4, a^5, 1-a});

A couple more keywords might be the opposites of the above:

slow
large
imprecise
inexact
managed
safe

Not even sure what I'd use all those keywords for yet either.

But the general idea behind this proposal is to have a way to make the intent of the programmer known in a very general way.  Common goals for a programmer involve optimizing for speed, size, precision, safety.  The compiler has similar conflicting goals (commercial compilers do anyway) so why not standardize the interface to it.

Think of it as a formal way to give very broad hints, or requests, or strong suggestions to the compiler.  ;)

In any case I would not make the result type of adding two chars be any larger than the largest explicit scalar present in the calculation.

i.e.

byte b=8, c=9;
byte x = b + c; // stays in byte or larger in intermediate form
int y = b + c;  // intermediates done in int since result is int, and int is more precise than byte

The most general spec should be that results can arbitrarily have more precision than you expected, even if this seems like an error;  relying on inaccuracies seems in general to be bad programming practice.

I want control over rounding modes and stuff like that in the language itself.  I want control over endianness, saturation; give me as much control as possible and I'll find a use for most of it.  ;)

Sean

"Mac Reiter" <Mac_member@pathlink.com> wrote in message news:al5onu$2865$1@digitaldaemon.com...
> [...]



September 05, 2002
Oh one more thing: any pragma keywords should trigger a linked version-statement-testable flag so code can work in more than one way, depending on surrounding or calling goal directives.

Imagine if "fast" or "precise" made it into the name mangling for function overloading... or type overloading... or prerequisites, dbc!!!

template (T)
T operate_on(T a, T b)
    in
    {
        assert(T.version(precision > 20));
    }
    body
    {
        version(fast && pentium_7_and_three_quarters)
        {
            return exp(a * log(b))/b;
        }
        version(precise && !broken_lexer_Dvprior1_03)
        {
            return a ** b;
        }
    }

typedef instance (pragma fast int) fast_int;

pragma precise int res = operate_on(cast(fast_int)1, cast(fast_int)2);


Which leads me to comment about the contracts currently in D.  I don't see how it's semantically much different to write:

in
{
    assert(A);
}
out
{
    assert(B);
}
body
{
    C;
}

than it is to write:

{
    assert(A);
    C;
    assert(B);
}

And in fact the latter seems way more natural.  Maybe that shows off my background.  ;)  I'd think about how to put more into a contract than just some asserts, such as hooking a specific error message up to an assert (often used in production code, BTW), or just listing some expressions that must all be true (then there'd be no need for a bunch of assert "statements" inside an in or out contract).

We also invariably want to intercept the assert behavior to draw some message on screen or something, and maybe attempt to resume or abort, or ignore any future assertions.  Hard to do that without language support. Some kind of global handle_assert function you can overload?

Can you overload the random number generator?  The garbage collector?  In a standard manner that'll work on most platforms with compilers from different vendors?

D already has 2 or 3 vendors, all at various stages on the way to D 1.0 spec.  ;)  I have only tried one.

Sean

"Sean L. Palmer" <seanpalmer@earthlink.net> wrote in message news:al75nt$26bl$1@digitaldaemon.com...
> [...]


September 05, 2002
Sean L. Palmer wrote:
The difference is that the D compiler strips the "in" and "out" blocks when you compile with the "release" flag.
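
Something like (a sketch, going by my reading of the spec):

int half(int n)
    in { assert(n >= 0); }
    out (result) { assert(result <= n); }
    body { return n / 2; }

Compile with -release and the in/out blocks should be compiled out entirely.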

or so is my understanding...

Alix...
webmaster "the D journal"

> Which leads me to comment about the contracts currently in D.  I don't see how it's semantically much different to write:
> [...]

September 05, 2002
In article <al75nt$26bl$1@digitaldaemon.com>, Sean L. Palmer says...
>This still seems akin to me to forcing a type conversion to long or double in order to add two ints and divide by an int.  And then what's going to go on if you add two longs and divide by a long... it does it internally in extra extra long?  Where does it stop?
>
>Just always seemed too arbitrary to me.  I think personally I'd advocate a system where it does the arithmetic in a value at least as large as the two input values so long as the result of the intermediate arithmetic would, at the end, fit back into the start type.  But not that it has to do the math in the larger registers.  Imposing a register type conversion forces extra
[clip]
>In any case I would not make the result type of adding two chars be any larger than the largest explicit scalar present in the calculation.
>
>i.e.
>
>byte b=8, c=9;
>byte x = b + c; // stays in byte or larger in intermediate form
>int y = b + c;  // intermediates done in int since result is int, and int is more precise than byte

ubyte d=150, e=230, f=2;
ubyte x=(d+e)/f; // hosed.  Your rule requires the intermediates to be bytes,
// since nothing but bytes are used in this calculation.
// 150+230 wraps to 124 in a byte, and 124/2 is 62 instead of the correct 190.
// Bytes are simply not big enough to do math with bytes.

And as for speed, it doesn't matter how much faster you can make it if the answer is wrong.  NOPs run really fast, and they give you an answer that is no more wrong than the above code would...

I *do* like your ideas about standardizing ways to tell the compiler about your intent.  I would prefer that certain of these "hints" actually be required -- boundschecked should require boundschecking, rather than just suggesting that "it might be a nice idea to check those arrays, if the compiler isn't too busy...".  I generally hate hints -- if I took the time to tell the compiler to do something, it should either do it or throw an error if it is physically impossible to do it (attempts to inline recursive functions, for example).

I agree that it is sometimes preferable to avoid the intermediate casts for performance reasons.  I just think that the vast majority of code expects correct answers first, with optimization second.  One thing I can think of to give the programmer the ability to turn off conversions would be something that looks like a cast, along these lines (I'm sure the exact syntax would have to be different):

ubyte x = expr(ubyte)(d+e) / f;

or, if you get really explicit:

ubyte x = expr(ubyte)( expr(ubyte)(d+e) / f);

although the outer expr() is not really needed, since the result immediately goes into a byte.

expr() is kinda like cast(), except that cast() would evaluate the expression in whatever types it wanted and then cast it afterwards.  expr() would tell the compiler what type to use for evaluating the expression, and that the programmer knows the cost and consequences and wants it to behave that way.
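
With today's syntax the closest approximation is stacking casts by hand (a sketch; expr() itself is made up, and the names follow the example above):

ubyte d = 150, e = 230, f = 2;
ubyte viaCast  = cast(ubyte)((d + e) / f);             // math done as int: 380/2 == 190
ubyte viaTrunc = cast(ubyte)(cast(ubyte)(d + e) / f);  // byte-width intermediate: (380 & 0xFF)/2 == 62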

That way, when the average programmer writes out a math formula, it generates the right answer (as long as the machine is physically capable of doing so -- I do *not* advocate implicitly doing 'long' math with software emulated bignum routines for the intermediate expressions).  But when the advanced programmer really needs to tweak an expression down and either knows that the result will fit or explicitly wants the wrapping/truncation behavior, he/she can use expr() to force the behavior.

expr() is probably a really bad name.  It keeps bringing up lisp and scheme memories...  Unfortunately, it is the best thing I could think of on the spur of the moment.

Mac


September 05, 2002
It strips asserts anyway in Release.

Sean

"Alix Pexton" <Alix@seven-point-star.co.uk> wrote in message news:al7a6t$2gha$1@digitaldaemon.com...
> The difference is that the D compiler strips the "in" and "out" blocks when you compile with the "release" flag.
> [...]

