How to "extend" built-in types
Oct 26, 2012
simendsjo
```
Not sure if this is a bug or intended behavior:

import std.traits;

struct S {
    int i;
    T opCast(T)() if (isFloatingPoint!T) {
        return cast(T) i;
    }
}

template myIsFloatingPoint(T) {
    enum myIsFloatingPoint = isFloatingPoint!T
        || __traits(compiles, { cast(real) T.init; });
}

void main() {
    auto s = S(10);
    assert(cast(real) s == 10.0);
    // static assert( isFloatingPoint!S); // false
    static assert( myIsFloatingPoint!S);
    static assert( myIsFloatingPoint!float);
    static assert(!myIsFloatingPoint!int);
}

Think of a Fraction or RangedInt struct, for instance:

import std.traits;

struct Ranged(T, T minValue, T maxValue)
    if (isNumeric!T)
{
    enum T max = maxValue;
    enum T min = minValue;
    T value = min;

    invariant() {
        assert(value >= min && value <= max);
    }

    T opCast()() {
        return value;
    }

    void opAssign(T value) {
        this.value = value;
    }

    bool opEquals(T value) {
        return this.value == value;
    }
}

void f(int v) {}

void g(T)(T v) if (isNumeric!T) {
}

void main() {
    Ranged!(int, 10, 20) v;
    assert(v == 10);
    v = 20;
    //v = 21;  // asserts as expected
    //f(v);    // ok if we would use an alias this, but then the
    //         // Ranged isn't in effect anymore
    //g(v);    // oops.. Ranged is not numeric
}

So.. What do I need to implement for a struct to be a valid
built-in type?
All valid properties (min, max etc) and operators for that type?
```
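The `alias this` trade-off mentioned in the commented-out lines above can be sketched as follows. This is a minimal, hypothetical variant of the `Ranged` struct (the invariant and range parameters are kept; `opAssign`/`opEquals` are dropped for brevity) showing that `f(v)` now compiles, but the result of any arithmetic decays to a plain, unchecked `int`:

```d
import std.traits : isNumeric;

struct Ranged(T, T minValue, T maxValue)
    if (isNumeric!T)
{
    T value = minValue;
    alias value this; // enables implicit conversion to T

    invariant() {
        assert(value >= minValue && value <= maxValue);
    }
}

void f(int v) {}

void main()
{
    Ranged!(int, 10, 20) v;
    f(v);                // compiles: v implicitly converts to int
    int doubled = v * 2; // but arithmetic yields a plain int
    // doubled is no longer range-checked: the Ranged guarantees are lost
}
```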
```
On Friday, 26 October 2012 at 13:55:35 UTC, simendsjo wrote:
> Not sure if this is a bug or intended behavior:
> ...
> So.. What do I need to implement for a struct to be a valid
> built-in type?
> All valid properties (min, max etc) and operators for that type?

I am looking for something similar. I ended up looking at Proxy in std.typecons, because its description says "// Enable operations that original type has". So, for instance, it looks like you could wrap an int or double in a struct and it would behave almost exactly like an int or a double. Presumably you could then overload just the changes (min/max for range, etc.). I've found issues for my case, but it's at least worth a look.

http://forum.dlang.org/post/ycayokcwncqfwfstomck@forum.dlang.org

If I try to check that this struct is numeric, unfortunately it is not:

writeln("Is numeric ", isNumeric!CcRate);  // false
writeln("Is numeric ", isNumeric!double);  // true

-----
struct CcRate {
    private double rate = 0;
    mixin Proxy!rate;

    this(double rate) {
        this.rate = rate;
    }
}
-----

Thanks
Dan

```
```
On Friday, October 26, 2012 15:55:34 simendsjo wrote:
> So.. What do I need to implement for a struct to be a valid
> built-in type?
> All valid properties (min, max etc) and operators for that type?

So, you want stuff like isFloatingPoint and isNumeric to return true for a user-defined struct? That pretty much defeats their purpose if that happens. They're checking for exact matches, not implicit conversions, and a function intended to work explicitly with float isn't necessarily going to work with your struct, so it needs isFloatingPoint to be false for something that isn't truly a built-in floating point type.

- Jonathan M Davis
```
```
On Friday, 26 October 2012 at 16:32:29 UTC, Jonathan M Davis wrote:
> On Friday, October 26, 2012 15:55:34 simendsjo wrote:
>> So.. What do I need to implement for a struct to be a valid
>> built-in type?
>> All valid properties (min, max etc) and operators for that type?
>
> So, you want stuff like isFloatingPoint and isNumeric to return true for a
> user-defined struct? That pretty much defeats their purpose if that happens.
> They're checking for exact matches, not implicit conversions, and a function
> intended to work explicitly with float isn't necessarily going to work with
> your struct, so it needs isFloatingPoint to be false for something that isn't
> truly a built-in floating point type.

Ok, maybe a bad example - I understand why those templates check for specific types.
The thing is that I often don't really care about the type, only that it exposes certain properties.
I don't want to force an int as a parameter unless I need *all* the properties of an int. Often a method only requires that it have integer division and the mod operator, or similar.
I can omit the constraint completely, but then the error messages aren't as good, the method is less self-explanatory, it gives less safety, and overloading becomes impossible.

I'd like the user to be able to use more specific behavior if needed.
A Ranged value, NonNull, Fraction etc. are good examples.

```
```
On Saturday, October 27, 2012 11:58:57 simendsjo wrote:
> The thing is that I often don't really care about the type, only that it exposes certain properties.

Then create a template constraint (or an eponymous template to use in a template constraint) which tests for those properties. That's exactly what templates like isForwardRange and hasLength do for range-based operations. You just need the same sort of thing for the set of operations that you require.

- Jonathan M Davis
```
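The eponymous-template pattern named above (the style of hasLength) can be sketched like this. The template name and the constrained function (`hasMod`, `remainder`) are hypothetical examples, not part of std.traits; the member sharing the template's name becomes the template's value, so `hasMod!T` evaluates directly to a bool:

```d
import std.traits : isNumeric;

// Eponymous template: accepts built-in numerics, or any type
// for which the % operator compiles.
template hasMod(T)
{
    enum bool hasMod = isNumeric!T
        || __traits(compiles, { auto r = T.init % T.init; });
}

// Used directly as a template constraint, like hasLength in std.range:
T remainder(T)(T a, T b) if (hasMod!T)
{
    return a % b;
}

unittest
{
    static assert( hasMod!int);
    static assert(!hasMod!string);
    assert(remainder(7, 3) == 1);
}
```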
```
On Saturday, 27 October 2012 at 10:07:20 UTC, Jonathan M Davis wrote:
> On Saturday, October 27, 2012 11:58:57 simendsjo wrote:
>> The thing is that I often don't really care about the type,
>> only that it exposes certain properties.
>
> Then create a template constraint (or eponymous template to use in a template
> constraint) which tests for those properties. That's exactly what templates
> like isForwardRange and hasLength do for range-based operations. You just need
>> the same sort of thing for the set of operations that you require.

So something like this then?
Should the traits module be extended with templates to query for certain behavior?

template hasDivision(T) {
    enum hasDivision = isNumeric!T || __traits(compiles, { T.init / 1; });
}

template castsTo(T, C) {
    enum castsTo = __traits(compiles, { cast(C) T.init; });
}

template castsToIntegral(T) {
    enum castsToIntegral =
           castsTo!(T, byte)  || castsTo!(T, ubyte)
        || castsTo!(T, short) || castsTo!(T, ushort)
        || castsTo!(T, int)   || castsTo!(T, uint)
        || castsTo!(T, long)  || castsTo!(T, ulong);
}

template hasIntegerDivision(T) {
    enum hasIntegerDivision = isIntegral!T
        || (hasDivision!T && castsToIntegral!(typeof(T.init / 2)));
}

unittest {
    assert( hasIntegerDivision!int);
    assert(!hasIntegerDivision!float);

    struct Int {
        int i;
        Int opBinary(string op, T)(T value) if (isIntegral!T && op == "/") {
            return Int(i / value);
        }

        T opCast(T)() if (isIntegral!T) {
            return cast(T) i;
        }
    }

    assert( hasIntegerDivision!Int);
}

```
```
On Saturday, October 27, 2012 12:34:28 simendsjo wrote:
> So something like this then?

Whatever you need for what you're trying to do. If your example templates test what you need tested, then they should work, though I confess that for something like division, it seems overkill to me to create a separate named template for it rather than simply testing for it directly in the template constraint, given how short, simple, and clear the test is.

> Should the traits module be extended with templates to query for certain behavior?

Only if they're very common, and maybe even only if they're relatively hard to write. There are essentially infinite operations that you could be testing for, many of which are completely specific to your application and needs. Adding them all to std.traits would make no sense, and it would be very easy for std.traits to become cluttered with stuff that isn't really all that useful, or which is easy enough to do yourself that adding it to the standard library doesn't really help anyone. More can be (and probably should be) added to std.traits, but additions need to solve a definite need and be worth having in the standard library.

- Jonathan M Davis
```
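The "test directly in the constraint" suggestion above can be sketched as follows. `halve` is a hypothetical example function, not from the thread; the division test is written inline in the constraint instead of through a named template like hasDivision:

```d
// The compiles-check lives directly in the template constraint,
// with no named helper template.
T halve(T)(T value)
    if (__traits(compiles, { auto r = T.init / 2; }))
{
    return value / 2;
}

unittest
{
    assert(halve(10) == 5);
    // Types without division are rejected at compile time:
    static assert(!__traits(compiles, halve("abc")));
}
```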