July 03, 2015 bigint compile time errors
The following code fails to compile and responds with the given error message. Varying the "plusTwo" function doesn't work; as long as there is an arithmetic operation the error occurs.
It seems to mean that there is no way to modify a BigInt at compile time. This seriously limits the usability of the type.
enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1);
public static BigInt plusTwo(in bigint n)
{
return n + 2;
}
void main()
{
}
Error message:
C:\D\dmd2\windows\bin\..\..\src\phobos\std\internal\math\biguintx86.d(226): Error: asm statements cannot be interpreted at compile time
C:\D\dmd2\windows\bin\..\..\src\phobos\std\internal\math\biguintcore.d(1248): called from here: multibyteIncrementAssign(result[0..__dollar - 1u], lo)
C:\D\dmd2\windows\bin\..\..\src\phobos\std\internal\math\biguintcore.d(515): called from here: addInt(x.data, y)
C:\D\dmd2\windows\bin\..\..\src\phobos\std\bigint.d(118): called from here: addOrSubInt(this.data, u, cast(int)this.sign != cast(int)(y < 0u), this.sign)
C:\D\dmd2\windows\bin\..\..\src\phobos\std\bigint.d(118): called from here: addOrSubInt(this.data, u, cast(int)this.sign != cast(int)(y < 0u), this.sign)
C:\D\dmd2\windows\bin\..\..\src\phobos\std\bigint.d(258): called from here: r.opOpAssign(y)
called from here: n.opBinary(2)
called from here: plusTwo(BigInt(BigUint([123u], false))
July 03, 2015 Re: bigint compile time errors
Posted in reply to Paul D Anderson | On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
> The following code fails to compile and responds with the given error message. Varying the "plusTwo" function doesn't work; as long as there is an arithmetic operation the error occurs.

This works for me on OSX 10.10 (Yosemite) using DMD64 D Compiler v2.067.1.

> It seems to mean that there is no way to modify a BigInt at compile time. This seriously limits the usability of the type.
>
> enum BigInt test1 = BigInt(123);
> enum BigInt test2 = plusTwo(test1);
>
> public static BigInt plusTwo(in bigint n)

Should be plusTwo(in BigInt n) instead.

> {
> return n + 2;
> }
>
> void main()
> {
> }
>
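For reference, the snippet with Anon's fix applied (a minimal sketch; the std.bigint import is assumed, since the original post does not show it):

import std.bigint;

// BigInt, not bigint -- the standard type from std.bigint.
public static BigInt plusTwo(in BigInt n)
{
    return n + 2;
}

enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1); // CTFE works here on non-x86 builds; see Kai's note later in the thread about the x86 asm path

void main()
{
}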
July 03, 2015 Re: bigint compile time errors
Posted in reply to Anon | On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
>> enum BigInt test1 = BigInt(123);
>> enum BigInt test2 = plusTwo(test1);
>>
>> public static BigInt plusTwo(in bigint n)
>
> Should be plusTwo(in BigInt n) instead.
>

Yes, I had aliased BigInt to bigint.

And I checked and it compiles for me too with Windows m64. That makes it seem more like a bug than a feature.

I'll open a bug report.

Paul
July 03, 2015 Re: bigint compile time errors
Posted in reply to Paul D Anderson | On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
> The following code fails to compile and responds with the given error message. Varying the "plusTwo" function doesn't work; as long as there is an arithmetic operation the error occurs.
>
> [...]

https://issues.dlang.org/show_bug.cgi?id=14767
July 05, 2015 Re: bigint compile time errors
Posted in reply to Paul D Anderson | On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:
> On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
>> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
>
>>> enum BigInt test1 = BigInt(123);
>>> enum BigInt test2 = plusTwo(test1);
>>>
>>> public static BigInt plusTwo(in bigint n)
>>
>> Should be plusTwo(in BigInt n) instead.
>>
>
> Yes, I had aliased BigInt to bigint.
>
> And I checked and it compiles for me too with Windows m64. That makes it seem more like a bug than a feature.
>
> I'll open a bug report.
>
> Paul

The point here is that x86 uses an assembler-optimized implementation (std.internal.math.biguintx86) and every other cpu architecture (including x64) uses a D version (std.internal.math.biguintnoasm). Because of the inline assembler, the x86 version is not CTFE-enabled.

Regards,
Kai
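As a rough illustration of the limitation (a made-up function, not the actual Phobos code): the compiler refuses to interpret any call that reaches an asm block during CTFE.

version (D_InlineAsm_X86)
{
    // Fine at run time, but CTFE cannot interpret inline assembler.
    uint addOne(uint n)
    {
        asm
        {
            mov EAX, n;
            add EAX, 1;   // result is returned in EAX
        }
    }

    // Uncommenting this line triggers the same diagnostic as in the first post:
    // "asm statements cannot be interpreted at compile time"
    //enum x = addOne(41);
}

void main()
{
}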
July 07, 2015 Re: bigint compile time errors
Posted in reply to Kai Nacke | On Sunday, 5 July 2015 at 20:35:03 UTC, Kai Nacke wrote:
> On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:
>> On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
>>> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
>>
>>>> [...]
>>>
>>> Should be plusTwo(in BigInt n) instead.
>>>
>>
>> Yes, I had aliased BigInt to bigint.
>>
>> And I checked and it compiles for me too with Windows m64. That makes it seem more like a bug than a feature.
>>
>> I'll open a bug report.
>>
>> Paul
>
> The point here is that x86 uses an assembler-optimized implementation (std.internal.math.biguintx86) and every other cpu architecture (including x64) uses a D version (std.internal.math.biguintnoasm). Because of the inline assembler, the x86 version is not CTFE-enabled.
>
> Regards,
> Kai
Could we add a version or some other flag that would allow the use of .biguintnoasm with the x86?
Paul
July 10, 2015 Re: bigint compile time errors
Posted in reply to Paul D Anderson | On Tuesday, 7 July 2015 at 22:19:22 UTC, Paul D Anderson wrote:
> On Sunday, 5 July 2015 at 20:35:03 UTC, Kai Nacke wrote:
>> On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:
>>> On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
>>>> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
>>>
>>>>> [...]
>>>>
>>>> Should be plusTwo(in BigInt n) instead.
>>>>
>>>
>>> Yes, I had aliased BigInt to bigint.
>>>
>>> And I checked and it compiles for me too with Windows m64. That makes it seem more like a bug than a feature.
>>>
>>> I'll open a bug report.
>>>
>>> Paul
>>
>> The point here is that x86 uses an assembler-optimized implementation (std.internal.math.biguintx86) and every other cpu architecture (including x64) uses a D version (std.internal.math.biguintnoasm). Because of the inline assembler, the x86 version is not CTFE-enabled.
>>
>> Regards,
>> Kai
>
> Could we add a version or some other flag that would allow the use of .biguintnoasm with the x86?
>
> Paul
biguintx86 could import biguintnoasm. Every function would need to check for CTFE and if yes then call the noasm function. Should work but requires some effort.
Regards,
Kai
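A minimal sketch of the dispatch Kai describes (the function names and the carry logic below are illustrative stand-ins, not the actual biguintx86/biguintnoasm code):

// Portable D implementation (stands in for the biguintnoasm routine).
uint incrementNoasm(uint[] dest, uint carry)
{
    foreach (ref d; dest)
    {
        d += carry;
        if (d >= carry)      // no overflow, the carry has been absorbed
            return 0;
        carry = 1;           // overflowed, keep propagating
    }
    return carry;
}

// What an asm-optimized routine could look like with a CTFE fallback.
uint incrementAssign(uint[] dest, uint carry)
{
    if (__ctfe)
        return incrementNoasm(dest, carry);  // compile time: plain D only
    // run time: the existing inline-assembler code would go here
    return incrementNoasm(dest, carry);
}

unittest
{
    enum result = {
        uint[] v = [uint.max, 0];
        incrementAssign(v, 1);               // evaluated during CTFE
        return v;
    }();
    assert(result == [0u, 1u]);
}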