April 18, 2021

On Wednesday, 18 September 2019 at 18:42:32 UTC, Joseph Rushton Wakeling wrote:

> On Saturday, 4 May 2019 at 19:08:10 UTC, Murilo wrote:
>
>> BigInt takes too long. I need something as fast as the primitive types.
>
> What hardware are you planning on running your programs on? I'm not sure how good a speed you can guarantee without native hardware support.

Hello everyone.
You can already run D code with 64-bit integers on 32-bit architectures, so why should there be any problem providing 128-bit integers on 32-bit architectures, let alone on 64-bit ones?

On 32-bit ARM you would do 128-bit arithmetic like this:

c = a + b  // r0..r3 = a, r4..r7 = b, r8..r11 = c (least significant word first)

ADDS r8, r0, r4   @ lowest words; the S suffix sets the carry flag
ADCS r9, r1, r5
ADCS r10, r2, r6
ADC  r11, r3, r7

c = a - b

SUBS r8, r0, r4   @ sets the carry (borrow) flag
SBCS r9, r1, r5
SBCS r10, r2, r6
SBC  r11, r3, r7
// this works for any multiple of 32-bit arithmetic, and even for any
// multiple of 8-bit arithmetic, since ARM has zero/sign-extend instructions for 8-bit

// negation, logical operations, shifts and rotations are similarly easy to implement for 128-bit
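To illustrate, here is the same idea in portable D, as a minimal sketch assuming a pair-of-ulongs layout (U128 with lo/hi is my naming, not an existing library type):

// A hypothetical 128-bit value stored as two 64-bit words, least significant first.
struct U128 { ulong lo, hi; }

// Two's-complement negation: invert both words, add 1, propagate the carry.
U128 neg(U128 a)
{
    U128 r;
    r.lo = ~a.lo + 1;
    r.hi = ~a.hi + (a.lo == 0 ? 1 : 0);  // the +1 carries into the high word only if a.lo was 0
    return r;
}

// Left shift by 0 <= s < 128, composed of 64-bit shifts.
U128 shl(U128 a, uint s)
{
    if (s == 0) return a;
    U128 r;
    if (s < 64)
    {
        r.hi = (a.hi << s) | (a.lo >> (64 - s));
        r.lo = a.lo << s;
    }
    else
    {
        r.hi = a.lo << (s - 64);
        r.lo = 0;
    }
    return r;
}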

c = a * b   // the result depends on the types of a, b, c; assume ucent here

UMULL r8, r9, r0, r4    @ c[0..64]    = a[0..32]  * b[0..32]
UMULL r10, r11, r0, r6  @ c[64..128]  = a[0..32]  * b[64..96]
UMLAL r10, r11, r1, r5  @ c[64..128] += a[32..64] * b[32..64]
UMLAL r10, r11, r2, r4  @ c[64..128] += a[64..96] * b[0..32]
MLA r11, r0, r7, r11    @ c[96..128] += a[0..32]  * b[96..128]
MLA r11, r1, r6, r11    @ c[96..128] += a[32..64] * b[64..96]
MLA r11, r2, r5, r11    @ c[96..128] += a[64..96] * b[32..64]
MLA r11, r3, r4, r11    @ c[96..128] += a[96..128] * b[0..32]
UMULL r12, lr, r0, r5   @ a[0..32] * b[32..64], using r12/lr as scratch
ADDS r9, r9, r12        @ c[32..64] += low word
ADCS r10, r10, lr       @ c[64..96] += high word, with carry
ADC r11, r11, #0        @ propagate the carry into c[96..128]
UMULL r12, lr, r1, r4   @ a[32..64] * b[0..32]
ADDS r9, r9, r12
ADCS r10, r10, lr
ADC r11, r11, #0

You may even be able to reduce the multiplication code by an instruction or two with a smarter scheme. I almost suggested using godbolt.org to check what assembly code is generated for C# when using 128-bit, but I see that C# is not supported there (D is, though :-) ).
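For comparison, here is the same schoolbook multiplication in portable D, as a sketch reusing the hypothetical U128 from above; mulhi reconstructs the high half of a 64x64-bit product from 32-bit halves, just like the UMULL/UMLAL pairs do in hardware:

// High 64 bits of the full 64x64 -> 128-bit product, via 32-bit halves.
ulong mulhi(ulong a, ulong b)
{
    ulong aL = a & 0xFFFF_FFFF, aH = a >> 32;
    ulong bL = b & 0xFFFF_FFFF, bH = b >> 32;
    ulong t = aL * bL;
    ulong u = aH * bL + (t >> 32);          // cannot overflow
    ulong v = aL * bH + (u & 0xFFFF_FFFF);  // cannot overflow
    return aH * bH + (u >> 32) + (v >> 32);
}

// Low 128 bits of the 128x128-bit product (wrap-around semantics, like ucent).
U128 mul(U128 a, U128 b)
{
    U128 r;
    r.lo = a.lo * b.lo;
    r.hi = mulhi(a.lo, b.lo) + a.lo * b.hi + a.hi * b.lo;
    return r;
}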

Only the division is somewhat more complicated. You would probably invert the divisor and multiply the dividend by it, c = a * (1/b), calculating only the upper 128 bits of the 256-bit product.

If b is 32-bit, then

2^n * 1/b = floor(0xFF..FF / b) + (0xFF..FF % b + 1) * 1/b
<=>  // using x = floor(0xFF..FF / b), y = 0xFF..FF % b + 1
q = a/b = a/2^n * (x + y/2^n * (x + y/2^n * ( ... )))

// for n = 32 (step size), calculate the UQ128 fixed-point q as follows
q = x << 96
q += y * x << 64
q += y² * x << 32
q += y³ * x + (y⁴ * x >> 32) + (y⁵ * x >> 64) ...  // until nothing changes
q = (q * a)[128..256]  // take the upper 128 bits of the 256-bit product

1/b = x/2^n * (1 + y/2^n + (y/2^n)² + (y/2^n)³ + ...)
    = x/2^n * (1 - (y/2^n)^m) / (1 - y/2^n)   // after m terms

// q = r0..r3
UDIV(x, 0xFF..FF, b)    // x = floor(0xFF..FF / b)
MLS(y, x, b, 0xFF..FF)  // y = 0xFF..FF - x * b = 0xFF..FF % b
ADD(y, y, 1)
...  // multiplications and additions

This algorithm is simple but has a worst-case execution time of O(n), where n is the bit length, and that means a lot of multiplications. The result of 1/b is not perfectly accurate, since it uses 0.FF..FF rather than 1.0 as the dividend, and the larger y is, the slower the algorithm converges. But looking at the worst case, I noticed some optimization potential:

q = x * ((1 << 32) + y) * ((1 << 64) + (y *= y))  // 32-bit x 32-bit x 64-bit
q += (q * (y *= y)) >> 128  // upper 128 bits of a 128-bit x 128-bit product
q += (q * (y *= y)) >> 256  // upper 128 bits of a 128-bit x 256-bit product
...  // another 3 times in the worst case
// the algorithm can stop early as soon as the shifted y term becomes 0
q = (q * a)[128..256]

I basically (re)discovered Goldschmidt division. The individual multiplications by x and y can be optimized, because register-sized parts of x and y can be entirely zero, so in the best case a multiplication needs only a 32x32-bit multiply.
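To make the digit doubling concrete, here is a small D sketch of the reciprocal idea in a 64-by-32-bit setting (my own illustrative code, reusing mulhi from the sketch above; for 128 bits the multiplications become multi-word):

// Unsigned 64-by-32 division via a fixed-point reciprocal; requires b != 0.
ulong divByReciprocal(ulong a, uint b)
{
    ulong x = ulong.max / b;      // floor(0xFF..FF / b)
    ulong y = ulong.max % b + 1;  // so 2^64 = b*x + y with 1 <= y <= b
    // One refinement step: q = x + x*y/2^64. Since y <= 2^32, the next
    // term x*y^2/2^128 already contributes less than one unit.
    ulong q = x + mulhi(x, y);    // q is floor(2^64 / b) or slightly below
    ulong quot = mulhi(a, q);     // ~ a/b, possibly a little too small
    while (a - quot * b >= b)     // final correction, at most a few steps
        ++quot;
    return quot;
}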

For divisors wider than 32 bits, one can try the same trick on the highest non-zero word of the divisor.

b'  // the highest non-zero 32-bit word of the 128-bit integer b
x = floor(0xFF..FF / b') << ((3 - n)*32)  // n is the least-significant-first word index of b' in b
y = 0xFF..FF % b' + 1  // y can now be up to 127 bits large in the first step!

q = x  // x is a ucent; three of its four words are zero
q += (q * y) >> 128  // upper 128 bits of a 128x128 multiplication
q += (q * (y *= y)) >> 256  // upper 128 bits of a 128x256 multiplication
...

The only difference is that the first value of y can already be quite large, and x is already 128 bits wide with three of its words zero.

Dividing a 128-bit value by a constant is very easy: just multiply by the precomputed inverse of the constant and take the upper 128 bits of the result.
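This is exactly what compilers already do for the ordinary integer sizes. For example, an unsigned 64-bit division by 10 becomes a multiply (a D sketch reusing mulhi from above; the magic constant is ceil(2^67 / 10)):

// x / 10 without a division instruction: multiply by the precomputed
// reciprocal ceil(2^67 / 10) = 0xCCCC_CCCC_CCCC_CCCD, keep the high
// 64 bits of the product, then shift the remaining 3 fraction bits away.
ulong div10(ulong x)
{
    return mulhi(x, 0xCCCC_CCCC_CCCC_CCCD) >> 3;
}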

The only reason I see for not implementing it is low priority: calculation with reals is sufficient in most cases and faster (at least the register pressure goes down significantly). The only disadvantage of reals is their limited precision. Except for extremely high-precision applications and cryptography-related things, I don't know of a need for 128-bit integers (and there is only a small performance gain from using 128-bit registers for parallel operations). Oftentimes you can replace 128-bit arithmetic in cryptography and multimedia with SIMD instructions, which is what processors provide for parallel arithmetic.

We should also be aware of what 128 bits actually means! You can already store any timestamp you are ever likely to need in an 80-bit integer (roughly femtosecond resolution over a year; about 90 bits would give you femtoseconds over a millennium). 128-bit numbers can store integers up to about 3.4·10^38 (2^128)! This is said by physicists to be far more than the number of atoms in our universe, so most of the values which can be stored in 128 bits are not even meaningful counts anymore.

Probably people are waiting for 128-bit architectures, but hm...
You can translate the code above into a template for your assembly language of choice (like AMD64) and there you go.

April 27, 2021

On Sunday, 18 April 2021 at 23:31:12 UTC, Elmar wrote:

> On Wednesday, 18 September 2019 at 18:42:32 UTC, Joseph Rushton Wakeling wrote:
>
>> On Saturday, 4 May 2019 at 19:08:10 UTC, Murilo wrote:
>>
>>> BigInt takes too long. I need something as fast as the primitive types.
>>
>> What hardware are you planning on running your programs on? I'm not sure how good a speed you can guarantee without native hardware support.
>
> Hello everyone.
> ...

I'm sorry for the double post; I had to correct something but don't know how to edit posts.

I was wrong about the number of atoms in the universe. 2^128 is about 10^38.5, which is roughly the square root of the commonly assumed number of atoms in the universe (around 10^80). But 128-bit numbers are still extremely large as count values, exceeding any everyday numbers and any common numbers in electronics or software. You rarely need 128-bit numbers as single count values (in practice 128-bit registers are instead used for SIMD, e.g. over 16-bit sensor samples); probably you could fit any counting value that can practically occur in our universe into a 256-bit number.

And for the division: the idea is based on the observation that you can calculate the division to arbitrary precision via a recursive formula, using the floor and the remainder. Applying the recursion until no result bit (within the limited precision) changes anymore has bad worst-case performance (one big multiplication per result bit).

But looking at the pattern behind the multiplications (the recursion) needed to calculate the inverse, it's easy to see that the multiplicative inverse is just 1/b = x * 0.111111..._(2^n/y) = 0.1111..._(b+1), where 0.1111... (equal to 1 / (r - 1) in any number system with base r > 1) is a periodic number (or polynomial) with radix 2^n/y (generally a rational number), x = floor(2^n / b) and y = 2^n % b. (Base-10 analogue: 0.111..._8 = 1/7, since the all-ones fraction in base r equals 1/(r-1).) Numbers are just used here as a convenient representation of polynomials.

While my first division approach calculates this specific number one digit per cycle, the efficient approach afterwards doubles the number of calculated 1-digits in each cycle. Anyone who can calculate more digits per cycle at the same cost would become famous.

April 28, 2021

On Tuesday, 27 April 2021 at 18:00:09 UTC, Elmar wrote:

> 128-bit numbers are still extremely large as count values, exceeding any everyday numbers and any common numbers in electronics or software. You rarely need 128-bit numbers as single count values

Yes, if you need ucent/cent you most likely need only the type, plus a small subset of the operations allowed on integral types. For example, for a wide bitfield: only &, |, ^, <<, >>, ~, and =.

The required subset is hypothetically always small enough that people just write their own struct, with alias this and a few operator overloads.
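For illustration, such a struct can be quite small. A minimal sketch (hypothetical names, only the bitwise subset; the shifts would cross words like in the earlier shl sketch):

// A 128-bit bitfield with only the operations a bitfield needs.
struct Bits128
{
    ulong lo, hi;

    // &, | and ^ apply word-wise; assignment comes for free.
    Bits128 opBinary(string op)(Bits128 rhs) const
        if (op == "&" || op == "|" || op == "^")
    {
        return Bits128(mixin("lo " ~ op ~ " rhs.lo"),
                       mixin("hi " ~ op ~ " rhs.hi"));
    }

    Bits128 opUnary(string op : "~")() const
    {
        return Bits128(~lo, ~hi);
    }
}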

April 29, 2021

On Wednesday, 28 April 2021 at 09:21:54 UTC, user1234 wrote:

> On Tuesday, 27 April 2021 at 18:00:09 UTC, Elmar wrote:
>
>> 128-bit numbers are still extremely large as count values, exceeding any everyday numbers and any common numbers in electronics or software. You rarely need 128-bit numbers as single count values
>
> Yes, if you need ucent/cent you most likely need only the type, plus a small subset of the operations allowed on integral types. For example, for a wide bitfield: only &, |, ^, <<, >>, ~, and =.
>
> The required subset is hypothetically always small enough that people just write their own struct, with alias this and a few operator overloads.

We just wrote an Int128 implementation which uses GCC or LDC's __int128_t built-in data type.

Pros: It's fast
Cons: It does not work at compile time.
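For illustration, one way such a bridge can look (a sketch with hypothetical names, not our actual library: the helper lives in a C file compiled by GCC or Clang, and the D struct mirrors the 16-byte layout):

/* C side, compiled separately:
   void int128_add(const void* a, const void* b, void* r)
   { *(__int128*)r = *(const __int128*)a + *(const __int128*)b; }
*/
extern(C) void int128_add(const(void)* a, const(void)* b, void* r);

struct Int128
{
    ulong lo; long hi;  // little-endian two-word layout of __int128

    Int128 opBinary(string op : "+")(Int128 rhs) const
    {
        Int128 r;
        int128_add(&this, &rhs, &r);
        return r;
    }
}

Since CTFE cannot call into C, this is also where the compile-time limitation comes from.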

If there's interest, I can share it once we complete writing it. Currently it's inside an internal library, but it can be extracted out.

That said, a D implementation of cent and ucent would be really good to have.

Saurabh

April 29, 2021

On Sunday, 18 April 2021 at 23:31:12 UTC, Elmar wrote:

> The only reason I see for not implementing it is low priority: calculation with reals is sufficient in most cases and faster (at least the register pressure goes down significantly). The only disadvantage of reals is their limited precision. Except for extremely high-precision applications and cryptography-related things, I don't know of a need for 128-bit integers (and there is only a small performance gain from using 128-bit registers for parallel operations). Oftentimes you can replace 128-bit arithmetic in cryptography and multimedia with SIMD instructions, which is what processors provide for parallel arithmetic.

One big use case for 128-bit integers is working with cryptocurrencies. These do not fit into long/ulong if any multiplication is performed. D is very well suited to capturing this niche. A D implementation of cent/ucent would make it much more approachable.
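As a quick sanity check of why 64 bits run out (illustrative numbers only, using Phobos's BigInt just to show the magnitude):

import std.bigint : BigInt;

void main()
{
    // Token amounts are commonly fixed-point with 18 decimals,
    // so 1.0 token = 10^18 base units; ulong.max is only about 1.8 * 10^19.
    ulong oneToken = 10UL ^^ 18;

    // A single amount * price multiplication already needs ~120 bits:
    auto product = BigInt(oneToken) * oneToken;  // 10^36
    assert(product > BigInt(ulong.max));         // does not fit in 64 bits
}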

May 03, 2021

On Thursday, 29 April 2021 at 04:38:52 UTC, Saurabh Das wrote:

> On Wednesday, 28 April 2021 at 09:21:54 UTC, user1234 wrote:
>
>> On Tuesday, 27 April 2021 at 18:00:09 UTC, Elmar wrote:
>>
>>> [...]
>>
>> Yes, if you need ucent/cent you most likely need only the type, plus a small subset of the operations allowed on integral types. For example, for a wide bitfield: only &, |, ^, <<, >>, ~, and =.
>>
>> The required subset is hypothetically always small enough that people just write their own struct, with alias this and a few operator overloads.
>
> We just wrote an Int128 implementation which uses GCC or LDC's __int128_t built-in data type.
>
> Pros: It's fast
> Cons: It does not work at compile time.

Good point. The 128-bit int type I use actually requires its constants to be static immutable... a shared static this() could however be used to set the pseudo-enums of this type.
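Something along these lines (a sketch; the Int128 internals are stand-ins):

struct Int128
{
    ulong lo; long hi;
    // imagine this calls into the runtime-only backend,
    // so it cannot run at compile time
    this(long v) { lo = cast(ulong) v; hi = v < 0 ? -1 : 0; }
}

// pseudo-enums: immutable globals instead of enum members
static immutable Int128 zero, one;

// a module constructor may initialize immutable globals at start-up
shared static this()
{
    zero = Int128(0);
    one  = Int128(1);
}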

> If there's interest, I can share it once we complete writing it. Currently it's inside an internal library, but it can be extracted out.

Yeah, just publish the DUB package.

> That said, a D implementation of cent and ucent would be really good to have.
>
> Saurabh

December 24, 2021
On 5/3/2019 9:28 PM, Murilo wrote:
> Hi guys, I am writing a program that will use very big numbers; when will you guys implement cent and ucent?

If someone wants to pick up the flag for this, the way to do it is:

    struct Cent
    {
        long lsl, msl;
    }

followed by a function for each operator:

    Cent add(Cent op1, Cent op2) { ... }
    ...

The compiler can then access these as builtin functions to implement cent in the compiler.
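For example, add might look like this (a sketch; it treats lsl as the unsigned low word and lets the high word wrap):

    Cent add(Cent op1, Cent op2)
    {
        Cent r;
        r.lsl = op1.lsl + op2.lsl;  // low 64 bits, wrapping
        // carry out of the low word: unsigned compare of the wrapped sum
        long carry = (cast(ulong) r.lsl < cast(ulong) op1.lsl) ? 1 : 0;
        r.msl = op1.msl + op2.msl + carry;  // high 64 bits
        return r;
    }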

Who wants to do it?
December 25, 2021
On Friday, 24 December 2021 at 23:52:44 UTC, Walter Bright wrote:
> On 5/3/2019 9:28 PM, Murilo wrote:
>> Hi guys, I am writing a program that will use very big numbers; when will you guys implement cent and ucent?
>
> If someone wants to pick up the flag for this, the way to do it is:
>
>     struct Cent
>     {
>         long lsl, msl;
>     }
>
> followed by a function for each operator:
>
>     Cent add(Cent op1, Cent op2) { ... }
>     ...
>
> The compiler can then access these as builtin functions to implement cent in the compiler.
>
> Who wants to do it?

I didn't know what ``cent`` was; then someone on IRC mentioned ``big int``.

I think that is a piece of baggage from C that is not worth keeping for the long term.

If we were to add a new type, why not start fresh and add some consistency? I propose doing what C++/Rust/Swift/Zig and other languages did: prefix with integer/unsigned, followed by the size.

So we know exactly what type we are talking about, on top of its size!

I personally think this is worth pursuing for the long term; some consistency, with some identity!

We could start with just some basic aliases in ``object.d``, then eventually make them officially supported so targets like -betterC could have them, or maybe directly supporting them is the play!

What do you guys think?

December 24, 2021
On 12/24/2021 4:35 PM, ryuukk_ wrote:
> I didn't know what ``cent`` was, then someone in the IRC mentioned ``big int``

`cent` is a 128-bit int. Cent for "100".

> I think that is one of the baggage from C that it not worth keeping for the long term

It doesn't come from C.


> If we were to add a new type, why not start fresh and add some consistency? I propose doing what C++/Rust/Swift/Zig and other languages did: prefix with integer/unsigned, followed by the size.

The size postfix is fine for the first week of learning the language; then it becomes annoying:

1. visual noise
2. takes many words to speak it
3. touch-typing doesn't work well with it
4. takes longer to type it

The size postfixes came about (I suspect) because C sizes are indeterminate. They are determinate in D, a very different situation.

D is consistent in that all unsigned integers have a `u` prefix. Much more concise than `unsigned `.
December 25, 2021

On Saturday, 25 December 2021 at 04:17:11 UTC, Walter Bright wrote:

> [...] prefix. Much more concise than `unsigned`.

C++'s ... is much more concise than iterating over arrays.