On Thursday, 10 November 2022 at 23:15:24 UTC, H. S. Teoh wrote:

> According to the Wikipedia page on multinomial distribution (linked by
> Timon), it states that the variance of X_i for n rolls of a k-sided dice
> (with probability p_i), where i is a specific outcome, is:
>
> Var(X_i) = n*p_i*(1 - p_i)
>
> Don't really understand where this formula came from (as I said, that page is way above my head), but we can make use of it.
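For context, that formula is just the variance of a binomial distribution: if you only look at a single face i and ignore the others, each of the n rolls either shows face i (probability p_i) or doesn't, so the count X_i on its own is binomially distributed:

```latex
X_i \sim \mathrm{Binomial}(n, p_i)
\quad\Rightarrow\quad
\mathbb{E}[X_i] = n\,p_i,
\qquad
\operatorname{Var}(X_i) = n\,p_i\,(1 - p_i)
```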

This is where things take a wrong turn. In reality you need more than just a matching mean and variance to correctly simulate an arbitrary probability distribution: https://en.wikipedia.org/wiki/Moment_(mathematics)

Every n-th moment needs to be correct too. Some of these moments have special names (n=1 mean, n=2 variance, n=3 skewness, n=4 kurtosis, ...). If you only take care of the mean and variance when simulating a random distribution, it's somewhat similar to approximating sin(x) by only the first few terms of its Taylor series: sin(x) ≈ x - x^3/3!.
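And in the multinomial case the per-face counts aren't even independent: the k counts must sum to n, which forces a negative covariance between any two faces. Sampling each face separately with only the right mean and variance misses this entirely:

```latex
\operatorname{Cov}(X_i, X_j) = -n\,p_i\,p_j \quad (i \neq j),
\qquad
\sum_{i=1}^{k} X_i = n .
```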

I wonder what the reason is for not using the mir-random library, as suggested in the earlier comments? Do you want to avoid having an extra dependency?

```d
/+dub.sdl:
dependency "mir-random" version="~>2.2.19"
+/
import std, mir.random.engine, mir.random.ndvariable;

// Draws N dice rolls at once by sampling the multinomial
// distribution directly, instead of rolling one die at a time.
uint[k] diceDistrib(uint k)(uint N)
in(k > 0)
in(N > 0)
out(r; r[].sum == N)
{
    uint[k] result;
    double[k] p;
    p[] = 1.0 / k;                   // fair die: each face has probability 1/k
    auto rv = multinomialVar(N, p);  // multinomial random variable for N trials
    rv(rne, result[]);               // sample with the default thread-local engine
    return result;
}

void main()
{
    writeln(diceDistrib!6(100_000_000));
}
```