6 days ago
On 15/01/2025 11:16 AM, Walter Bright wrote:
> On 1/14/2025 6:34 AM, Richard (Rikki) Andrew Cattermole wrote:
>> Also did you see my email from the day after the meeting?
> 
> No. What was the subject title?

"DFA - a potential strategy?"

Oh and expect Bruce to want to talk about it next local meetup!

6 days ago
On 1/14/2025 12:33 AM, ryuukk_ wrote:
> What the young competition doing?

I implemented halffloat a decade ago. It didn't go into the library then because there was zero interest in it.

6 days ago
On 1/14/2025 2:19 PM, Richard (Rikki) Andrew Cattermole wrote:
> "DFA - a potential strategy?"
> 
> Oh and expect Bruce to want to talk about it next local meetup!
> 

Ok, I found it.
5 days ago
On Tuesday, 14 January 2025 at 22:14:01 UTC, Walter Bright wrote:
> The interesting thing is not the existence of the types. It's how they are implemented. The X86_64 architecture does not support a 16 bit floating point type.

It has had some limited support starting from 2018 with AVX512-FP16, and some newer models of x86 support both FP16 and BF16.

* https://networkbuilders.intel.com/docs/networkbuilders/intel-avx-512-fp16-instruction-set-for-intel-xeon-processor-based-products-technology-guide-1651874188.pdf

* https://stackoverflow.com/questions/49995594/half-precision-floating-point-arithmetic-on-intel-chips

It is also important for AI/ML models, which can run on ARM CPUs, GPUs, and TPUs.
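
For reference, on CPUs without those extensions the compiler or library has to unpack binary16 by hand. A minimal D sketch of that bit-level expansion (illustrative only, not the halffloat.d implementation; `halfToFloat` is a made-up name):

```d
// Minimal sketch: expanding an IEEE 754 binary16 bit pattern to float
// entirely in software, which is what has to happen on CPUs without
// F16C / AVX512-FP16 style instructions.
float halfToFloat(ushort h)
{
    uint sign = (h >> 15) & 1;
    uint exp  = (h >> 10) & 0x1F;
    uint frac =  h        & 0x3FF;

    uint bits;
    if (exp == 0)
    {
        if (frac == 0)
        {
            bits = sign << 31;                                // +/- zero
        }
        else
        {
            // Subnormal half: renormalize into the float format.
            int shift = 0;
            while ((frac & 0x400) == 0) { frac <<= 1; ++shift; }
            frac &= 0x3FF;
            // 113 = 127 (float bias) - 15 (half bias) + 1
            bits = (sign << 31) | ((113 - shift) << 23) | (frac << 13);
        }
    }
    else if (exp == 0x1F)
    {
        bits = (sign << 31) | 0x7F80_0000 | (frac << 13);     // inf / NaN
    }
    else
    {
        bits = (sign << 31) | ((exp - 15 + 127) << 23) | (frac << 13);
    }
    return *cast(float*) &bits;
}

unittest
{
    assert(halfToFloat(0x3C00) == 1.0f);    // 1.0 in binary16
    assert(halfToFloat(0xC000) == -2.0f);   // -2.0 in binary16
}
```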

5 days ago
On Wednesday, 15 January 2025 at 06:42:45 UTC, Sergey wrote:
> On Tuesday, 14 January 2025 at 22:14:01 UTC, Walter Bright wrote:
>> The interesting thing is not the existence of the types. It's how they are implemented. The X86_64 architecture does not support a 16 bit floating point type.
> 
> It has had some limited support starting from 2018 with AVX512-FP16, and some newer models of x86 support both FP16 and BF16.
> 
> It is also important for AI/ML models, which can run on ARM CPUs, GPUs, and TPUs.

The ARM architecture is also evolving (v8.2-A, v9) to better support data parallelism, with both FP16 and BF16 as well as SVE2, but uptake by manufacturers is uneven. Some may be betting that CPU data parallelism is a dead end and that NEON is more than enough.

In the GPU world (dcompute) mini/micro floats are very important for a variety of workloads, AI/ML chief among them.
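
bfloat16 in particular is cheap to handle even without dedicated instructions, since it is just the upper half of a binary32 value. A rough D sketch of that round trip (an illustrative assumption, not dcompute or druntime code):

```d
// bfloat16 keeps the sign, the full 8-bit exponent, and the top 7
// mantissa bits of a binary32 float, so conversion is a truncation /
// zero-extension of the upper 16 bits. Round-to-nearest-even is
// omitted for brevity.
ushort floatToBF16(float f)
{
    uint bits = *cast(uint*) &f;
    return cast(ushort)(bits >> 16);       // drop the low 16 mantissa bits
}

float bf16ToFloat(ushort b)
{
    uint bits = (cast(uint) b) << 16;      // re-extend with zero mantissa bits
    return *cast(float*) &bits;
}

unittest
{
    float x = 3.140625f;                   // exactly representable in bfloat16
    assert(bf16ToFloat(floatToBF16(x)) == x);
}
```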

5 days ago
I didn't know that. Thanks for the info.
5 days ago
On Tuesday, 14 January 2025 at 06:13:04 UTC, Walter Bright wrote:
> _Float16 is new in C23 Appendix H.11-6
>
> https://github.com/dlang/dmd/issues/20516
>
> For the moment, I submitted a PR with a workaround:
>
> https://github.com/dlang/dmd/pull/20699
>
> Amazingly, some years ago I implemented 16 bit floats in D:
>
> https://github.com/DigitalMars/sargon/blob/master/src/sargon/halffloat.d
>
> Would anyone like to put halffloat.d into Druntime and make it our implementation of _Float16?

That is the obvious way for dmd.
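
For illustration, the library-type approach might look roughly like this; a hypothetical sketch rather than the actual sargon halffloat.d, with rounding, subnormals, and NaN propagation glossed over:

```d
// Hypothetical sketch of a druntime-style half float type: store the
// raw binary16 bits and do the arithmetic in float, converting at the
// edges. Names and layout are illustrative only.
struct HalfFloat
{
    ushort bits;

    static HalfFloat fromFloat(float f)
    {
        uint u = *cast(uint*) &f;
        uint sign = (u >> 16) & 0x8000;
        int  e    = cast(int)((u >> 23) & 0xFF) - 127 + 15;
        uint frac = (u >> 13) & 0x3FF;                       // truncate, no rounding

        HalfFloat h;
        if (e <= 0)        h.bits = cast(ushort) sign;               // flush to zero
        else if (e >= 31)  h.bits = cast(ushort)(sign | 0x7C00);     // overflow -> inf
        else               h.bits = cast(ushort)(sign | (e << 10) | frac);
        return h;
    }

    float toFloat() const
    {
        uint sign = (bits & 0x8000u) << 16;
        uint e    = (bits >> 10) & 0x1F;
        uint frac =  bits        & 0x3FF;

        uint u;
        if (e == 0)        u = sign;                                 // zero (subnormals dropped)
        else if (e == 31)  u = sign | 0x7F80_0000 | (frac << 13);    // inf / NaN
        else               u = sign | ((e - 15 + 127) << 23) | (frac << 13);
        return *cast(float*) &u;
    }

    HalfFloat opBinary(string op)(HalfFloat rhs) const
    {
        return fromFloat(mixin("toFloat() " ~ op ~ " rhs.toFloat()"));
    }
}

unittest
{
    auto a = HalfFloat.fromFloat(1.5f);
    auto b = HalfFloat.fromFloat(0.25f);
    assert((a + b).toFloat() == 1.75f);
    assert((a * b).toFloat() == 0.375f);
}
```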