June 17, 2021
On Thursday, 17 June 2021 at 05:48:57 UTC, Walter Bright wrote:
> On 6/16/2021 2:38 PM, max haughton wrote:
>> Perhaps we can start by ditching the schedulers for Pentium chips from the '90s. Even charitably (the P6 architecture did live for a while), it's obsolete, and more importantly not needed for supporting those targets. That way we gain experience pressing delete rather than making the code dead and leaving it, and we reduce the surface area of old/dead code that could be silently broken as other things change around it.
>
> At one point, Intel did release a low power 32 bit chip for embedded systems that benefited quite a bit from the Pentium scheduler, as that chip had sacrificed its own internal scheduler.
>
> Besides, the bugs were sorted out of that scheduler long ago. It's not impairing anyone.

Compile times?

OT: I've been reading chunks of the GCC instruction schedulers recently, and I can report that they'd be much more readable (and safer, obviously) in D. The actual approach taken for the OoO monster CPUs we have now (new Apple chips have *16* execution units) isn't totally dissimilar to the code in dmd, just more general: it's effectively an in-order scheduler aimed specifically at the decoder, but the state machine can be generated from the machine description files rather than written ad hoc in code.
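For flavor, here's a toy version of the table-driven issue idea in D. Everything below (the opcode classes, the unit masks, the greedy claim policy) is invented for illustration; real schedulers track far more state:

```d
// Toy table-driven in-order issue model: each opcode class maps to a
// bitmask of execution units that can accept it, and we issue greedily
// each cycle until a structural hazard stalls the front end.
enum Unit : uint { alu0 = 1, alu1 = 2, load = 4, store = 8 }

// Per-opcode-class table of acceptable units (hypothetical classes).
immutable uint[3] unitsFor = [
    Unit.alu0 | Unit.alu1, // class 0: simple ALU op
    Unit.load,             // class 1: load
    Unit.store,            // class 2: store
];

// Issue as many instructions as free units allow this cycle;
// returns how many were issued before the first stall.
size_t issueCycle(const uint[] opClasses)
{
    uint busy = 0;
    size_t issued = 0;
    foreach (cls; opClasses)
    {
        uint free = unitsFor[cls] & ~busy;
        if (free == 0)
            break;            // stall: no unit can take this op
        busy |= free & -free; // claim the lowest free unit
        ++issued;
    }
    return issued;
}
```

Two ALU ops plus a load issue together (three units claimed), while two back-to-back loads stall after the first, since only one load unit exists in this toy model.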
June 17, 2021
On Wednesday, 16 June 2021 at 17:59:06 UTC, Andrei Alexandrescu wrote:
> By numerous reports the world is ditching 32 bit for good:
>
> https://www.androidauthority.com/arm-32-vs-64-bit-explained-1232065/
>
> There's significant effort needed in the dmd source code to accommodate 32 bit builds, which nobody should use.
>
> Ditch?

I would not ditch.

32-bit is still useful on embedded devices. Also, some applications do not necessarily need to address 4GB of memory.

I read the WebAssembly FAQ, and it has some interesting things to say on the subject, by the way. See the sections "Why have wasm32 and wasm64, instead of just an abstract size_t?" and "Why have wasm32 and wasm64, instead of just using 8 bytes for storing pointers?".

https://webassembly.org/docs/faq/#will-webassembly-support-view-source-on-the-web
June 17, 2021
On 6/16/2021 11:33 PM, max haughton wrote:
> Compile times?

Perhaps, but it only runs on optimized builds.

> OT: I've been reading chunks of the GCC instruction schedulers recently, and I can report that they'd be much more readable (and safer, obviously) in D. The actual approach taken for the OoO monster CPUs we have now (new Apple chips have *16* execution units) isn't totally dissimilar to the code in dmd, just more general: it's effectively an in-order scheduler aimed specifically at the decoder, but the state machine can be generated from the machine description files rather than written ad hoc in code.

The DMD scheduler is mostly table driven.

The tedium with it is creating the tables. It's not fun, mainly because there are *so many* instructions and the probability of error is high.

Did the gcc compiler guys write the machine description files too? I suppose that's an easier way, as one could design a specialized language for it that's convenient. After all, for a compiler guy, writing a specialized table language is a piece of cake :-)

BTW, many of the old handmade tables in the DMD backend were replaced with CTFE-generated tables. A really nice improvement.
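To make the CTFE-table idea concrete for readers following along, here's a minimal sketch. The flag names, opcode ranges, and classification logic are invented for illustration; dmd's real tables encode far more per-opcode information:

```d
// Minimal sketch of replacing a handmade table with a CTFE-generated one.
enum uint FLOAD = 1, FSTORE = 2;

// The whole 256-entry table is computed at compile time by running this
// ordinary function literal through CTFE; no hand-maintained array
// literal to keep in sync by eye.
immutable uint[256] opFlags = () {
    uint[256] tab;
    foreach (op; 0 .. 256)
    {
        if (op >= 0x88 && op <= 0x8B) // hypothetical "MOV-like" range
            tab[op] |= FLOAD;
        if (op == 0xC6 || op == 0xC7) // hypothetical "store immediate"
            tab[op] |= FSTORE;
    }
    return tab;
}();

// Mistakes are caught at compile time, not at runtime in the backend.
static assert(opFlags[0x89] & FLOAD);
static assert(opFlags[0xC7] & FSTORE);
```

The payoff is that the generator expresses the *rule* ("opcodes in this range are loads") rather than 256 hand-typed entries, which is exactly where transcription errors creep in.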
June 17, 2021
On Wednesday, 16 June 2021 at 17:59:06 UTC, Andrei Alexandrescu wrote:
> By numerous reports the world is ditching 32 bit for good:
>
> https://www.androidauthority.com/arm-32-vs-64-bit-explained-1232065/
>
> There's significant effort needed in the dmd source code to accommodate 32 bit builds, which nobody should use.
>
> Ditch?

Could this wait a few years instead?

Between 10% and 20% of our user base still needs 32-bit software, for network-effect reasons. Of course, if the D core team wants to forgo one architecture, we'll just use an older compiler for that target.
June 17, 2021
On Thursday, 17 June 2021 at 10:44:36 UTC, Guillaume Piolat wrote:
> Between 10% and 20% of our user base still needs 32-bit software, for network-effect reasons. Of course, if the D core team wants to forgo one architecture, we'll just use an older compiler for that target.

I think this is only about the dmd backend?

Surely ldc/gdc must support 32 bits "forever" in order to support embedded CPUs?


June 17, 2021
On Thursday, 17 June 2021 at 11:47:08 UTC, Claude wrote:
>
> I would not ditch.
>
> 32-bit is still useful on embedded devices. Also, some applications do not necessarily need to address 4GB of memory.

Yes, there might still be 32-bit x86 systems, and just because of this I say we should ditch dmd. I knew this was coming: that the dmd backend would be gradually abandoned for lack of resources. Also, GDC/LDC offer a range of x86 microarchitectures to optimize for, which is out of scope for dmd. This is particularly interesting for embedded systems, which often use odd x86 CPUs. The same will eventually happen to x86-64 as well, and I'm starting the timer.

Instead, D should take advantage of the CPU support that comes with GDC/LDC and the infrastructure around them.