January 13

On Thursday, 13 January 2022 at 16:10:39 UTC, sfp wrote:

> My personal view is that people in science are generally more interested in actually doing science than in playing around with programming trivia.

Yes, this is probably true. My impression is that the physics department tends to favour Python, C++ and, I guess, Fortran. In signal processing it's Matlab, with Python as an up-and-coming alternative.

Maybe GPU-compute support is more relevant for desktop application development than scientific computing, in the context of D.

January 13

On Thursday, 13 January 2022 at 16:31:11 UTC, Tejas wrote:

> On Thursday, 13 January 2022 at 14:24:59 UTC, Bruce Carneal wrote:
>
> > Yes. The language independent work in LLVM in the accelerator area is hugely important for dcompute, essential.
>
> Sorry if this sounds ignorant, but does SPIR-V count for nothing?

SPIR-V is very useful. It is the catalyst and focal point of some of the most important ongoing LLVM accelerator work. Nicholas and I both believe that that work could provide a much more robust intermediate target for dcompute once it hits release status.

> > Gotta surf that wave as we don't have the manpower to go independent. I don't think anybody has that amount of manpower, hence the collaboration/consolidation around LLVM as a back-end for accelerators.
>
> There was a time to try to overthrow C++; that was 10 years ago, when LLVM was hardly relevant and GPGPU computing still wasn't mainstream.

Yes. The "overthrow" of C++ should be a non-goal, IMO, starting yesterday.

> Overthrowing may be hopeless, but I feel we should at least be really competitive with them.

Sure. We need to offer something that is actually better, we just don't need to be perceived as better by everyone in all scenarios. An example: if management is deathly afraid of anything but microscopic incremental development or, more charitably, management weighs the risks of new development very very heavily, then D is unlikely to be given a chance.

> Because it doesn't matter whether we're competing with C++ or not; people will compare us with it, since that's the other choice when people want to write extremely performant GPU code (if they care about ease of setup and productivity and not performance-at-any-cost, Julia and Python have beaten us to it :-( )

Yes. We should evaluate our efforts by comparing (competing) with alternatives where available. D/dcompute is already, for my GPU work at least, much better than CUDA/C++. Concretely: I can achieve equivalent or higher performance more quickly with more readable code than I could formerly with CUDA/C++. There are some things that are trivial in D kernels (like live-in-register/mem-bandwidth-minimized stencil processing) that would require "heroic" effort in CUDA/C++.
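To make the "live-in-register stencil" point concrete, here is a purely illustrative CPU-side sketch (not my production kernel code): D's `static foreach` unrolls the stencil taps at compile time, so each tap can be kept in a register, and the same idiom carries over to dcompute kernels.

```d
import std.stdio : writeln;

enum RADIUS = 2;

void stencil1D(const float[] src, float[] dst,
               const float[2 * RADIUS + 1] weights)
{
    foreach (i; RADIUS .. src.length - RADIUS)
    {
        float acc = 0;
        // Unrolled at compile time: no runtime loop over the taps,
        // so the compiler can keep the accumulator and taps in registers.
        static foreach (t; -RADIUS .. RADIUS + 1)
            acc += weights[t + RADIUS] * src[i + t];
        dst[i] = acc;
    }
}

void main()
{
    auto src = new float[](16);
    auto dst = new float[](16);
    src[] = 1.0f;
    dst[] = 0.0f;
    const float[5] weights = [0.1f, 0.2f, 0.4f, 0.2f, 0.1f];
    stencil1D(src, dst, weights);
    writeln(dst[RADIUS .. $ - RADIUS]); // interior points: ~1.0 (sum of the weights)
}
```

Doing the equivalent in CUDA/C++ means hand-unrolling or fighting template machinery; in D it's a few lines.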

That said, there are definitely things that we could improve in the dcompute/accelerator area, particularly wrt the on-ramp for those new to accelerator programming. But, as you note, D is unlikely to be adopted by the "performance is good enough with existing solutions" crowd in any case. That's fine.

January 13

On Thursday, 13 January 2022 at 18:41:54 UTC, Bruce Carneal wrote:

> Yes. We should evaluate our efforts by comparing (competing) with alternatives where available. D/dcompute is already, for my GPU work at least, much better than CUDA/C++. Concretely: I can achieve equivalent or higher performance more quickly with more readable code than I could formerly with CUDA/C++. There are some things that are trivial in D kernels (like live-in-register/mem-bandwidth-minimized stencil processing) that would require "heroic" effort in CUDA/C++.

Does anyone else know anything about this? Burying it deep in a mailing list post isn't exactly the best way to publicize it. Ironically, I might add, in a discussion about lack of uptake.

January 13
On Thursday, 13 January 2022 at 01:19:07 UTC, H. S. Teoh wrote:
>
> ..... But still, it doesn't have to be as complex as languages like C++ make it seem.  In the above example I literally just added ".parallel" to the code and it Just Worked(tm).
>
>
> T

I wish below would "just work"


// ----
module test;

import std;

@safe
void main()
{
    //int[5] arr = [1, 2, 3, 4, 5]; // nope. won't work with .parallel
    int[] arr = [1, 2, 3, 4, 5]; // has to be dynamic to work with .parallel ??

    int x = 0;

    foreach(n; arr.parallel) // Nope - .parallel is a @system function and cannot be called in @safe
    {
        x += n;
    }

    writeln(x);
}
// -----
January 13

On Thursday, 13 January 2022 at 19:35:28 UTC, bachmeier wrote:

> On Thursday, 13 January 2022 at 18:41:54 UTC, Bruce Carneal wrote:
>
> > Yes. We should evaluate our efforts by comparing (competing) with alternatives where available. D/dcompute is already, for my GPU work at least, much better than CUDA/C++. Concretely: I can achieve equivalent or higher performance more quickly with more readable code than I could formerly with CUDA/C++. There are some things that are trivial in D kernels (like live-in-register/mem-bandwidth-minimized stencil processing) that would require "heroic" effort in CUDA/C++.
>
> Does anyone else know anything about this? Burying it deep in a mailing list post isn't exactly the best way to publicize it. Ironically, I might add, in a discussion about lack of uptake.

I know, right? Ridiculously big opportunity/effort ratio for dlang and near zero awareness...

I usually talk a bit about dcompute at the beerconfs but to date I've only corresponded on the topic with Nicholas, Ethan, and Max (a little).

Ethan might have a sufficiently compelling economic case for promoting dcompute to his company in the relatively near future. Nicholas recently addressed their need for access to the texture hardware and fitting within their workflow, but there may be other requirements... An adoption by a world class game studio would, of course, be very good news, but I think Ethan is slammed (perpetually, and in a mostly good way, I think) so it might be a while.

Before promoting dcompute broadly I believe we should work through the installation/build/deployment procedures and some examples for the "new to accelerators" crowd. It's no big deal as it sits for old hands but first impressions are important and even veteran programmers will appreciate an "it just works" on ramp.

If you're interested I suggest we continue the conversation on dcompute at the next beerconf where we can plot its path to world domination... :-)

January 13
On Thu, Jan 13, 2022 at 08:07:51PM +0000, forkit via Digitalmars-d wrote: [...]
> // ----
> module test;
> 
> import std;
> 
> @safe
> void main()
> {
>     //int[5] arr = [1, 2, 3, 4, 5]; // nope. won't work with .parallel
>     int[] arr = [1, 2, 3, 4, 5]; // has to be dynamic to work with .parallel ??

Just write instead:

	int[5] arr = [1, 2, 3, 4, 5];
	foreach (n; arr[].parallel) ...

In general, whenever something rejects static arrays, inserting `[]` usually fixes it. :-D

I'm not 100% sure why .parallel is @system, but I suspect it's because of potential issues with race conditions, since it does not prevent you from writing to the same local variable from multiple threads. If pointers are updated this way, it could lead to memory corruption problems.
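One race-free way to sidestep the shared-mutation issue entirely is `std.parallelism`'s `taskPool.reduce`: each worker thread accumulates a private partial sum, so the loop body never touches a shared mutable variable. A quick sketch:

```d
import std.parallelism : taskPool;
import std.stdio : writeln;

void main()
{
    int[5] arr = [1, 2, 3, 4, 5];
    // Slice the static array to get a range, then reduce in parallel.
    // Each worker computes a private partial sum; the pool combines them.
    auto sum = taskPool.reduce!"a + b"(arr[]);
    writeln(sum); // 15
}
```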


T

-- 
Long, long ago, the ancient Chinese invented a device that lets them see through walls. It was called the "window".
January 13

On Thursday, 13 January 2022 at 20:38:19 UTC, Bruce Carneal wrote:

> I know, right? Ridiculously big opportunity/effort ratio for dlang and near zero awareness...

If dcompute is here to stay, why not put it in the official documentation for D as an "optional" part of the spec?

I honestly assumed that it was unsupported and close to dead as I had not heard much about it for a long time.

January 13
On Thursday, 13 January 2022 at 20:58:25 UTC, H. S. Teoh wrote:
> [snip]
>
> I'm not 100% sure why .parallel is @system, but I suspect it's because of potential issues with race conditions, since it does not prevent you from writing to the same local variable from multiple threads. If pointers are updated this way, it could lead to memory corruption problems.
>
>
> T

Could it be made @safe when used with const/immutable variables?
January 13

On Thursday, 13 January 2022 at 20:07:51 UTC, forkit wrote:

> On Thursday, 13 January 2022 at 01:19:07 UTC, H. S. Teoh wrote:
>
> > ..... But still, it doesn't have to be as complex as languages like C++ make it seem. In the above example I literally just added ".parallel" to the code and it Just Worked(tm).
> >
> > T
>
> I wish below would "just work"
>
> // ----
> module test;
>
> import std;
>
> @safe
> void main()
> {
>     //int[5] arr = [1, 2, 3, 4, 5]; // nope. won't work with .parallel
>     int[] arr = [1, 2, 3, 4, 5]; // has to be dynamic to work with .parallel ??
>
>     int x = 0;
>
>     foreach(n; arr.parallel) // Nope - .parallel is a @system function and cannot be called in @safe
>     {
>         x += n;
>     }
>
>     writeln(x);
> }
> // -----

```d
import core.atomic : atomicOp;
import std.parallelism : parallel;
import std.stdio : writeln;

// Not @safe, since `parallel` still allows access to non-shared-qualified
// data. See:
// https://github.com/dlang/phobos/blob/v2.098.1/std/parallelism.d#L32-L34
void main()
{
    int[5] arr = [1, 2, 3, 4, 5]; // Yes, static arrays work just fine.

    // `shared` is necessary to safely access data from multiple threads
    shared int x = 0;

    // Most functions in Phobos work with ranges, not containers (by design).
    // To get a range from a static array, simply slice it:
    foreach (n; arr[].parallel)
    {
        // Use atomic ops (or higher-level synchronization primitives) to work
        // with shared data, without data-races:
        x.atomicOp!`+=`(n);
    }

    writeln(x);
}
```

January 13

On Thursday, 13 January 2022 at 21:06:45 UTC, Ola Fosheim Grøstad wrote:

> On Thursday, 13 January 2022 at 20:38:19 UTC, Bruce Carneal wrote:
>
> > I know, right? Ridiculously big opportunity/effort ratio for dlang and near zero awareness...
>
> If dcompute is here to stay, why not put it in the official documentation for D as an "optional" part of the spec?

There are two reasons that I have not promoted dcompute to the general community up to now:

  1. Any resultant increase in support load would fall on one volunteer (that is not me) and

  2. IMO, a better on-ramp, particularly for those new to accelerators, is needed: additional examples, docs, and "it just works" install/build/deploy vetting would go a long way to reducing the support load and increasing happy uptake. Additionally, Nicholas has a list of "TODOs" that probably should be worked through before additional promotion occurs. None of them impact my work but they might hit others.

Nicholas's opinion on the matter is much more important than mine, as he already has a non-D "day job" and would bear the brunt of a possibly premature promotion of dcompute.
