January 18, 2014 Re: GPGPUs
Posted in reply to Russel Winder
On Sunday, 18 August 2013 at 18:35:45 UTC, Russel Winder wrote:
>> https://github.com/Trass3r/cl4d
>
> I had missed that as well. Bad Google and GitHub skills on my part clearly.
>
> I think the path is now obvious, ask if the owner will turn this
> repository over to a group so that it can become the focus of future work via the repositories wiki and issue tracker.
>
> I will fork this repository as is and begin to analyse the status quo wrt the discussion recently on the email list.
Interesting. I discovered this thread via the just-introduced traffic analytics over at GitHub ^^
I haven't touched the code for a long time, but there have been some active forks.
I've been thinking about offering them push rights for quite a while but never got around to it.
January 18, 2014 Re: GPGPUs
Posted in reply to Russel Winder
On Friday, 16 August 2013 at 10:04:22 UTC, Russel Winder wrote:
> So the question I am interested in is whether D is the language that can allow me to express in a single codebase a program in which parts will be executed on one or more GPGPUs and parts on multiple CPUs. D has support for the latter, std.parallelism and std.concurrency.

You can write everything in OpenCL and dispatch to either a CPU or a GPU device, managing the submit queues yourself.

> I guess my question is whether people are interested in std.gpgpu (or some more sane name).

What would be the purpose? To be on top of both CUDA and OpenCL?
January 22, 2014 Re: GPGPUs
Posted in reply to ponce
On Saturday, 18 January 2014 at 19:34:49 UTC, ponce wrote:
> You can write everything in OpenCL and dispatch to either a CPU or a GPU device, managing the submit queues yourself.
>
>> I guess my question is whether people are interested in std.gpgpu (or some more sane name).
>
> What would be the purpose? To be on top of both CUDA and OpenCL?

Compiler support with futures could be useful, e.g. write D futures and let the compiler generate CUDA and OpenCL, while having a fall-back branch for the CPU in case the GPU is unavailable/slow. E.g.:

    GPUStore store;
    store[123] = somearray;
    store[53] = someotherarray;
    FutureCalcX futurecalcx = new ...(store)...;
    futurecalcx.compute(store(123), store(53), 1.34, 299);
    ...
    if (futurecalcx.ready) {
        y = futurecalcx.result;
    }

or a future with a callback:

    futurecalcx.thenCall(somecallback);
    futurecalcx.compute(...);
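The future-with-CPU-fallback idea above can be sketched concretely. The following is a Python stand-in, since the GPUStore/FutureCalcX API in the post is hypothetical D: a future-like wrapper submits the computation to a backend and exposes ready/result, with the GPU branch stubbed out so the CPU fallback runs. The saxpy computation and the availability flag are invented for illustration.

```python
# Illustrative sketch: a future that would dispatch to a GPU backend when one
# is available, and otherwise falls back to a plain CPU implementation.
from concurrent.futures import ThreadPoolExecutor

def cpu_saxpy(a, xs, ys):
    # CPU fallback branch: y = a*x + y, elementwise.
    return [a * x + y for x, y in zip(xs, ys)]

class FutureCalc:
    def __init__(self, gpu_available=False):
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._gpu_available = gpu_available  # hypothetical device probe result
        self._future = None

    def compute(self, a, xs, ys):
        # No GPU backend exists in this sketch, so always take the CPU branch.
        backend = cpu_saxpy
        self._future = self._pool.submit(backend, a, xs, ys)

    @property
    def ready(self):
        return self._future is not None and self._future.done()

    @property
    def result(self):
        # Blocks until the computation finishes.
        return self._future.result()

calc = FutureCalc()
calc.compute(1.34, [1.0, 2.0], [10.0, 20.0])
print(calc.result)
```

A real version would pick the backend in `compute` based on the device probe, which is the "fall-back branch for the CPU" the post describes.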
January 22, 2014 Re: GPGPUs
Posted in reply to Ola Fosheim Grøstad
Ola Fosheim Grøstad:
> Compiler support with futures could be useful, e.g. write D futures and let the compiler generate CUDA and OpenCL, while having a fall-back branch for the CPU in case the GPU is unavailable/slow.

Could be of interest, to ease the porting of C++ code to CUDA:
http://www.alexstjohn.com/WP/2014/01/16/porting-cuda-6-0/

Bye,
bearophile
January 22, 2014 Re: GPGPUs
Posted in reply to bearophile
On Wednesday, 22 January 2014 at 13:56:22 UTC, bearophile wrote:
> Could be of interest, to ease the porting of C++ code to Cuda:
> http://www.alexstjohn.com/WP/2014/01/16/porting-cuda-6-0/
Yeah, GPU programming is probably going to develop faster in the coming years than DMD can keep track of.
I was thinking more about simplicity:
Decorate a function call with some pragma and obtain a pregenerated CUDA or OpenCL string as a property on that function. That way the compiler only needs to generate source code, and the runtime can do the rest. But hide it so well that it makes sense to write generic DMD code this way.
*shrug*
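The pragma-to-kernel-string idea can be sketched as a plain source generator. Here is a Python stand-in (the kernel signature and the tiny expression convention are invented for illustration; in D the same string could be produced at compile time via CTFE and exposed as a property):

```python
# Sketch: turn an elementwise expression into OpenCL C kernel source.
def opencl_kernel(name, expr):
    # expr is written in terms of a[i] and b[i]; the result goes to out[i].
    return (
        f"__kernel void {name}(__global const float* a,\n"
        f"                     __global const float* b,\n"
        f"                     __global float* out) {{\n"
        f"    size_t i = get_global_id(0);\n"
        f"    out[i] = {expr};\n"
        f"}}\n"
    )

saxpy_src = opencl_kernel("axpy", "1.34f * a[i] + b[i]")
print(saxpy_src)
```

The runtime would hand `saxpy_src` to the OpenCL driver for compilation, so the compiler itself only ever emits source text, as the post suggests.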
January 22, 2014 Re: GPGPUs
Posted in reply to Ola Fosheim Grøstad
On Wednesday, 22 January 2014 at 15:21:37 UTC, Ola Fosheim Grøstad wrote:
> source-code and the runtime can do the rest. But hide it so well that it makes sense to write generic DMD code this way.

You might want to generate code for coprocessors too, like

http://www.parallella.org/

Or FPGAs...
Or send it over to a small cluster on Amazon...

Basically, being able to write sensible DMD code for the CPU and then later configure it to ship off isolated computations to whatever computational resources you have available (on- or off-site) would be more interesting than pure GPGPU, which is probably going to be out of date real soon due to shifts in technology.
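The "write it once, configure later where it runs" idea can be made concrete with an availability-ordered backend list. A Python sketch (the backend names and availability checks are made up; a real system would probe drivers, device queues, or remote endpoints):

```python
# Sketch: pick the first available compute backend at run time.
def cpu_backend(data):
    # Always available; runs the computation locally.
    return [x * x for x in data]

# Preference-ordered (name, availability check, runner) triples.
BACKENDS = [
    ("gpu", lambda: False, None),        # pretend no GPU driver is present
    ("cluster", lambda: False, None),    # pretend no cluster is configured
    ("cpu", lambda: True, cpu_backend),  # local fallback, always there
]

def dispatch(data):
    # Walk the preference list and use the first backend that reports ready.
    for name, available, run in BACKENDS:
        if available():
            return name, run(data)
    raise RuntimeError("no compute backend available")

name, result = dispatch([1, 2, 3])
print(name, result)  # cpu [1, 4, 9]
```

Swapping the deployment target then becomes configuration (reordering or enabling entries) rather than a code change, which is the on-/off-site flexibility the post asks for.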
January 23, 2014 Re: GPGPUs
Posted in reply to Ola Fosheim Grøstad
On Wednesday, 22 January 2014 at 15:26:26 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 22 January 2014 at 15:21:37 UTC, Ola Fosheim Grøstad wrote:
>> source-code and the runtime can do the rest. But hide it so well that it makes sense to write generic DMD code this way.
>
> You might want to generate code for coprocessors too, like
>
> http://www.parallella.org/
>
> Or FPGAs...
> Or send it over to a small cluster on Amazon...
>
> Basically being able to write sensible DMD code for the CPU and then later configure it to ship off isolated computations to whatever computational resources you have available (on- or off-site) would be more interesting than pure GPGPU which probably is going to be out of date real soon due to the shifts in technology.
Why not just generate SPIR, HSAIL or PTX code instead?
--
Paulo
January 23, 2014 Re: GPGPUs
Posted in reply to Paulo Pinto
On Thursday, 23 January 2014 at 11:50:19 UTC, Paulo Pinto wrote:
>
> Why not just generate SPIR, HSAIL or PTX code instead ?
>
> --
> Paulo
We advertised an internship at my work to look at using D for GPUs in HPC (I work at the Swiss National Supercomputing Centre, which recently acquired a rather large GPU-based system). We do a lot of C++ meta-programming to generate portable code that works on both CPUs and GPUs. D looks like it could make this much less of a pain (because C++ meta-programming gets very tiresome after a short while). From what I can see, it should be possible to use CTFE and string mixins to generate full OpenCL kernels from straight D code.
One of the main issues we also have with C++ is that our users are intimidated by it, and exposure to the nasty side effects of libraries written with meta-programming does little to convince them otherwise (like the error messages, and the propensity for even the best-designed C++ library to leak excessive amounts of boilerplate and templates into user code). Unfortunately, this is a field where Fortran is still the dominant language.
The LLVM backend supports PTX generation, and Clang has full support for OpenGL. With those tools and some tinkering with the compiler, it might be possible to do some really neat things in D. And increase programmer productivity at the same time. Fortran sets the bar pretty low there!
If anybody knows undergrad or masters students in Europe who would be interested in a fully paid internship to work with D on big computers, get them in touch with us!
January 23, 2014 Re: GPGPUs
Posted in reply to Ben Cumming
On Thursday, 23 January 2014 at 19:34:06 UTC, Ben Cumming wrote:
> The LLVM backend supports PTX generation, and Clang has full support for OpenGL.
I mean OpenCL, not OpenGL.
January 23, 2014 Re: GPGPUs
Posted in reply to Ben Cumming
On 23.01.2014 at 20:34, Ben Cumming wrote:
> On Thursday, 23 January 2014 at 11:50:19 UTC, Paulo Pinto wrote:
>>
>> Why not just generate SPIR, HSAIL or PTX code instead?
>>
>> --
>> Paulo
>
> We advertised an internship at my work to look at using D for GPUs in HPC (I work at the Swiss National Supercomputing Centre, which recently acquired a rather large GPU-based system). We do a lot of C++ meta-programming to generate portable code that works on both CPUs and GPUs. D looks like it could make this much less of a pain (because C++ meta-programming gets very tiresome after a short while). From what I can see, it should be possible to use CTFE and string mixins to generate full OpenCL kernels from straight D code.

I did an internship at CERN during 2003-2004. Lots of interesting C++ being used there as well.

> One of the main issues we also have with C++ is that our users are intimidated by it, and exposure to the nasty side effects of libraries written with meta-programming does little to convince them (like the error messages, and the propensity for even the best-designed C++ library to leak excessive amounts of boilerplate and templates into user code).

I still like C++, but with C++14 and whatever might come in C++17 it might just be too much for any sane developer. :\

> Unfortunately this is a field where Fortran is still the dominant language.

Yep, ATLAS still had lots of it.

> The LLVM backend supports PTX generation, and Clang has full support for OpenGL. With those tools and some tinkering with the compiler, it might be possible to do some really neat things in D. And increase programmer productivity at the same time. Fortran sets the bar pretty low there!
>
> If anybody knows undergrad or masters students in Europe who would be interested in a fully paid internship to work with D on big computers, get them in touch with us!

I'll pass the info around.

--
Paulo
Copyright © 1999-2021 by the D Language Foundation