June 14, 2017
On 31 May 2017 at 05:32, H. S. Teoh via Digitalmars-d-announce < digitalmars-d-announce@puremagic.com> wrote:

> On Tue, May 30, 2017 at 07:23:42PM +0000, Jack Stouffer via Digitalmars-d-announce wrote:
> > On Tuesday, 30 May 2017 at 18:06:56 UTC, Walter Bright wrote:
> > > I fear the conversation will go like this, like it has for me:
> > >
> > >  N: DCompute
> > >  W: What's DCompute?
> > >  N: Enables GPU programming with D
> > >  W: Cool!
> > >
> > > instead of:
> > >
> > >  N: D-GPU
> > >  W: Cool! I can use D to program GPUs!
> >
> > This was literally what happened to me when I saw the headline.
>
> I confess the first conversation was also my reaction when I saw the name "DCompute".  I thought, "oh, this is some kind of scientific computation library, right? That comes with a set of standard numerical algorithms?".  Programming GPUs did not occur to me at all.
>

I'm becoming suspicious that people who don't interact with this technology
just don't know the terminology deployed in the field.
I think this is natural, and part of learning anything new.
But if it's not possible to pick terminology that is intuitive to both
parties, *surely* the first priority should be not to confuse the
users/consumers of the technology with industry-non-standard terminology?
Users who are unfamiliar have already demonstrated that they likely have no
specific interest in the field (or they'd at least be aware of the
conventional terminology), so why cater to that crowd at the expense of the
actual users?


June 14, 2017
On 1 June 2017 at 09:28, Nicholas Wilson via Digitalmars-d-announce < digitalmars-d-announce@puremagic.com> wrote:

> On Wednesday, 31 May 2017 at 22:15:33 UTC, Wulfklaue wrote:
>
>>
>> And so what if people start a big discussion about the name? If only 10% of those people come to the D site from either language, it's an instant success. Positive or negative marketing is a win-win in this case.
>>
>
> For this discussion that is not the case: I haven't seen any new names. So (everybody) please stop derailing this thread.


Oops, sorry! Just caught up >_<


June 16, 2017
On Wednesday, 14 June 2017 at 05:43:01 UTC, Manu wrote:
> See, I would have a very different conversation:
>
>  N: DCompute
>  M: Awesome, I've been waiting!
>
> instead of:
>
>  N: D-GPU
>  M: What's that... is it, like, a rendering library?
>  N: No, it's a 'compute' library.
>  M: Ohhh, awesome! I've been waiting!
>
> ;)
Also "D-GPU" implies that it's only for GPUs. Granted, there's a lot of similarities (early 3D accelerators in arcade cabinets were often built from multiple DSPs, GPUs always have VLIW and FMA capabilities, FPGAs can function as either as DSP or GPU with limitations, etc), but we still need some distinction. Even CPUs have OpenCL capabilities (my good old Athlon 64 x2 supports up to 1.1).

Other than that, I'm planning to use DCompute to implement a "GPU blitter" (as I couldn't find any hardware acceleration API for raster graphics besides the long-obsolete DirectDraw), but I'm also thinking about whether I could implement a physical modelling audio engine that would compute reverberation in real time for a virtual room, enabling it to be used in games (game audio hasn't advanced as much as graphics, unfortunately).
June 16, 2017
On Friday, 16 June 2017 at 01:19:56 UTC, solidstate1991 wrote:
> Other than that, I'm planning to use DCompute to implement a "GPU blitter" (as I couldn't find any hardware acceleration API for raster graphics besides the long-obsolete DirectDraw), but I'm also thinking about whether I could implement a physical modelling audio engine that would compute reverberation in real time for a virtual room, enabling it to be used in games (game audio hasn't advanced as much as graphics, unfortunately).

Sounds cool! (BTW are you ZILtoid1991 on GitHub?)

Do let me know what features (support for images, etc.) you need for the projects you are working on. There is a lot to be done, but most of the features I am looking to add can be developed independently. Of course, collaboration will accelerate development.

Also, I am looking for people who would be interested in talking at IWOCL, the International Workshop on OpenCL, in Edinburgh, May 15-17 next year, about the user experience of dcompute (productivity, optimisation, debugging, API intuition/ease of use), if anyone has interesting projects they have made progress on.

June 19, 2017
On Monday, 29 May 2017 at 09:33:05 UTC, Nicholas Wilson wrote:
> Hi all,
>
> I'm happy to announce that the dcompute modifications to LDC are now in the master branch of LDC. The dcompute extensions require LLVM 3.9.1 or greater for NVPTX/CUDA and my fork[1] of LLVM for SPIRV.
>
> Someone (sorry, I've forgotten who!) at DConf said they'd make a Docker image of the dependencies (LDC, LLVM); if you're reading, please let me know! Or if someone else wants to do it, that's good too.
>
> I'm still quite busy until July (honours thesis), but if anyone wants to contribute to the runtime stuff [2] (all D), LDC [3], or LLVM [1] (mostly C++), I'm happy to answer any questions; providing testing and performance feedback on diverse systems is also appreciated. Feel free to drop a line at https://gitter.im/libmir/public
>
> [1]: https://github.com/thewilsonator/llvm/tree/compute
> [2]: https://github.com/libmir/dcompute
> [3]: https://github.com/ldc-developers/ldc

Hi,

I would like to know:
 - what can we do with the library named "dcompute"?
Does this mean that I can write a piece of code in D and execute it on the GPU?

 - where can I find some examples?

Thanks
June 19, 2017
On Monday, 19 June 2017 at 08:24:09 UTC, bioinfornatics wrote:
> On Monday, 29 May 2017 at 09:33:05 UTC, Nicholas Wilson wrote:
>> Hi all,
>>
>> I'm happy to announce that the dcompute modifications to LDC are now in the master branch of LDC. The dcompute extensions require LLVM 3.9.1 or greater for NVPTX/CUDA and my fork[1] of LLVM for SPIRV.
>>
>> Someone (sorry, I've forgotten who!) at DConf said they'd make a Docker image of the dependencies (LDC, LLVM); if you're reading, please let me know! Or if someone else wants to do it, that's good too.
>>
>> I'm still quite busy until July (honours thesis), but if anyone wants to contribute to the runtime stuff [2] (all D), LDC [3], or LLVM [1] (mostly C++), I'm happy to answer any questions; providing testing and performance feedback on diverse systems is also appreciated. Feel free to drop a line at https://gitter.im/libmir/public
>>
>> [1]: https://github.com/thewilsonator/llvm/tree/compute
>> [2]: https://github.com/libmir/dcompute
>> [3]: https://github.com/ldc-developers/ldc
>
> Hi,
>
> I would like to know:
>  - what can we do with the library named "dcompute"?

The library enables you to launch kernels written with the accompanying compiler extensions (the focus of this announcement). It also provides the intrinsics needed to write the kernels.
> Does this mean that I can write a piece of code in D and execute it on the GPU?

Yes, with some restrictions: recursion is prohibited, as are classes, exceptions, the keyword 'synchronized', global variables (for now), and probably some others that I'm forgetting.
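
To give a feel for what that looks like, here is a minimal sketch of a kernel in the style of the wiki examples (treat the exact module and attribute names, ldc.dcompute, @compute, @kernel, GlobalPointer and GlobalIndex, as illustrative; they may shift as development continues):

@compute(CompileFor.deviceOnly) module example;

import ldc.dcompute;
import dcompute.std.index;

// Scale x by alpha and add y, one element per work item / thread.
@kernel void saxpy(GlobalPointer!float res,
                   GlobalPointer!float x,
                   GlobalPointer!float y,
                   float alpha,
                   size_t n)
{
    auto i = GlobalIndex.x;   // global linear index of this invocation
    if (i >= n) return;
    res[i] = alpha * x[i] + y[i];
}

The same kernel source can then be compiled for both SPIR-V (OpenCL) and NVPTX (CUDA) targets and launched from ordinary host D code through the library.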
>
>  - where can I find some examples?

There are some examples on the wiki (https://github.com/libmir/dcompute/wiki), although they are likely incomplete and slightly out of date. I will be updating and greatly improving them as development progresses (continuing about halfway through July).

If you have any questions feel free to ask them on https://gitter.im/libmir/public.

April 18, 2018
On Monday, 19 June 2017 at 12:46:16 UTC, Nicholas Wilson wrote:
> On Monday, 19 June 2017 at 08:24:09 UTC, bioinfornatics wrote:
>>  [...]
>
> The library enables you to launch kernels written with the accompanying compiler extensions (the focus of this announcement). It also provides the intrinsics needed to write the kernels.
>> [...]
>
> Yes, with some restrictions: recursion is prohibited, as are classes, exceptions, the keyword 'synchronized', global variables (for now), and probably some others that I'm forgetting.
>>  [...]
>
> There are some examples on the wiki (https://github.com/libmir/dcompute/wiki), although they are likely incomplete and slightly out of date. I will be updating and greatly improving them as development progresses (continuing about halfway through July).
>
> If you have any questions feel free to ask them on https://gitter.im/libmir/public.

I took a look at the dcompute examples and couldn't find any example of how to interact with FPGAs!
Could we have a tutorial on how to build a D program that works with an FPGA?

Thanks,

Best regards
April 18, 2018
On Wednesday, 18 April 2018 at 07:10:12 UTC, bioinfornatics wrote:
> On Monday, 19 June 2017 at 12:46:16 UTC, Nicholas Wilson wrote:
>> On Monday, 19 June 2017 at 08:24:09 UTC, bioinfornatics wrote:
>>>  [...]
>>
>> The library enables you to launch kernels written with the accompanying compiler extensions (the focus of this announcement). It also provides the intrinsics needed to write the kernels.
>>> [...]
>>
>> Yes, with some restrictions: recursion is prohibited, as are classes, exceptions, the keyword 'synchronized', global variables (for now), and probably some others that I'm forgetting.
>>>  [...]
>>
>> There are some examples on the wiki (https://github.com/libmir/dcompute/wiki), although they are likely incomplete and slightly out of date. I will be updating and greatly improving them as development progresses (continuing about halfway through July).
>>
>> If you have any questions feel free to ask them on https://gitter.im/libmir/public.
>
> I took a look at the dcompute examples and couldn't find any example of how to interact with FPGAs!
> Could we have a tutorial on how to build a D program that works with an FPGA?
>
> Thanks,
>
> Best regards

From what I understand, it should "just work" if you have an FPGA OpenCL runtime installed.
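
If you want to check beforehand whether such a runtime is visible on your machine, listing the OpenCL platforms is enough. Here is a rough, self-contained sketch; it isn't dcompute-specific, just the standard OpenCL C API declared by hand (link against the system OpenCL library, e.g. -L-lOpenCL; the FPGA vendor's SDK should show up as one of the platforms):

import std.stdio;

extern (C) nothrow @nogc
{
    alias cl_int = int;
    alias cl_uint = uint;
    alias cl_platform_id = void*;
    enum uint CL_PLATFORM_NAME = 0x0902;

    cl_int clGetPlatformIDs(cl_uint num_entries, cl_platform_id* platforms,
                            cl_uint* num_platforms);
    cl_int clGetPlatformInfo(cl_platform_id platform, cl_uint param_name,
                             size_t value_size, void* value, size_t* value_size_ret);
}

void main()
{
    cl_uint count;
    clGetPlatformIDs(0, null, &count);            // how many platforms are installed?
    auto platforms = new cl_platform_id[count];
    clGetPlatformIDs(count, platforms.ptr, null);

    foreach (p; platforms)
    {
        size_t len;
        clGetPlatformInfo(p, CL_PLATFORM_NAME, 0, null, &len);
        auto name = new char[len];
        clGetPlatformInfo(p, CL_PLATFORM_NAME, len, name.ptr, null);
        writeln(name[0 .. len - 1]);              // e.g. an Intel/Altera or Xilinx platform
    }
}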

I'd love to test that but I lack both time and an FPGA to do it. I'll be improving dcompute significantly once I graduate and have the time to do so.