July 07, 2007 GPUs and Array Operations
Lately I've been learning about GPU's, shaders, and general purpose GPU computation. I'm still just getting introduced to it and haven't gotten very deep yet, so there's probably a few of you out there who know a lot more than me about this subject. It's probably been discussed before in this news group, but I've been thinking about how important GPU's will be in the coming years.

For those who may not know, GPU performance has been improving at a rate faster than Moore's Law. Current high-end GPU's have many times more floating point performance than high-end CPU's. The latest GPU's from NVidia and AMD/ATI brag a massive 500 and 400 single-precision gigaflops respectively. Traditionally GPU's were used for graphics only. But recently GPU's have been used for general purpose computation as well. The newer GPU's are including general purpose computation in design considerations.

The problem with GPU programming is that computation is radically different from a conventional CPU. Because of the way the hardware is designed, there are more restrictions for GPU programs. For example, there are no function pointers, no virtual methods, and hence no OOP. There is no branching. Because of this, conditional statements are highly inefficient and should be avoided. Because of these constraints, special purpose programming languages are required to program natively on a GPU. These special purpose programming languages are called shading languages, and include Cg, HLSL and GLSL.

GPU computation is performed on data streams in parallel, where operations on each item in the stream are independent. GPU's work most effectively on large arrays of data. The proposed "array operations" feature in D has been discussed a lot. It is even mentioned in the "future directions" page on the D web site. However, I don't remember the details of the array operations feature. What are the design goals of this feature? To leverage multi-cores and SSE? Are GPU's also a consideration?

There are already C++ libraries available that provide general purpose computation using GPU's without shader programming. When it comes time to implement array operations in D, I feel that GPU's should be the primary focus. (However, I'm not saying that multicore CPU's or SSE should be ignored.) Design goals should be performance, simplicity, and flexibility.

Thoughts?

-Craig
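For context, the kind of element-wise, whole-array syntax being discussed might look roughly like the sketch below. Treat it as illustrative only: at the time of this thread the feature was still a proposal, and the exact syntax and semantics were open questions.

```d
// Rough sketch of the proposed element-wise "array operations".
// Because every element is computed independently, a compiler or
// runtime could in principle map such expressions to SSE, multiple
// cores, or a GPU stream kernel.
void saxpy(float[] y, float alpha, float[] x)
{
    y[] += x[] * alpha;   // whole-slice expression, no explicit loop
}

void main()
{
    auto x = new float[1024];
    auto y = new float[1024];
    x[] = 2.0f;           // broadcast a scalar across the whole array
    y[] = 1.0f;
    saxpy(y, 0.5f, x);    // y[i] = 1.0 + 0.5 * 2.0 = 2.0 for all i
}
```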
July 08, 2007 Re: GPUs and Array Operations
Posted in reply to Craig Black

GPGPU isn't generally something you want the compiler to handle. Well, at least it should not be enabled by default. The first thing a GPU needs to handle is the graphics, and no program should be hogging up GPU resources for anything else by default. I think it's more worthwhile to let the compiler take as much advantage as possible of all available instruction sets the CPUs have to offer. But then again, I don't know exactly to what degree this already applies to the D compiler.

Of course everybody is free and encouraged to start a nice project to simplify the implementation of GPGPU in D :)

"Craig Black" <craigblack2@cox.net> wrote in message news:f6p829$1l06$1@digitalmars.com...
> Lately I've been learning about GPU's, shaders, and general purpose GPU computation. I'm still just getting introduced to it and haven't gotten very deep yet, so there's probably a few of you out there who know a lot more than me about this subject. It's probably been discussed before in this news group, but I've been thinking about how important GPU's will be in the coming years.
>
> For those who may not know, GPU performance has been improving at a rate faster than Moore's Law. Current high-end GPU's have many times more floating point performance than high-end CPU's. The latest GPU's from NVidia and AMD/ATI brag a massive 500 and 400 single-precision gigaflops respectively. Traditionally GPU's were used for graphics only. But recently GPU's have been used for general purpose computation as well. The newer GPU's are including general purpose computation in design considerations.
>
> The problem with GPU programming is that computation is radically different from a conventional CPU. Because of the way the hardware is designed, there are more restrictions for GPU programs. For example, there are no function pointers, no virtual methods, and hence no OOP. There is no branching. Because of this, conditional statements are highly inefficient and should be avoided. Because of these constraints, special purpose programming languages are required to program natively on a GPU. These special purpose programming languages are called shading languages, and include Cg, HLSL and GLSL.
>
> GPU computation is performed on data streams in parallel, where operations on each item in the stream are independent. GPU's work most effectively on large arrays of data. The proposed "array operations" feature in D has been discussed a lot. It is even mentioned in the "future directions" page on the D web site. However, I don't remember the details of the array operations feature. What are the design goals of this feature? To leverage multi-cores and SSE? Are GPU's also a consideration?
>
> There are already C++ libraries available that provide general purpose computation using GPU's without shader programming. When it comes time to implement array operations in D, I feel that GPU's should be the primary focus. (However, I'm not saying that multicore CPU's or SSE should be ignored.) Design goals should be performance, simplicity, and flexibility.
>
> Thoughts?
>
> -Craig
July 08, 2007 Re: GPUs and Array Operations
Posted in reply to Craig Black

Craig Black wrote:
> Lately I've been learning about GPU's, shaders, and general purpose GPU computation. I'm still just getting introduced to it and haven't gotten very deep yet, so there's probably a few of you out there who know a lot more than me about this subject. It's probably been discussed before in this news group, but I've been thinking about how important GPU's will be in the coming years.
>
> For those who may not know, GPU performance has been improving at a rate faster than Moore's Law. Current high-end GPU's have many times more floating point performance than high-end CPU's. The latest GPU's from NVidia and AMD/ATI brag a massive 500 and 400 single-precision gigaflops respectively. Traditionally GPU's were used for graphics only. But recently GPU's have been used for general purpose computation as well. The newer GPU's are including general purpose computation in design considerations.
>
> The problem with GPU programming is that computation is radically different from a conventional CPU. Because of the way the hardware is designed, there are more restrictions for GPU programs. For example, there are no function pointers, no virtual methods, and hence no OOP. There is no branching. Because of this, conditional statements are highly inefficient and should be avoided. Because of these constraints, special purpose programming languages are required to program natively on a GPU. These special purpose programming languages are called shading languages, and include Cg, HLSL and GLSL.
>
> GPU computation is performed on data streams in parallel, where operations on each item in the stream are independent. GPU's work most effectively on large arrays of data. The proposed "array operations" feature in D has been discussed a lot. It is even mentioned in the "future directions" page on the D web site. However, I don't remember the details of the array operations feature. What are the design goals of this feature? To leverage multi-cores and SSE? Are GPU's also a consideration?
>
> There are already C++ libraries available that provide general purpose computation using GPU's without shader programming. When it comes time to implement array operations in D, I feel that GPU's should be the primary focus. (However, I'm not saying that multicore CPU's or SSE should be ignored.) Design goals should be performance, simplicity, and flexibility.
>
> Thoughts?
>
> -Craig
While you can do much of the CPU work with the GPU, I think in its present state it requires a very custom program. For instance, there are bandwidth issues, which means you don't get the results back till the next frame. Therefore your program has to be designed to work in a particular way.

Secondly, the GPU is being used for other things, so the time at which you use these operations is critical; it can't just happen at any stage, otherwise you blow away the current state of the GPU.

Thirdly, you can only run a couple of these huge processing operations on the GPU at once, or come up with a smart way to put them all into the same operation. Therefore the usability of this is limited.

Anyway, I think it seems more like an API sort of thing, so that the user has control over when the GPU is used.

That's my present understanding. Maybe things will change when AMD combines the GPU into the CPU.
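To make the "API sort of thing" concrete, here is a purely hypothetical sketch of what an explicit, user-controlled GPU interface might look like in D. None of the names (GpuKernel, upload, dispatch, readback) correspond to a real library; the point is only the structure that frame latency and shared GPU state force on the program.

```d
// Hypothetical interface: the program, not the compiler, decides when
// data goes to the GPU, when the kernel runs, and when results come back.
interface GpuKernel
{
    void upload(float[] input);   // copy input data into GPU memory
    void dispatch();              // queue the computation for this frame
    bool ready();                 // has a previous result arrived yet?
    float[] readback();           // read the finished results
}

void frameUpdate(GpuKernel k, float[] data)
{
    // Results typically aren't available until a later frame, so the
    // application is structured around asynchronous readback.
    if (k.ready())
    {
        float[] result = k.readback();
        // ... use the result from the previously dispatched batch ...
    }
    k.upload(data);
    k.dispatch();
}
```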
July 09, 2007 Re: GPUs and Array Operations
Posted in reply to Saaa

You are quite right. The compiler should never assume that the programmer wants GPU computation. I was thinking of a feature that would do GPU computation optionally. I don't know exactly what the syntax would be, though. It probably will end up being a library and not something supported directly by the compiler.

-Craig

"Saaa" <empty@needmail.com> wrote in message news:f6q4ac$h39$1@digitalmars.com...
> GPGPU isn't generally something you want the compiler to handle. Well, at least it should not be enabled by default. The first thing a GPU needs to handle is the graphics, and no program should be hogging up GPU resources for anything else by default. I think it's more worthwhile to let the compiler take as much advantage as possible of all available instruction sets the CPUs have to offer. But then again, I don't know exactly to what degree this already applies to the D compiler.
>
> Of course everybody is free and encouraged to start a nice project to simplify the implementation of GPGPU in D :)
>
> "Craig Black" <craigblack2@cox.net> wrote in message news:f6p829$1l06$1@digitalmars.com...
>> Lately I've been learning about GPU's, shaders, and general purpose GPU computation. I'm still just getting introduced to it and haven't gotten very deep yet, so there's probably a few of you out there who know a lot more than me about this subject. It's probably been discussed before in this news group, but I've been thinking about how important GPU's will be in the coming years.
>>
>> For those who may not know, GPU performance has been improving at a rate faster than Moore's Law. Current high-end GPU's have many times more floating point performance than high-end CPU's. The latest GPU's from NVidia and AMD/ATI brag a massive 500 and 400 single-precision gigaflops respectively. Traditionally GPU's were used for graphics only. But recently GPU's have been used for general purpose computation as well. The newer GPU's are including general purpose computation in design considerations.
>>
>> The problem with GPU programming is that computation is radically different from a conventional CPU. Because of the way the hardware is designed, there are more restrictions for GPU programs. For example, there are no function pointers, no virtual methods, and hence no OOP. There is no branching. Because of this, conditional statements are highly inefficient and should be avoided. Because of these constraints, special purpose programming languages are required to program natively on a GPU. These special purpose programming languages are called shading languages, and include Cg, HLSL and GLSL.
>>
>> GPU computation is performed on data streams in parallel, where operations on each item in the stream are independent. GPU's work most effectively on large arrays of data. The proposed "array operations" feature in D has been discussed a lot. It is even mentioned in the "future directions" page on the D web site. However, I don't remember the details of the array operations feature. What are the design goals of this feature? To leverage multi-cores and SSE? Are GPU's also a consideration?
>>
>> There are already C++ libraries available that provide general purpose computation using GPU's without shader programming. When it comes time to implement array operations in D, I feel that GPU's should be the primary focus. (However, I'm not saying that multicore CPU's or SSE should be ignored.) Design goals should be performance, simplicity, and flexibility.
>>
>> Thoughts?
>>
>> -Craig
July 09, 2007 Re: GPUs and Array Operations
Posted in reply to janderson

"janderson" <askme@me.com> wrote in message news:f6rkr3$nbm$1@digitalmars.com...
> Craig Black wrote:
>> Lately I've been learning about GPU's, shaders, and general purpose GPU computation. I'm still just getting introduced to it and haven't gotten very deep yet, so there's probably a few of you out there who know a lot more than me about this subject. It's probably been discussed before in this news group, but I've been thinking about how important GPU's will be in the coming years.
>>
>> For those who may not know, GPU performance has been improving at a rate faster than Moore's Law. Current high-end GPU's have many times more floating point performance than high-end CPU's. The latest GPU's from NVidia and AMD/ATI brag a massive 500 and 400 single-precision gigaflops respectively. Traditionally GPU's were used for graphics only. But recently GPU's have been used for general purpose computation as well. The newer GPU's are including general purpose computation in design considerations.
>>
>> The problem with GPU programming is that computation is radically different from a conventional CPU. Because of the way the hardware is designed, there are more restrictions for GPU programs. For example, there are no function pointers, no virtual methods, and hence no OOP. There is no branching. Because of this, conditional statements are highly inefficient and should be avoided. Because of these constraints, special purpose programming languages are required to program natively on a GPU. These special purpose programming languages are called shading languages, and include Cg, HLSL and GLSL.
>>
>> GPU computation is performed on data streams in parallel, where operations on each item in the stream are independent. GPU's work most effectively on large arrays of data. The proposed "array operations" feature in D has been discussed a lot. It is even mentioned in the "future directions" page on the D web site. However, I don't remember the details of the array operations feature. What are the design goals of this feature? To leverage multi-cores and SSE? Are GPU's also a consideration?
>>
>> There are already C++ libraries available that provide general purpose computation using GPU's without shader programming. When it comes time to implement array operations in D, I feel that GPU's should be the primary focus. (However, I'm not saying that multicore CPU's or SSE should be ignored.) Design goals should be performance, simplicity, and flexibility.
>>
>> Thoughts?
>>
>> -Craig
>
> While you can do much of the CPU work with the GPU, I think in its present state it requires a very custom program. For instance, there are bandwidth issues, which means you don't get the results back till the next frame. Therefore your program has to be designed to work in a particular way.
>
> Secondly, the GPU is being used for other things, so the time at which you use these operations is critical; it can't just happen at any stage, otherwise you blow away the current state of the GPU.
>
> Thirdly, you can only run a couple of these huge processing operations on the GPU at once, or come up with a smart way to put them all into the same operation. Therefore the usability of this is limited.
>
> Anyway, I think it seems more like an API sort of thing, so that the user has control over when the GPU is used.
>
> That's my present understanding. Maybe things will change when AMD combines the GPU into the CPU.

You are right, but don't underplay the importance of GPU computation. GPU's are becoming more and more powerful, so they will be able to handle increasingly more general purpose computation. Modern games use GPU's for more than just rendering, and it has become a design goal of many game engines to transfer more of the workload to the GPU. Some domains, such as scientific simulations, don't care about graphics at all and would rather use the GPU for computation exclusively.

At any rate, I think we should keep our eyes peeled for opportunities to leverage this capability. I personally am trying to learn more about it myself.

-Craig
July 10, 2007 Re: GPUs and Array Operations
Posted in reply to Craig Black

Take a look at the vectorization suggestion on the wish list:
http://all-technology.com/eigenpolls/dwishlist/index.php?it=10

This notation lets you specify vector calculations in such a way that the compiler can optimize them and run them on the GPU if that is preferred.
July 10, 2007 Re: GPUs and Array Operations
Posted in reply to Knud Soerensen | Interesting syntax. I like it, but in order to support GPU's there would have to be some way that the programmer specified GPU or CPU computation. Perhaps a new keyword or pragma? I'm fuzzy on the details of how this could actually be done on the GPU. The compiler would have to somehow generate either high-level or low-level shading language routines for each statement. That means each function called would also have to be converted to GPU code. There would obviously be some restrictions as to what kinds of language features could be used when the GPU option is enabled. For example, no function pointers, no heap allocations, etc, etc. I'm sure there would end up being a lot. I suppose special floating point types could automatically be mapped. For example the compiler could automatically convert the static array float[3] to the shading language type float3. -Craig "Knud Soerensen" <4tuu4k002@sneakemail.com> wrote in message news:f6uiqk$2qad$2@digitalmars.com... > Take a look on the vectorization suggestion on the wish list > http://all-technology.com/eigenpolls/dwishlist/index.php?it=10 > this notation lets you specify vector calculation in such > a way that the compiler can optimize them and let them run on the GPU > if that is preferred. | |||
July 10, 2007 Re: GPUs and Array Operations
Posted in reply to Craig Black

Another idea. What if this could be done using the recently added mixin feature? Then you could use a shading language directly rather than trying to convert D code to a shading language. I've never used mixins, so I can't even think of how the syntax would look. Is this a practical idea?

-Craig
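A very rough sketch of how the mixin idea might look: the shader source stays in the D module as an ordinary string, and a compile-time function generates the host-side binding code, which is then pasted in with a string mixin. The GLSL snippet, the declBinding helper, and the generated set_gain function are all invented for this example; compiling and running the shader would still go through the graphics API at run time.

```d
// The shader source is kept as a plain string; at run time it would be
// handed to the graphics API (e.g. via glShaderSource).
const string brightenShader =
    "uniform float gain;
     void main() { gl_FragColor = gain * gl_Color; }";

// CTFE-able function that generates host-side D code for one shader
// parameter. The generated body would wrap the appropriate API call.
string declBinding(string name)
{
    return "void set_" ~ name ~ "(float value) { /* glUniform1f(...) */ }";
}

// The string mixin pastes the generated declaration into this scope at
// compile time, giving the rest of the program a typed setter.
mixin(declBinding("gain"));

void main()
{
    set_gain(1.5f);   // call the generated binding
}
```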
July 10, 2007 Re: GPUs and Array Operations
Posted in reply to Craig Black

Craig Black wrote:
> Another idea. What if this could be done using the recently added mixin feature? Then you could use a shading language directly rather than trying to convert D code to a shading language. I've never used mixins, so I can't even think of how the syntax would look. Is this a practical idea?
>
> -Craig

A solution like Blade* should be possible, especially once D gets some syntactic sugar for macros, but it would probably be somewhat involved. Converting D to a shading language does seem like what you want for more general purpose computation. Nvidia has made a compiler specifically to do this in C, called CUDA.

* http://www.dsource.org/projects/mathextra/browser/trunk/blade
July 10, 2007 Re: GPUs and Array Operations
Posted in reply to Lutger

Lutger wrote:
> Craig Black wrote:
>> Another idea. What if this could be done using the recently added mixin feature? Then you could use a shading language directly rather than trying to convert D code to a shading language. I've never used mixins, so I can't even think of how the syntax would look. Is this a practical idea?
>>
>> -Craig
>
> A solution like Blade* should be possible, especially once D gets some syntactic sugar for macros, but it would probably be somewhat involved. Converting D to a shading language does seem like what you want for more general purpose computation. Nvidia has made a compiler specifically to do this in C, called CUDA.
>
> * http://www.dsource.org/projects/mathextra/browser/trunk/blade

I was going to write a comment about Sh and how it used C++ and metaprogramming to trick shader programs into being compiled along with the rest of your source code, but apparently the RapidMind thing that was mentioned here a while ago is precisely the (commercial) evolution of Sh into more generic uses:

"""
As you are likely aware, Sh started from years of research at the University of Waterloo. In 2004, RapidMind Inc. (Serious Hack Inc. at the time) was formed to commercialize this research, which continued to maintain Sh leading to the currently released version. The people behind Sh felt strongly that the fruits of publicly-sponsored research should be open sourced. Our development has now shifted to be commercially based, and we are currently working on releasing the RapidMind Development Platform, which has roots in Sh but signifies significant leaps beyond it, such as more general-purpose applicability and support for non-GPU processors such as the Cell Broadband Engine.
"""
-- http://libsh.org/index.html

http://www.rapidmind.net/software.php

--bb