digitalmars.D - threads, fibers and GPU kernels

reply Suliman <evermind live.ru> writes:
Modern GPUs have thousands of GPU kernels. That is a far cry 
from CPU kernels, but what interests me is: is there any chance 
that in the future they will be used in the same manner as CPU 
kernels?

If so, is there any reason for fibers to exist? Or would it be 
easier to map one thread to one kernel? On a system with 1k 
kernels/cores I do not see any reason for fibers to exist.

Also, there have been a few topics about D3 here. But what are 
you thinking about for the threading model? I think that within 
the next 10 years even CPUs will have 32-64 cores.
Aug 07 2017
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Monday, 7 August 2017 at 07:38:34 UTC, Suliman wrote:
 Modern GPUs have thousands of GPU kernels. That is a far cry 
 from CPU kernels, but what interests me is: is there any chance 
 that in the future they will be used in the same manner as CPU kernels?
Do you mean threads? Not really: they are more like SIMD lanes, which together are more analogous to a single CPU thread. See John Colvin's 2015/2016 DConf talks.
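
To make that analogy a bit more concrete, here is a rough sketch in plain D (the saxpy example and the names are mine, purely illustrative, not dcompute or any real GPU API). A GPU launches one logical work-item per element but runs them in lockstep groups (warps/wavefronts), much like SIMD lanes; a CPU maps the same work onto a handful of OS threads, roughly one per core, each taking a chunk of indices.

import std.parallelism : parallel;
import std.range : iota;

// GPU-style view: the "kernel" body for a single work-item i.
void saxpyItem(float[] y, const(float)[] x, float a, size_t i)
{
    y[i] = a * x[i] + y[i];
}

// CPU view: a few OS threads, each striding over many indices; the
// straight-line per-element body is what a compiler can further map
// onto SIMD lanes.
void saxpyCpu(float[] y, const(float)[] x, float a)
{
    foreach (i; parallel(iota(y.length)))
        saxpyItem(y, x, a, i);
}

Spawning one OS thread per GPU "kernel" would mean thousands of threads per CPU core; grouping the work the way a warp does is much closer to what the hardware can actually run.
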
 If so, is there any reason for fibers to exist? Or would it be 
 easier to map one thread to one kernel? On a system with 1k 
 kernels/cores I do not see any reason for fibers to exist.
Fibers are for I/O-bound problems, where they avoid the overhead of task switching. They offer no benefit (unless you want stateful generators) for compute-bound problems.
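
Since stateful generators came up, here is a minimal sketch of that use with core.thread.Fiber (the squares example is just illustrative): the fiber keeps its loop state alive between calls without needing a thread of its own.

import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    int current;

    // Generator body: produce a value, then yield control back to the caller.
    auto gen = new Fiber({
        foreach (i; 0 .. 5)
        {
            current = i * i;
            Fiber.yield();
        }
    });

    while (gen.state != Fiber.State.TERM)
    {
        gen.call();               // resume the generator where it left off
        if (gen.state != Fiber.State.TERM)
            writeln(current);     // prints 0, 1, 4, 9, 16
    }
}

Even on a machine with a thousand cores, the point of the fiber here is that the switch is a cheap user-space jump rather than a kernel context switch or a dedicated thread per generator.
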
 Also, there have been a few topics about D3 here. But what are 
 you thinking about for the threading model? I think that within 
 the next 10 years even CPUs will have 32-64 cores.
Aug 07 2017
parent John Colvin <john.loughran.colvin gmail.com> writes:
On Monday, 7 August 2017 at 08:57:35 UTC, Nicholas Wilson wrote:
 On Monday, 7 August 2017 at 07:38:34 UTC, Suliman wrote:
 Modern GPUs have thousands of GPU kernels. That is a far cry 
 from CPU kernels, but what interests me is: is there any chance 
 that in the future they will be used in the same manner as CPU kernels?
Do you mean threads? Not really: they are more like SIMD lanes, which together are more analogous to a single CPU thread. See John Colvin's 2015/2016 DConf talks.
As deadalnix reminded me after my 2016 talk, the wider picture of the GPU is SIMT, not SIMD, but from a computation point of view I find I don't need to conceptually separate the two so much. In my experience, most things that work well on GPU end up working very like SIMD on an OoO CPU when you do them right, even if they don't look like it in the code.
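
For what it's worth, a tiny D illustration of that (purely a sketch, names invented): the shape that runs well on a GPU, contiguous structure-of-arrays data with a branch-free per-element body, is exactly the shape that lets a CPU's vectoriser and out-of-order core keep its SIMD lanes busy.

// SoA layout: each field is contiguous, which suits both GPU memory
// coalescing and CPU SIMD loads.
struct Particles
{
    float[] x;
    float[] vx;
}

void step(ref Particles p, float dt)
{
    // Straight-line, branch-free body over contiguous data: trivially
    // vectorisable on a CPU, and the same shape you would launch
    // per work-item on a GPU.
    foreach (i; 0 .. p.x.length)
        p.x[i] += p.vx[i] * dt;
}
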
Aug 07 2017