
digitalmars.D.learn - std.parallelism and multidimensional arrays

reply "Stefan Frijters" <sfrijters gmail.com> writes:
I have code that does a lot of work on 2D/3D arrays, for which 
I use the 2.066 multidimensional slicing syntax through a fork of 
the Unstandard package [1].
Often the order of operations doesn't matter, so I thought I 
would give the std.parallelism module a try to get some easy 
speedups (I also use MPI, but that comes with some additional overhead).

The way my foreach loops are currently set up, p is a 
size_t[2], the payload v of the array is a double[9], and the 
array is indexed directly with a size_t[2]; all of this works 
fine:

foreach (immutable p, ref v; arr) { double[9] stuff; arr[p] = stuff; }
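
To make that concrete, here is a minimal self-contained sketch of 
the kind of wrapper I mean (this is not the actual Unstandard API; 
the Array2D name, the row-major layout, and the 4x4 size are just 
placeholders for this post):

// Placeholder wrapper, not the Unstandard API: a row-major 2D array of T.
struct Array2D(T)
{
    T[] data;
    size_t nx, ny;

    this(size_t nx, size_t ny)
    {
        this.nx = nx;
        this.ny = ny;
        data = new T[](nx * ny);
    }

    // Index with a size_t[2], as in arr[p].
    ref T opIndex(size_t[2] p)
    {
        return data[p[0] * ny + p[1]];
    }

    // Lets foreach (p, ref v; arr) yield the 2D index and a ref to the element.
    int opApply(scope int delegate(size_t[2], ref T) dg)
    {
        foreach (i; 0 .. nx)
            foreach (j; 0 .. ny)
            {
                size_t[2] p = [i, j];
                if (auto r = dg(p, data[i * ny + j]))
                    return r;
            }
        return 0;
    }
}

void main()
{
    auto arr = Array2D!(double[9])(4, 4);
    foreach (p, ref v; arr)
    {
        double[9] stuff;
        arr[p] = stuff;
    }
}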

If I naively try

foreach (immutable p, ref v; parallel(arr)) { ... }

I first get errors of the type "Error: foreach: cannot make v 
ref". I do not understand where that particular problem comes 
from, but I can possibly live without the ref, so I went for

foreach (immutable p, v; parallel(arr)) { ... }

That gets me "Error: no [] operator overload for type 
(complicated templated type of some wrapper struct I have for 
arr)". I'm guessing it doesn't like that there is no simple 
one-dimensional slicing operation for a multidimensional array?
Should I define an opSlice function that takes the usual two 
size_t arguments for the lower and upper bounds, doesn't require 
a dimension template argument, and somehow maps this to my 
underlying two-dimensional array? Will it need an opIndex 
function that takes only a single size_t as well?
Or is this just taking the simple parallel(...) too far and 
should I try to put something together myself using lower-level 
constructs?
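
By lower-level I mean something along these lines: iterate a flat 
index in parallel and rebuild the 2D index by hand. This is only a 
sketch; nx, ny, the row-major mapping, and the update() wrapper are 
assumptions about my layout, and arr stands in for my wrapped 2D array:

import std.parallelism : parallel;
import std.range : iota;

// Parallelize over a flat index and convert it back to a 2D index.
// Each i maps to a distinct element, so the writes don't overlap.
void update(A)(ref A arr, size_t nx, size_t ny)
{
    foreach (i; parallel(iota(nx * ny)))
    {
        immutable size_t[2] p = [i / ny, i % ny];
        double[9] stuff;
        arr[p] = stuff;   // same body as the serial loop
    }
}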

Any hints would be appreciated!

[1] http://code.dlang.org/packages/unstandard
May 22 2015
parent "Vlad Levenfeld" <vlevenfeld gmail.com> writes:
On Friday, 22 May 2015 at 10:54:36 UTC, Stefan Frijters wrote:
 [...]

 Or is this just taking the simple parallel(...) too far and 
 should I try to put something together myself using lower-level 
 constructs?
I'd define a "flatten range" adaptor that presents an 
n-dimensional range as a 1D range traversing the original array 
indices lexicographically, if that makes sense for your app.
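
Roughly something like this for the 2D case. Just a sketch: 
Flatten2D, flatten2d, and the row-major data/ny layout are made up 
for this post (not anything from Unstandard), and I haven't checked 
std.parallelism's internals, so treat the ref/index behaviour as an 
assumption rather than a guarantee:

import std.parallelism : parallel;

// A 1D random-access view over the flat row-major storage of a 2D
// array. length + opIndex is what lets std.parallelism split the
// work up by index.
struct Flatten2D(T)
{
    T[] data;        // flat row-major storage
    size_t ny;       // number of columns, needed to rebuild [i, j]
    size_t lo, hi;   // current window, for the range primitives

    @property bool empty() const { return lo >= hi; }
    @property size_t length() const { return hi - lo; }
    @property auto save() { return this; }
    @property ref T front() { return data[lo]; }
    @property ref T back() { return data[hi - 1]; }
    void popFront() { ++lo; }
    void popBack() { --hi; }
    ref T opIndex(size_t k) { return data[lo + k]; }

    // Lexicographic 2D index of the k-th element of the view.
    size_t[2] index2d(size_t k) const { return [(lo + k) / ny, (lo + k) % ny]; }
}

auto flatten2d(T)(T[] data, size_t ny)
{
    return Flatten2D!T(data, ny, 0, data.length);
}

Usage, assuming you can get at the flat storage and the column 
count of your wrapper:

auto flat = flatten2d(storage, ny);
foreach (k, ref v; parallel(flat))
{
    immutable p = flat.index2d(k);   // the original [i, j], if you still need it
    double[9] stuff;
    v = stuff;
}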
May 22 2015