
digitalmars.D.learn - "shared" status

reply Luis Panadero Guardeño <luis.panadero gmail.com> writes:
What is the status of "shared" types?

I tried it with gdmd v4.6.3, and I don't get any warning/error when I do 
anything over a shared variable without using atomicOp. Is that normal?

shared ushort ram[ram_size];
....
....
ram[i] = cast(ushort) (bytes[0] | bytes[1] << 8);
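
(For reference, a minimal sketch of what the atomic route looks like with 
core.atomic; the array size 0x10000 and the helper names below are 
illustrative, not from the original code:)

```d
import core.atomic;

shared ushort[0x10000] ram;  // illustrative size; the post uses ram_size

// An explicit atomic store of one word into the shared array.
void writeWord(size_t i, ubyte lo, ubyte hi)
{
    atomicStore(ram[i], cast(ushort) (lo | hi << 8));
}

// A read-modify-write has to go through atomicOp to be safe.
void increment(size_t i)
{
    atomicOp!"+="(ram[i], 1);
}
```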

-- 
I'm afraid that I have a blog: http://zardoz.es 
Apr 14 2012
next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Saturday, 14 April 2012 at 10:48:16 UTC, Luis Panadero 
Guardeño wrote:
 What is the status of "shared" types?

 I tried it with gdmd v4.6.3, and I don't get any warning/error 
 when I do anything over a shared variable without using 
 atomicOp. Is that normal?

 shared ushort ram[ram_size];
 ....
 ....
 ram[i] = cast(ushort) (bytes[0] | bytes[1] << 8);

Shared is at the moment (in my opinion, anyway) not usable. Very little of Phobos is shared-friendly. Most benefits of shared aren't implemented yet. I personally avoid it.
Apr 15 2012
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/16/2012 03:57 AM, Zardoz wrote:

 So, if I need to share an array of 0x10000 elements between 3 or more
 threads, how should I do it?

1) The following program starts four threads to fill different parts of a 
shared array:

import std.stdio;
import std.concurrency;
import core.thread;

void numberFiller(shared(int)[] area, int fillValue)
{
    foreach (ref number; area) {
        number = fillValue;
    }
}

void main()
{
    enum totalNumbers = 0x10;
    auto numbers = new shared(int)[totalNumbers];

    enum totalThreads = 4;
    enum numbersPerThread = totalNumbers / totalThreads;

    foreach (i; 0 .. totalThreads) {
        immutable start = i * numbersPerThread;
        immutable fillValue = i;
        spawn(&numberFiller,
              numbers[start .. start + numbersPerThread],
              cast(int)fillValue);
    }

    thread_joinAll();
    writeln(numbers);
}

The output:

[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]

2) The program above is careful to limit the threads to different parts of 
the array. In other cases lock-based multi-threading can be used. The 
following program allows four threads to append to a single array as they 
get hold of the slice:

import std.stdio;
import std.concurrency;
import core.thread;
import std.random;

class Job
{
    int[] * slice;
    size_t count;

    this(ref int[] slice, size_t count)
    {
        this.slice = &slice;
        this.count = count;
    }
}

void numberAppender(shared(Job) job, int appendValue)
{
    foreach (i; 0 .. job.count) {
        synchronized (job) {
            *job.slice ~= appendValue;
        }
        Thread.sleep(dur!"msecs"(uniform(1, 100)));
    }
}

void main()
{
    enum totalNumbers = 0x10;
    int[] numbers;

    enum totalThreads = 4;
    enum numbersPerThread = totalNumbers / totalThreads;

    auto job = new shared(Job)(numbers, numbersPerThread);

    foreach (i; 0 .. totalThreads) {
        int appendValue = i;
        spawn(&numberAppender, job, appendValue);
    }

    thread_joinAll();
    writeln(numbers);
}

The output should be similar to this:

[0, 1, 3, 2, 1, 1, 2, 0, 3, 1, 3, 0, 2, 3, 0, 2]

(Note: I wish there were 'ref' variables in D. That's why Job.slice above 
had to be a pointer.)
3) Better than the two approaches above may be to use message passing and 
have the threads produce separate results to be either combined later or 
simply used separately:

import std.stdio;
import std.concurrency;

void arrayMaker(Tid owner, int count, int value)
{
    immutable(int)[] result;

    foreach (i; 0 .. count) {
        result ~= value;
    }

    owner.send(result);
}

void main()
{
    enum totalNumbers = 0x10;
    enum totalThreads = 4;
    enum numbersPerThread = totalNumbers / totalThreads;

    foreach (i; 0 .. totalThreads) {
        int value = i;
        spawn(&arrayMaker, thisTid, numbersPerThread, value);
    }

    immutable(int[])[] results;
    foreach (i; 0 .. totalThreads) {
        auto result = receiveOnly!(immutable(int)[])();
        results ~= result;
    }

    writeln(results);
}

The output should be similar to this:

[[0, 0, 0, 0], [1, 1, 1, 1], [3, 3, 3, 3], [2, 2, 2, 2]]

(Note: I could not pass the results as shared int slices so I went back to 
immutable.)

Ali
Apr 16 2012
parent reply Luis <luis.panadero gmail.com> writes:
Thanks! It's very useful.
 
Ali Çehreli wrote:

 synchronized (job) {
 *job.slice ~= appendValue;
 }

I could do lock-based access to shared data.
Apr 17 2012
parent Ali Çehreli <acehreli yahoo.com> writes:
On 04/17/2012 06:05 AM, Luis wrote:
 Thanks! It's very useful.

 Ali Çehreli wrote:

 synchronized (job) {
 *job.slice ~= appendValue;
 }

I could do lock-based access to shared data.

Yes. I've used the same Job object there, but any class object can serve as a lock. (This has been a new concept for me. The "lock part" of the object is called the monitor.)

Ali
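
(A minimal sketch of that idea; the Gate class and the names below are made 
up for illustration, and __gshared is used so the one lock object is visible 
to all threads:)

```d
class Gate {}  // an otherwise empty class; its hidden monitor is the lock

__gshared Gate gate;  // a single instance, visible to every thread
__gshared int total;

void addSafely(int n)
{
    synchronized (gate) {  // enters gate's monitor
        total += n;        // only one thread at a time executes this
    }
}

void main()
{
    gate = new Gate;
    addSafely(42);
}
```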
Apr 17 2012
prev sibling next sibling parent Zardoz <luis.panadero gmail.com> writes:
On Sun, 15 Apr 2012 23:05:55 +0200, Kapps wrote:

 On Saturday, 14 April 2012 at 10:48:16 UTC, Luis Panadero Guardeño
 wrote:
 What is the status of "shared" types?

 I tried it with gdmd v4.6.3, and I don't get any warning/error when I
 do anything over a shared variable without using atomicOp. Is that
 normal?

 shared ushort ram[ram_size];
 ....
 ....
 ram[i] = cast(ushort) (bytes[0] | bytes[1] << 8);

Shared is at the moment (in my opinion, anyway) not usable. Very little of Phobos is shared-friendly. Most benefits of shared aren't implemented yet. I personally avoid it.

So, if I need to share an array of 0x10000 elements between 3 or more threads, how should I do it?
Apr 16 2012
prev sibling parent "Dejan Lekic" <dejan.lekic gmail.com> writes:
On Saturday, 14 April 2012 at 10:48:16 UTC, Luis Panadero 
Guardeño wrote:
 What is the status of "shared" types?

 I tried it with gdmd v4.6.3, and I don't get any warning/error 
 when I do anything over a shared variable without using 
 atomicOp. Is that normal?

 shared ushort ram[ram_size];
 ....
 ....
 ram[i] = cast(ushort) (bytes[0] | bytes[1] << 8);

Shared is crucial for concurrency/parallelism since the switch to thread-local storage as the default. Immutable values are IMPLICITLY SHARED, while for mutable data you have to use the "shared" keyword explicitly. This basically means that SHARED data is used everywhere in D applications nowadays.
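
(A small sketch of the thread-local-by-default behavior described above; 
the variable names are illustrative, and the shared accesses go through 
core.atomic to keep them explicit:)

```d
import std.stdio;
import core.atomic;
import core.thread;

int tlsVar = 1;           // thread-local by default: each thread gets its own copy
shared int sharedVar = 1; // a single instance, visible to every thread

void main()
{
    auto t = new Thread({
        tlsVar = 42;                 // changes only the new thread's copy
        atomicStore(sharedVar, 42);  // changes the one shared instance
    });
    t.start();
    t.join();

    writeln(tlsVar);                 // prints 1: main thread's copy untouched
    writeln(atomicLoad(sharedVar));  // prints 42
}
```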
Apr 17 2012