
digitalmars.D.learn - A few questions about safe concurrent programming assumptions

"Nicholas Smith" <nmsmith65 gmail.com> writes:
Hi there,

I have a few questions about what is safe to assume when writing 
concurrent code in D with data sharing (as opposed to message 
passing).

After doing a fair amount of reading, I'm still slightly hazy 
about what shared does and doesn't guarantee. Here is the only 
assumption I understand to be valid:

* Reads and writes to shared variables with a size equal to or 
less than the word size of the machine are atomic and are visible 
to all other threads immediately.

Now, in Andrei's D book he also states that sequential 
consistency is guaranteed for operations on shared data. However, 
I understand that this is not currently implemented by any 
compiler, and perhaps never will be, given the ongoing 
discussions about changing the semantics of shared.

So then this code is not safe, assuming the methods are executed 
on different threads:

import std.stdio : writeln;

shared int x = 0;
shared bool complete = false;

void foo()
{
     x = 7; // atomic
     complete = true; // atomic
}

void bar()
{
     while (!complete) {}

     writeln(x); // undefined output (0 or 7)
}

But then I understand the core.atomic module incorporates the 
necessary memory barriers to make this work, so we can replace 
foo() by:

void foo()
{
     x.atomicStore(7);
     complete.atomicStore(true);
}

and achieve the intended behaviour (maybe only one of those two 
modifications was actually needed here?). Or possibly:

void foo()
{
     x = 7; // atomic
     atomicFence(); // from core.atomic again
     complete = true; // atomic
}

Do these modifications achieve the desired result? I know there 
are other ways to go about this. I think one (bad) way is putting 
a mutex around every read/write (a mutex also acts as a memory 
barrier, right?), and I suppose in this case, message passing! 
(But let's pretend the data sharing is more complex)
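
For concreteness, here's the complete program I'm imagining, using 
core.atomic throughout (I'm assuming its default memory ordering is 
sequentially consistent, as the docs suggest, and using core.thread 
just to get a second thread):

```d
import core.atomic;
import core.thread;
import std.stdio;

shared int x = 0;
shared bool complete = false;

void foo()
{
    atomicStore(x, 7);           // store the data first...
    atomicStore(complete, true); // ...then publish the flag
}

void main()
{
    auto t = new Thread(&foo);
    t.start();
    while (!atomicLoad(complete)) {} // spin until foo publishes
    writeln(atomicLoad(x));          // prints 7
    t.join();
}
```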

My other question about shared is: what does the shared qualifier 
mean when applied to a class definition? e.g.

shared class Cat {...}

Does it just force all references to Cat instances to be shared 
by default and make all methods of Cat shared? Andrei uses it in 
his book when talking about lock-free programming with cas, 
however he seems to make the assumption (well, explicitly states) 
that sequential consistency is assured within the methods of such 
a class. I'm quite confused about what it actually means, 
especially since shared apparently does not currently use memory 
barriers.
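
Here's a minimal sketch of what I *think* it means (the atomicOp 
and atomicLoad calls are my own addition, since plain field access 
on shared data carries no guarantees):

```d
import core.atomic;

shared class Cat
{
    private int lives = 9;

    // `shared` on the class makes every field and method shared;
    // inside a method, `this` has type shared(Cat).
    void loseLife()
    {
        atomicOp!"-="(lives, 1); // a plain `lives -= 1;` is not guaranteed safe
    }

    int remaining()
    {
        return atomicLoad(lives);
    }
}

void main()
{
    auto cat = new shared Cat; // instances are shared
    cat.loseLife();
    assert(cat.remaining() == 8);
}
```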
Oct 02 2013
"Nicholas Smith" <nmsmith65 gmail.com> writes:
So I suppose shared is confusing to everyone else, too. :)

I'll just wing it and fill my program with rare but devastating 
bugs ;)
Oct 06 2013
Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, October 03, 2013 08:34:00 Nicholas Smith wrote:
> * Reads and writes to shared variables with a size equal to or
> less than the word size of the machine are atomic and are visible
> to all other threads immediately.
There are _no_ guarantees of atomicity with shared. Yes, on some 
architectures, writing a word size might be atomic, but the language 
guarantees no such thing. If you use shared, then you _must_ use mutexes 
or synchronized blocks to protect access to any shared variables - 
either that or you have to mess around with core.atomic, which is the 
kind of code that is _very_ easy to screw up, so it's generally advised 
not to bother with core.atomic unless you actually _need_ to.

shared really doesn't guarantee anything. It just means that you can 
access that object across threads, which means that you must do all the 
work to make sure that it's protected from being accessed by multiple 
threads at the same time.

TDPL does say some stuff about shared and memory barriers, but that has 
never been implemented, and given the performance costs that it would 
incur, it's highly likely that it will never be implemented. You pretty 
much have to treat shared like you'd treat any variable in a language 
without shared: all of the synchronization is up to you.

- Jonathan M Davis
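
To illustrate the mutex/synchronized approach, here's a rough sketch 
(casting away shared inside the lock is the usual idiom, since the 
mutex now guarantees exclusive access - adjust to taste):

```d
import core.thread;

shared int counter = 0;

void work()
{
    foreach (i; 0 .. 100_000)
    {
        synchronized // anonymous critical section; entry/exit act as barriers
        {
            // Safe to cast away shared here: the lock serializes access.
            (*cast(int*) &counter) += 1;
        }
    }
}

void main()
{
    auto a = new Thread(&work);
    auto b = new Thread(&work);
    a.start(); b.start();
    a.join(); b.join();
    assert(counter == 200_000); // no lost updates
}
```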
Oct 06 2013
"Nicholas Smith" <nmsmith65 gmail.com> writes:
Thanks Jonathan, these are the kinds of warnings I was looking 
for.

> There are _no_ guarantees of atomicity with shared. Yes, on some
> architectures, writing a word size might be atomic, but the language
> guarantees no such thing.

I was looking narrowly at x86, which I *think* such a statement is safe 
to say for. But you're absolutely right that I should probably safeguard 
against the possibility that something could go wrong there.
> either that or you have to mess around with core.atomic, which is the
> kind of code that is _very_ easy to screw up, so it's generally advised
> not to bother with core.atomic unless you actually _need_ to.

It will at least ensure sequential consistency, atomic load/store, and 
atomic operations (via atomicOp), will it not?
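
For reference, here's the sort of thing I mean (a small sketch; I'm 
assuming atomicOp's default ordering is sequentially consistent, per 
the core.atomic docs):

```d
import core.atomic;

void main()
{
    shared int hits = 0;

    atomicOp!"+="(hits, 1); // atomic read-modify-write
    atomicOp!"+="(hits, 4);

    assert(atomicLoad(hits) == 5); // defaults to sequential consistency
}
```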
> shared really doesn't guarantee anything. It just means that you can
> access that object across threads, which means that you must do all
> the work to make sure that it's protected from being accessed by
> multiple threads at the same time.

Fair enough. It will force the compiler to write to memory immediately 
though, rather than keep shared values sitting in registers, correct?
Oct 06 2013
Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, October 07, 2013 07:26:02 Nicholas Smith wrote:
> Thanks Jonathan, these are the kinds of warnings I was looking for.
>
>> There are _no_ guarantees of atomicity with shared. Yes, on some
>> architectures, writing a word size might be atomic, but the language
>> guarantees no such thing.
>
> I was looking narrowly at x86, which I *think* such a statement is
> safe to say for. But you're absolutely right that I should probably
> safeguard against the possibility that something could go wrong there.
>
>> either that or you have to mess around with core.atomic, which is the
>> kind of code that is _very_ easy to screw up, so it's generally
>> advised not to bother with core.atomic unless you actually _need_ to.
>
> It will at least ensure sequential consistency, atomic load/store, and
> atomic operations (via atomicOp), will it not?
You can do atomic operations via core.atomic, but they're easy to use 
incorrectly, and I'm not very familiar with them, so I couldn't tell you 
much more than whatever the docs say.
>> shared really doesn't guarantee anything. It just means that you can
>> access that object across threads, which means that you must do all
>> the work to make sure that it's protected from being accessed by
>> multiple threads at the same time.
>
> Fair enough. It will force the compiler to write to memory immediately
> though, rather than keep shared values sitting in registers, correct?
I don't believe that shared makes any more guarantees in that regard 
than happens with thread-local variables. All shared really does is make 
it so that you can access the variable from multiple threads instead of 
it being thread-local, and make it so that the compiler won't make 
optimizations based on the assumption that only one thread can access 
the variable.

If you use mutexes or synchronized blocks to protect all access of 
shared variables, you shouldn't have to worry about any threading 
issues. If you don't, then you're almost certainly going to run into 
trouble. core.atomic is an alternative, but again, you have to use it 
correctly, and it's easy to screw up.

- Jonathan M Davis
Oct 06 2013
"John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 7 October 2013 at 05:26:10 UTC, Nicholas Smith wrote:
> Thanks Jonathan, these are the kinds of warnings I was looking for.
>
>> There are _no_ guarantees of atomicity with shared. Yes, on some
>> architectures, writing a word size might be atomic, but the language
>> guarantees no such thing.
>
> I was looking narrowly at x86, which I *think* such a statement is
> safe to say for. But you're absolutely right that I should probably
> safeguard against the possibility that something could go wrong there.
>
>> either that or you have to mess around with core.atomic, which is the
>> kind of code that is _very_ easy to screw up, so it's generally
>> advised not to bother with core.atomic unless you actually _need_ to.
>
> It will at least ensure sequential consistency, atomic load/store, and
> atomic operations (via atomicOp), will it not?
It will ensure whatever you tell it to. It's a set of convenience wrappers around some assembly instructions.
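
For example, you can spell out exactly the ordering you want (a sketch; 
MemoryOrder names as in the current core.atomic docs - raw is relaxed, 
acq/rel are acquire/release):

```d
import core.atomic;
import core.thread;

shared bool flag = false;
shared int payload = 0;

void publisher()
{
    atomicStore!(MemoryOrder.raw)(payload, 42); // relaxed: no ordering on its own
    atomicStore!(MemoryOrder.rel)(flag, true);  // release: orders the store above
}

void main()
{
    auto t = new Thread(&publisher);
    t.start();
    while (!atomicLoad!(MemoryOrder.acq)(flag)) {} // acquire: pairs with the release
    assert(atomicLoad!(MemoryOrder.raw)(payload) == 42);
    t.join();
}
```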
Oct 07 2013