
digitalmars.D - Double Checked Locking

reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
I was looking through Jonathan Davis's pull request to remove static
constructors from std.datetime, and I realized that I don't know
whether Double Checked Locking is legal under D's memory model, and
what the requirements for it to work would be.
(if you're not familiar with the term, check out
http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful
but problematic programming pattern that can cause subtle concurrency
bugs)
It seems like it should be legal as long as the variable tested and
initialized is flagged as shared so that the compiler enforces proper
fences, but is this actually true?
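For concreteness, a minimal sketch of the pattern being asked about, in D; the identifiers (Cache, getCache) are made up, and whether flagging the variable as shared really buys the needed fences is exactly the open question here:

    class Cache {}

    shared Cache _cache;

    Cache getCache()
    {
        if (_cache is null)                // first check, no lock taken
        {
            synchronized                   // statement-level mutex
            {
                if (_cache is null)        // second check, under the lock
                    _cache = new shared(Cache);
            }
        }
        return cast(Cache) _cache;         // cast away shared for local use
    }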
Dec 16 2011
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-12-17 23:10:19 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 On 12/17/11 5:03 PM, Jonathan M Davis wrote:
 Well, you learn something new every day I guess. I'd never even heard of
 double-checked locking before this. I came up with it on my own in an attempt
 to reduce how much the mutex was used. Is the problem with it that the write
 isn't actually atomic? Wikipedia makes it sound like the problem might be that
 the object might be partially initialized but not fully initialized, which I
 wouldn't have thought possible, since I would have thought that the object
 would be fully initialized and _then_ the reference would be assigned to it.
 And it's my understanding that a pointer assignment like that would be atomic.
 Or is there more going on than that, making it so that the assignment itself
 really isn't atomic?

There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG

Shouldn't a properly implemented double-checked locking pattern be part of the standard library? This way people will have a better chance of not screwing up. I think the pattern is common enough to warrant it.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Dec 17 2011
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-12-18 04:35:08 +0000, Jonathan M Davis <jmdavisProg gmx.com> said:

 On Saturday, December 17, 2011 22:16:38 Michel Fortin wrote:
 Shouldn't a properly implemented double-checked locking pattern be part
 of the standard library? This way people will have a better chance of
 not screwing up. I think the pattern is common enough to warrant it.

Well, from the sounds of it, the basic double-checked locking pattern would work just fine with a shared variable if shared were fully implemented, but since it's not, it doesn't work right now. So, I don't know that we need to do anything other than finish implementing shared.

I meant something higher level so you don't even have to think about double-checked locking. Something like this:

    AssignOnce!(shared MyClass) c;
    c = { new shared MyClass }; // the delegate literal is called only the first time
    c.blahblah(); // c is guaranteed to be initialized at this point

...or something similar.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
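A rough sketch of what such a helper could look like, written against core.atomic explicitly rather than relying on shared's not-yet-implemented guarantees; AssignOnce and its get() method are hypothetical names, not an existing Phobos type, and the core.atomic names used are those in current druntime:

    import core.atomic : atomicLoad, atomicStore, MemoryOrder;

    struct AssignOnce(T)   // intended for class types
    {
        private shared T _value;

        // Calls make() at most once; later calls return the already-published value.
        shared(T) get(scope shared(T) delegate() make)
        {
            auto v = atomicLoad!(MemoryOrder.acq)(_value);
            if (v is null)
            {
                synchronized   // note: one global mutex guards all instances in this sketch
                {
                    v = atomicLoad!(MemoryOrder.acq)(_value);
                    if (v is null)
                    {
                        v = make();
                        // Publish only after construction has completed.
                        atomicStore!(MemoryOrder.rel)(_value, v);
                    }
                }
            }
            return v;
        }
    }

Usage would then look roughly like the example above: declare AssignOnce!MyClass c; and call c.get({ return new shared MyClass; }) wherever the instance is needed.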
Dec 18 2011
parent reply "Manfred_Nowak" <svv1999 hotmail.com> writes:
Regan Heath wrote:
 after reading Andrei's link I'm not so sure..

If Andrei's conclusions are correct, then they are correct even for
- the compilers under which an OS is compiled and
- the hardware on which the OS runs.

I.e. if there is a "you lose" for the application programmer, then there is a "you lose" for the OS programmer and the engineer of the hardware too.

Does anyone except Andrei see what this means?

-manfred
Dec 19 2011
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Sean Kelly wrote:

 to ensure a correct result.

According to Andrei's paper, no assurance of a correct result is possible. If optimizations reduce runtime, then not only will MTBF be reduced, but some problems might turn out to be incomputable on that "optimized machine". Nobody will know in advance which problems those are.

Of course: this holds only if Andrei's conclusions in his paper are correct.

-manfred
Jan 04 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/4/12 6:55 PM, Manfred Nowak wrote:
 Sean Kelly wrote:

 to ensure a correct result.

According to Andrei's paper, no assurance of a correct result is possible. If optimizations reduce runtime, then not only will MTBF be reduced, but some problems might turn out to be incomputable on that "optimized machine". Nobody will know in advance which problems those are.

Of course: this holds only if Andrei's conclusions in his paper are correct.

-manfred

The conclusions only apply to portable C++2003 code.

Andrei
Jan 04 2012
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Andrei Alexandrescu wrote:

 only apply to portable C++2003 code

... and all
1) products derived from such code,
2) portable code in general, if similar argument chains can be built up.

-manfred
Jan 04 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/4/12 7:41 PM, Manfred Nowak wrote:
 Andrei Alexandrescu wrote:

 only apply to portable C++2003 code

.... and all 1) products derived from such code,

Yes.
 2) portable code in general, if similar argument chains can be
     built up.

No, because a language's specification can be refined to specify ordering in the presence of multiple threads.

Andrei
Jan 04 2012
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Andrei Alexandrescu wrote:

 specification can be refined

1) I do not see such refinements for D.

2) If there is _no_ requirement for such refinements to be used "manually": which theory enables the compiler to detect the prerequisites for the refinements automatically?

3) If there _is_ a requirement for such refinements to be used "manually", then there is no such theory and your starting "No" turns into a "Yes, of course, even if the specification _is_ refined, because humans might fail".

-manfred
Jan 05 2012
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/5/12 4:16 AM, Manfred Nowak wrote:
 Andrei Alexandrescu wrote:

 specification can be refined

1) I do not see such refinements for D.

The shared qualifier is meant to guarantee sequential consistency. This is not yet implemented.
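A small illustration of what that guarantee would mean once implemented (hypothetical behaviour, invented identifiers): with sequentially consistent shared, a reader that sees the flag must also see the data written before it.

    shared int  payload;
    shared bool ready;

    void writer()
    {
        payload = 42;   // with full shared semantics: ordered before the flag store
        ready   = true;
    }

    void reader()
    {
        if (ready)                  // with full shared semantics: ordered before the data load
            assert(payload == 42);  // must hold under sequential consistency
    }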
 2) If there is _no_ requirement for such refinements to be used
 "manually": which theory enables the compiler to detect the
 prerequisites for the refinements automatically?

If I understand this correctly, you're asking whether races can be eliminated during compilation. Yes, it is possible, see a line of research started by Boyapati: http://eecs.umich.edu/~bchandra/publications/oopsla01.pdf. D is also designed to eliminate low-level races (in safe programs) during compilation because there is no undue aliasing. High-level races are not eliminated.
 3) If there _is_ a requirement for such refinements to be used
 "manually", then there is no such theory and your starting "No" turns
 into a "Yes, of course, even if the specification _is_ refined, because
 humans might fail".

I don't understand what this is trying to convey.

Andrei
Jan 05 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, December 17, 2011 22:16:38 Michel Fortin wrote:
 On 2011-12-17 23:10:19 +0000, Andrei Alexandrescu
 
 <SeeWebsiteForEmail erdani.org> said:
 On 12/17/11 5:03 PM, Jonathan M Davis wrote:
 Well, you learn something new every day I guess. I'd never even heard
 of
 double-checked locking before this. I came up with it on my own in an
 attempt to reduce how much the mutex was used. Is the problem with it
 that the write isn't actually atomic? Wikipedia makes it sound like
 the problem might be that the object might be partially initialized
 but not fully initialized, which I wouldn't have thought possible,
 since I would have thought that the object would be fully initialized
 and _then_ the reference would be assigned to it. And it's my
 understanding that a pointer assignment like that would be atomic. Or
 is there more going on than that, making it so that the assignment
 itself really isn't atomic?

There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG

Shouldn't a properly implemented double-checked locking pattern be part of the standard library? This way people will have a better chance of not screwing up. I think the pattern is common enough to warrant it.

Well, from the sounds of it, the basic double-checked locking pattern would work just fine with a shared variable if shared were fully implemented, but since it's not, it doesn't work right now. So, I don't know that we need to do anything other than finish implementing shared.

- Jonathan M Davis
Dec 17 2011
prev sibling next sibling parent "Regan Heath" <regan netmail.co.nz> writes:
On Sun, 18 Dec 2011 11:55:10 -0000, Michel Fortin  
<michel.fortin michelf.com> wrote:

 On 2011-12-18 04:35:08 +0000, Jonathan M Davis <jmdavisProg gmx.com>  
 said:

 On Saturday, December 17, 2011 22:16:38 Michel Fortin wrote:
 Shouldn't a properly implemented double-checked locking pattern be part
 of the standard library? This way people will have a better chance of
 not screwing up. I think the pattern is common enough to warrant it.

 Well, from the sounds of it, the basic double-checked locking pattern
 would work just fine with a shared variable if shared were fully
 implemented, but since it's not, it doesn't work right now. So, I don't
 know that we need to do anything other than finish implementing shared.

I meant something higher level so you don't even have to think about double-checked locking. Something like this:

    AssignOnce!(shared MyClass) c;
    c = { new shared MyClass }; // the delegate literal is called only the first time
    c.blahblah(); // c is guaranteed to be initialized at this point

...or something similar.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/

Something like.. http://linux.die.net/man/3/pthread_once

I wrote a Windows version of that once (pun intended), and I *think* it works .. after reading Andrei's link I'm not so sure..

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
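For comparison, a pthread_once-style helper can be sketched in D on top of core.atomic's compare-and-swap; executeOnce and the state encoding are invented for illustration:

    import core.atomic : atomicLoad, atomicStore, cas, MemoryOrder;

    shared int onceState;   // 0 = not started, 1 = in progress, 2 = done

    void executeOnce(void function() fn)
    {
        // Fast path: initialization already completed.
        if (atomicLoad!(MemoryOrder.acq)(onceState) == 2)
            return;

        if (cas(&onceState, 0, 1))
        {
            fn();                                        // we won the race: run the initializer
            atomicStore!(MemoryOrder.rel)(onceState, 2); // publish completion
        }
        else
        {
            // Another thread is initializing; a busy-wait keeps the sketch short.
            while (atomicLoad!(MemoryOrder.acq)(onceState) != 2) {}
        }
    }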
Dec 19 2011
prev sibling next sibling parent deadalnix <deadalnix gmail.com> writes:
On 17/12/2011 08:47, Andrew Wiley wrote:
 I was looking through Jonathan Davis's pull request to remove static
 constructors from std.datetime, and I realized that I don't know
 whether Double Checked Locking is legal under D's memory model, and
 what the requirements for it to work would be.
 (if you're not familiar with the term, check out
 http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful
 but problematic programming pattern that can cause subtle concurrency
 bugs)
 It seems like it should be legal as long as the variable tested and
 initialized is flagged as shared so that the compiler enforces proper
 fences, but is this actually true?

Well, according to the spec, it should work with the guarantees given by shared. But in real life, the compiler is unable to ensure that. This is a compiler issue.
Dec 27 2011
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Dec 19, 2011, at 7:18 AM, Manfred_Nowak wrote:

 Regan Heath wrote:
 after reading Andrei's link I'm not so sure..

 If Andrei's conclusions are correct, then they are correct even for
 - the compilers under which an OS is compiled and
 - the hardware on which the OS runs.

 I.e. if there is a "you lose" for the application programmer, then there is
 a "you lose" for the OS programmer and the engineer of the hardware too.

 Does anyone except Andrei see what this means?

Not sure I understand the question, but if you look at the source of the Linux kernel, for example, it uses explicit memory barriers and other tricks to ensure a correct result.
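Translated into D, that explicit-barrier style would look roughly like the sketch below; atomicFence is a full barrier standing in for the kernel's finer-grained smp_rmb/smp_wmb, and the identifiers are invented:

    import core.atomic : atomicFence;

    __gshared Object cached;   // deliberately not shared: ordering is done by hand

    Object getCached()
    {
        auto tmp = cached;
        atomicFence();                 // read barrier: pointer load before any later loads
        if (tmp is null)
        {
            synchronized               // statement-level mutex
            {
                tmp = cached;
                if (tmp is null)
                {
                    auto obj = new Object;
                    atomicFence();     // write barrier: finish construction before publishing
                    cached = obj;
                    tmp = obj;
                }
            }
        }
        return tmp;
    }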
Jan 04 2012
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
This isn't strictly true because it would make implementing a mutex in C impossible. The more detailed answer is that C and C++ (pre-C++0x) don't specify a way to do this, so it's compiler- and platform-dependent.

Sent from my iPhone

On Jan 4, 2012, at 4:55 PM, Manfred Nowak <svv1999 hotmail.com> wrote:

 Sean Kelly wrote:

 to ensure a correct result.

According to Andrei's paper, no assurance of a correct result is possible. If optimizations reduce runtime, then not only will MTBF be reduced, but some problems might turn out to be incomputable on that "optimized machine". Nobody will know in advance which problems those are.

Of course: this holds only if Andrei's conclusions in his paper are correct.

-manfred

Jan 04 2012