|
|
However, using the returned pointer would not be thread-safe unless it's behind the same mutex again.
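To put that in code, roughly (doWork() and gSingletonMutex are made-up names, just to show where the lock has to go; the real accessor and mutex would be whatever your code defines):

#include <pthread.h>

// Assumed to exist elsewhere: the mutex-protected accessor and the mutex
// it locks internally. The names here are placeholders.
extern pthread_mutex_t gSingletonMutex;
class Singleton {
public:
    static Singleton* instance();
    void doWork();
};

void caller()
{
    Singleton* p = Singleton::instance();    // creation may be serialized inside...

    pthread_mutex_lock(&gSingletonMutex);    // ...but any later use of the object
    p->doWork();                             // still needs the same mutex if it
    pthread_mutex_unlock(&gSingletonMutex);  // touches shared state
}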
|
|
Anyway, are you sure? |
Can you trust volatile? |
Is the code I posted in the first post any better? |
Other than the fact that there's nothing preventing any other function from writing to the pointer.
This is a general property of C++, and nothing can be done about it.
|
|
Very experienced multithreaded programmers know that even the Double-Checked Locking pattern, although correct on paper, is not always correct in practice. In certain symmetric multiprocessor environments (the ones featuring the so-called relaxed memory model), the writes are committed to the main memory in bursts, rather than one by one. The bursts occur in increasing order of addresses, not in chronological order. Due to this rearranging of writes, the memory as seen by one processor at a time might look as if the operations are not performed in the correct order by another processor. Concretely, the assignment to pInstance_ performed by a processor might occur before the Singleton object has been fully initialized! Thus, sadly, the Double-Checked Locking pattern is known to be defective for such systems. In conclusion, you should check your compiler documentation before implementing the Double-Checked Locking pattern. (This makes it the Triple-Checked Locking pattern.) Usually the platform offers alternative, nonportable concurrency-solving primitives, such as memory barriers, which ensure ordered access to memory. At least, put a volatile qualifier next to pInstance_. A reasonable compiler should generate correct, nonspeculative code around volatile objects. |
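For anyone who hasn't run into it, the pattern that passage is describing looks more or less like this (pInstance_ is the book's name; the pthread plumbing is just my sketch):

#include <pthread.h>

class Singleton {
public:
    static Singleton* instance();
private:
    Singleton() {}
    static Singleton* pInstance_;
    static pthread_mutex_t mutex_;
};

Singleton* Singleton::pInstance_ = 0;
pthread_mutex_t Singleton::mutex_ = PTHREAD_MUTEX_INITIALIZER;

Singleton* Singleton::instance()
{
    if (pInstance_ == 0) {                  // first check: no lock taken
        pthread_mutex_lock(&mutex_);
        if (pInstance_ == 0)                // second check: under the lock
            pInstance_ = new Singleton;     // this write can become visible
                                            // before the object is fully built
        pthread_mutex_unlock(&mutex_);
    }
    return pInstance_;
}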
Oh, that. Assign to a temporary before assigning to the global. |
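In other words, something like this inside instance() (same class as the sketch above; just my guess at what you mean):

Singleton* Singleton::instance()
{
    if (pInstance_ == 0) {
        pthread_mutex_lock(&mutex_);
        if (pInstance_ == 0) {
            Singleton* temp = new Singleton;   // construct completely into a temporary...
            pInstance_ = temp;                 // ...then publish the pointer afterwards
        }
        pthread_mutex_unlock(&mutex_);
    }
    return pInstance_;
}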
Let Scott Meyers answer this one for me:
As a rule, programmers don’t like to be pushed around by their compilers. Perhaps you are such a programmer. If so, you may be tempted to try to outsmart your compiler by adjusting your source code so that pInstance remains unchanged until after Singleton’s construction is complete. For example, you might try inserting use of a temporary variable [...]. In essence, you’ve just fired the opening salvo in a war of optimization. Your compiler wants to optimize. You don’t want it to, at least not here. But this is not a battle you want to get into. Your foe is wily and sophisticated, imbued with stratagems dreamed up over decades by people who do nothing but think about this kind of thing all day long, day after day, year after year. Unless you write optimizing compilers yourself, they are way ahead of you. In this case, for example, it would be a simple matter for the compiler to apply dependence analysis to determine that temp is an unnecessary variable, hence to eliminate it, thus treating your carefully crafted “unoptimizable” code as if it had been written in the traditional DCLP manner. Game over. You lose. If you reach for bigger ammo and try moving temp to a larger scope (say by making it file static), the compiler can still perform the same analysis and come to the same conclusion. Scope, schmope. Game over. You lose. So you call for backup. You declare temp extern and define it in a separate translation unit, thus preventing your compiler from seeing what you are doing. Alas for you, some compilers have the optimizing equivalent of night-vision goggles: they perform interprocedural analysis, discover your ruse with temp, and again they optimize it out of existence. Remember, these are optimizing compilers. They’re supposed to track down unnecessary code and eliminate it. Game over. You lose. So you try to disable inlining by defining a helper function in a different file, thus forcing the compiler to assume that the constructor might throw an exception and therefore delay the assignment to pInstance. Nice try, but some build environments perform link-time inlining followed by more code optimizations [5, 11, 4]. GAME OVER. YOU LOSE. Nothing you do can alter the fundamental problem: you need to be able to specify a constraint on instruction ordering, and your language gives you no way to do it.
|
|
If the problem is the writes not being committed in time, I don't see how you could have any simple, reliable code that would get around it. Even the "always lock" approach that xorsldjfsld suggested would be susceptible to the same issue, wouldn't it?
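By "always lock" I mean something like this, where pInstance_ is never read outside the lock (my rough paraphrase of what was suggested, not the exact code):

Singleton* Singleton::instance()
{
    pthread_mutex_lock(&mutex_);           // taken on every call, not just the first
    if (pInstance_ == 0)
        pInstance_ = new Singleton;
    Singleton* result = pInstance_;        // read while still holding the lock
    pthread_mutex_unlock(&mutex_);
    return result;
}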
the end of that paper wrote: |
Java 1.5’s volatile [10] has the more restrictive, but simpler, acquire/release semantics: any read of a volatile is guaranteed to occur prior to any memory reference (volatile or not) in the statements that follow, and any write to a volatile is guaranteed to occur after all memory references in the statements preceding it. |
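That's Java, but for comparison, here's roughly how the same acquire/release idea can be spelled out in C++ if you have std::atomic available. This is my own sketch, not something from the paper:

#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton* instance()
    {
        Singleton* p = pInstance_.load(std::memory_order_acquire);  // read-acquire
        if (p == 0) {
            std::lock_guard<std::mutex> guard(mutex_);
            p = pInstance_.load(std::memory_order_acquire);         // re-check under the lock
            if (p == 0) {
                p = new Singleton;
                pInstance_.store(p, std::memory_order_release);     // write-release: the
            }                                                       // construction above cannot
        }                                                           // be reordered past this store
        return p;
    }
private:
    Singleton() {}
    static std::atomic<Singleton*> pInstance_;
    static std::mutex mutex_;
};

std::atomic<Singleton*> Singleton::pInstance_(0);
std::mutex Singleton::mutex_;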