Friday, April 15, 2005

The moral equivalent of a mutex

In yesterday's post I used the phrase "The moral equivalent of a mutex." I claimed that it was not possible to write code that shares data between threads safely without one.

This prompted an anonymous response which cited Dekker's algorithm as an example of a software-only synchronization mechanism. I appreciate the response (even though I immediately rebutted it) because it prompted a train of thought about what the "moral equivalent..." is and why multithreaded code is so falupin' hard.

Mutex equivalents on Win32 include: CriticalSection, Event, Mutex, Semaphore, InterlockedIncrement, InterlockedDecrement, InterlockedExchange, and so on. Other OSes support some of these and have their own unique variants of various degrees of arcanity (SysV semaphores, for example). The point is that all of these objects are designed specifically to address thread synchronization.
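
For concreteness, here is a minimal sketch of the most common of these on Win32: a CRITICAL_SECTION guarding a shared counter (the init/bumpCount names and the counter itself are mine, just for illustration):

#include <windows.h>

CRITICAL_SECTION cs;            /* must be initialized exactly once */
long sharedCount = 0;           /* data shared between threads */

void init(void)
{
    InitializeCriticalSection(&cs);
}

void bumpCount(void)
{
    EnterCriticalSection(&cs);  /* block until we own the lock */
    ++sharedCount;              /* safe: no other thread is in here */
    LeaveCriticalSection(&cs);
}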

Dekker's algorithm is interesting because it is an algorithm for implementing a critical section. I'd count it as the moral equivalent... with one caveat. It doesn't work unless there is an underlying hardware synchronization mechanism.

The algorithm contains the following code:
 
flags[i] = BUSY;
while(flags[j] == BUSY)
<SNIP>
<if you get here you have access to the resource>


The problem shows up in the following sequence of events:

Thread 0: flags[0] = BUSY;
Thread 0: while(flags[1] == BUSY) // false, so thread 0 has access
Thread 1: flags[1] = BUSY;
Thread 1: while(flags[0] == BUSY) // flags[0] read from cache is still FREE,
                                  // so the condition is false and thread 1
                                  // also has access to the resource


I'm not saying that Dekker's algorithm is wrong. I'm saying that it contains an underlying and invisible assumption about how things are executed in the computer. In particular, it assumes that operations on shared memory are atomic and immediately visible to other threads. If that assumption holds, the algorithm works. Thus the algorithm reduces the problem of providing CriticalSection behavior to the problem of implementing that shared-memory property.
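
For reference, here is one way the full algorithm can be written out in C. This is only a sketch -- the dekker_lock/dekker_unlock names and the FREE/BUSY values are mine, and (per the caveat above) it is correct only if every load and store of flags and turn is atomic and immediately visible to the other thread:

enum { FREE, BUSY };

volatile int flags[2] = { FREE, FREE };
volatile int turn = 0;           /* whose turn it is to back off */

void dekker_lock(int i)          /* i is this thread's index, 0 or 1 */
{
    int j = 1 - i;               /* the other thread's index */
    flags[i] = BUSY;
    while (flags[j] == BUSY)     /* contention: both want access */
    {
        if (turn == j)           /* not our turn: back off */
        {
            flags[i] = FREE;
            while (turn == j)
                ;                /* spin until it is our turn */
            flags[i] = BUSY;
        }
    }
    /* if you get here you have access to the resource */
}

void dekker_unlock(int i)
{
    turn = 1 - i;                /* give the other thread priority */
    flags[i] = FREE;
}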

* * *

A programmer reading code has a mental model of how the machine works. Most of the time we use a very simple model: things happen sequentially, in the order that they appear in the source code we are reading. Having this simple model is A Good Thing[TM] because it allows us to concentrate on what the program is supposed to be achieving rather than how it goes about achieving it.

The problem with this simple model is performance. The code may say:

for(int i = 0; i < 10; ++i)
{
    someFunction(i * 10);
}


but the compiler may generate code that looks more like:


int j = 0;
do
{
    someFunction(j);
    j += 10;
} while (j < 100);


On many processors a literal translation of the second version will be faster than a literal translation of the first -- so the language standards committees have given compiler writers the freedom to emit the second version as a legal compilation of the first.

If you observe the state of the system before this code executes, and after it completes, you can't tell the difference between the two versions. The only observable difference is that one version runs a bit faster.

The programmer gets to write code in a way that describes the algorithm most clearly (in his mind, anyway), and the processor gets to execute code that generates the desired result faster. Everybody is happy.

* * *

Multithreading changes the rules. Rather than observing the before and after states of the system, you now have to be concerned with every intermediate state that might be visible to another thread. A lot of discussions of multithreading present C source code and discuss the implications of an interruption occurring between any two statements. The discussion of incorrect algorithms that precedes the presentation of Dekker's algorithm uses this technique to identify the points of failure. This is a step in the right direction, but it's still not good enough.

Consider the following statement:

volatile int i;
a[i] = b[i] + c[i];

and ask what happens if i can be changed by an outside agency (another thread, a memory-mapped I/O device, etc.). For example, suppose that before this statement executes i has the value 0, but sometime during its execution i takes on the value 1. How many possible outcomes are there for this statement?

The answer surprises many people. There are eight possible outcomes, because the compiler is free to evaluate the three instances of i in any order it chooses. Depending on that order, each of the three reads may independently see the old value or the new one: 2 × 2 × 2 = 8. To analyze an algorithm containing the above statement in a multithreaded environment you must consider all eight of these cases.
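
One way to see where the eight cases come from is to write out a lowering the compiler is permitted to produce, with each read of i as a separate load (the temporary names are mine):

/* Each of these three loads may independently observe the old value
   (0) or the new one (1), depending on the order the compiler picks --
   hence 2 × 2 × 2 = 8 outcomes. */
int i1 = i;      /* index used for c[] */
int i2 = i;      /* index used for b[] */
int i3 = i;      /* index used for a[] */
a[i3] = b[i2] + c[i1];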

So all we need to do is break each statement down into phrases that can occur in arbitrary order and analyze the effect of an interrupt between any two phrases. Are we there yet?

Well, it depends on how many phrases you see in the following statement:

++j;

Assuming int j;, this probably compiles into a single machine-language instruction: inc [j] -- hence one phrase, right?

Nope. At the microcode level, this instruction says: retrieve the value of j from memory; add one to it; store the new value back into memory location j. That's two phrases. (Why not three? Because "add one to it" is internal to the processor and therefore invisible to other threads.)
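
A minimal POSIX-threads program (names and loop count mine) makes the two phrases visible: with two threads hammering an unsynchronized ++counter, some read/add/store sequences interleave and increments get lost, so the final count usually comes up short:

#include <pthread.h>
#include <stdio.h>

#define LOOPS 1000000L

long counter = 0;                /* shared, deliberately unprotected */

void *hammer(void *arg)
{
    for (long n = 0; n < LOOPS; ++n)
        ++counter;               /* load; add one; store: two phrases */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, hammer, NULL);
    pthread_create(&t1, NULL, hammer, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    /* Expected 2000000; on a multiprocessor this usually prints less. */
    printf("counter = %ld\n", counter);
    return 0;
}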

So, we've gotten to the microcode level. We must be at the right level of analysis by now.

Sorry. To truly understand, you have to throw in instruction pipelining and cache (remember the cache?). Once you take those into account, your model of what really happens in the machine is complete enough to analyze the thread-safety of the program.

Alas, pipelining and caching are truly beyond the control of the programmer, so the problem of ensuring thread-safety appears to be unsolvable.

Except!

Thank goodness there's a way to tell the hardware to switch momentarily from its default anything-for-speed mode into a safe-for-the-silly-programmer mode. Every processor has at least one synchronization operation that does things like flushing and/or updating the cache, locking the bus for a read/alter/rewrite cycle, and so on. These operations tend to produce a dramatic slowdown because they defeat all the work that went into designing the cache, the pipeline, etc., to speed things up. The other problem is that on many CPUs the hardware guys decided these operations should be reserved for kernel mode, so enlisting the hardware's help may involve an OS call, with its corresponding high-overhead context switch.
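
For example, the broken ++j from earlier becomes safe once it is routed through one of these operations. A sketch using the Win32 call mentioned at the top of this post (the bump name is mine):

#include <windows.h>

volatile LONG j = 0;

void bump(void)
{
    /* Performs a locked read/alter/rewrite cycle (e.g. a "lock"-prefixed
       instruction on x86), so no increment can be lost between threads. */
    InterlockedIncrement(&j);
}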

In any case, I think this justifies Rule #1: Multithreading is hard.
