Thursday, December 29, 2005

An observation concerning functional programming in C++

The following code was extracted from a live program.

namespace
{
  class count_bytes
  {
  public:
    count_bytes(size_t& total)
      : total_(total)
    {
    }
    void operator()(
      const Element& element
      )
    {
      total_ += element.bytes();
    }
  private:
    size_t& total_;
  };
}
size_t
ElementSet::bytes() const
{
  size_t total = 0;
  std::for_each(elements_.begin(), elements_.end(), count_bytes(total));
  return total;
}

By my count that's 26 lines of code. To be fair I should admit that I tightened it up a bit. The original was well over 30 lines.

Compare that to:

size_t
ElementSet::bytes() const
{
  size_t total = 0;
  for(size_t i = 0; i < elements_.size(); ++i)
  {
    total += elements_[i].bytes();
  }
  return total;
}


Ten lines. (And yes, you could use an iterator rather than an index.)

Functional Programming Motto: You may have to type a whole lot more, but at least your code will be harder to understand.

Actually functional programming is just fine when you need it. The problem happens when it's the "Wrong Tool For The Job[TM]." There seems to be a lot of that goin' 'round [sigh].
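
In fairness, the standard library already offers a middle path. Here's a sketch using std::accumulate with a plain helper function (the helper is mine, not from the original program, and it borrows Element and ElementSet from the example above):

#include <numeric>

namespace
{
  size_t add_bytes(size_t total, const Element& element)
  {
    return total + element.bytes();
  }
}

size_t
ElementSet::bytes() const
{
  // accumulate carries the running total for us -- no hand-rolled functor needed
  return std::accumulate(elements_.begin(), elements_.end(), size_t(0), add_bytes);
}

Same algorithms library, roughly the same line count as the index loop, and the intent -- summing -- is stated by the algorithm's name instead of buried in a 26-line functor.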

Saturday, December 10, 2005

Thread return value/threadID/ handle/ join/detach vs C++

Two entries ago, I opined that in a C++ program there should be an object that represents every thread. One interesting consequence of this statement is that many of the "features" of the OS-supplied multithreading support become unnecessary and even counterproductive.

For example, on almost all platforms, start_thread (by whatever name) calls a function with a void* argument. That's ok. Almost all thread functions begin life with a cast. However the thread function is expected to return a value: usually an int but maybe a DWORD or even a void * or whatever depending on your platform. In C++ the proper return value from this function should be 0 -- always! (Actually it should be void, but it seems a shame to disappoint the OS that's eagerly awaiting the zero. (And besides the compiler won't let me get away with it.))

Why?

Because if there is an object associated with the thread, the thread has a much richer channel through which to return information -- the members of the object.

And speaking of return values, many times no one cares what the thread has to say as it exits. [No death-bed epigrams for you, Thread, you're outta here.] The joinable vs detached concept in many OSs accommodates this desire on the part of the thread to get the last word in.

However, since the thread now has a whole object available through which to return values, and since anyone who cares can keep a smart pointer to the object (remember the purpose of smart pointers is to manage object lifetimes.) the whole joinable vs. detached issue becomes moot.

Threads in C++ should ALWAYS run detached. Rather than joining a thread, you can wait on a condition in the thread's object (safely because you have a smart ptr to guarantee the condition will be around to be waited on.)

This approach might be much kinder to your system resources. Many OS's hang on to lots of information about a terminated thread -- possibly even its entire stack, register save storage, open FDs, etc. -- waiting for someone to join it and tell them the resources can be freed. Using an object can be a considerable savings.
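
To make that concrete, here's a minimal sketch of the idea -- the names are mine, and it uses std::thread and std::condition_variable (which didn't exist when this was written) purely for illustration:

#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>

class Worker
{
public:
  // Start a detached thread; the shared_ptr captured by the lambda is
  // the "self pointer" that keeps the object alive until the work is done.
  void start(std::shared_ptr<Worker> self)
  {
    std::thread([self]
    {
      size_t result = self->do_work();
      std::lock_guard<std::mutex> lock(self->mutex_);
      self->result_ = result;
      self->done_ = true;
      self->done_cond_.notify_all();  // report back through the object
    }).detach();                      // always detached -- no join, ever
  }

  // Anyone holding a shared_ptr to the Worker can wait for the answer.
  size_t wait_for_result()
  {
    std::unique_lock<std::mutex> lock(mutex_);
    done_cond_.wait(lock, [this] { return done_; });
    return result_;
  }

private:
  size_t do_work() { return 42; }     // stand-in for the real work

  std::mutex mutex_;
  std::condition_variable done_cond_;
  bool done_ = false;
  size_t result_ = 0;
};

The caller keeps its own shared_ptr (auto w = std::make_shared<Worker>(); w->start(w); ... w->wait_for_result();), so the condition -- and the result -- are guaranteed to still be there to wait on, no matter which thread finishes first.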

Which brings up the issue of thread IDs. If your interaction with the thread is via the object, you don't need a thread ID to join the thread -- and you certainly don't need a thread ID to kill the thread (see blog entry n-1) -- so the thread ID becomes much less valuable. It still has some value in identifying the thread in log messages (Ever tried to follow a log that didn't include thread IDs in its messages? Me too, and I still regret it.) And the thread ID might also be involved in managing thread specific storage -- although many uses of TSS could be handled better by storing the data (or pointers thereto) in the thread's object.

Speaking of TSS. Think of it as FORTRAN COMMON for the thread-wielding-crowd. There's usually a better way, but sometimes the better way requires some thought <insert cynical comment here.>

Foot shooting

In my previous post I criticized ACE for allowing the programmer to shoot himself in the foot. I was wrong. The problem is not that ACE allows you to do dangerous things. The problem is it doesn't provide enough incentive to convince a lot of programmers NOT to do the dangerous things.

If I work really hard I can imagine a situation in which the only way to save my life would be to shoot myself in the foot. Witness the hiker a couple of years ago who cut off his trapped arm so he could hike out of the wilderness with the rest of his body parts still functioning. That doesn't mean that everyone who ventures into the wilderness should pack an amputation kit just in case -- or, if they do, it should be in a package that is clearly labeled: "For emergency use only." Once you open the package, you should find another package that says, "No, this is not an emergency, do it the right way." Only after opening THAT package should you find the Acme self-amputation, foot-targeting, and dangerous OS functions kit. [Pat. Pending]

So, if you want thread A to kill thread B, all you need to do is call the:
I_AM_A_BLINKING_IDIOT_FOR_USING_THIS::kill_thread() method.

Monday, December 05, 2005

Practical Threading

A while ago, I started to talk about multithreaded programs in this blog. Alas, I distracted myself into talking about "Why Thread" -- an important topic and one that is often misunderstood-- when I should have been talking about "How to thread" because "How" is done badly even more often than "Why".

My recent work with multithreading has been with ACE and boost threads, so I'll use them as examples (no offense, guys.)

So, "How to thread in C++"

C++ is an object oriented language (or at least it can be used to write object oriented programs, which is not quite the same thing but is close enough for now.) An object oriented program should have an object representing/corresponding to each entity the program is dealing with.

A thread is an entity that needs to be dealt with by a multi-threaded program.

Rule #1: There should be a one-to-one relationship between threads and thread-related objects in a C++ program.

This does not mean an application programmer (a programmer who is dealing with objects that represent "real-world" entities) should be thinking about thread objects. On the contrary the thread objects should be hidden so well that they do their job without distracting the application programmer from the real work to be done.

"But wait!" you say, "isn't that a lot of overhead? Expecially," you add -- looking ahead in this entry -- "when you have to allocate the object on the heap."

"Don't be ridiculous." I calmly reply. "You're planning to start a thread with it's own stack, register storage, and who knows what resources tied up in the OS and you worried about a simple malloc!"

But I digress (so what else is new)

Where was I? Oh, yeah. "There should be a one-to-one relationship between threads and thread-related objects." But many thread-support libraries (ACE) don't always do this. Instead they have objects like a thread group that represents some number of threads. As soon as you do this, you lose control of individual threads and that's a problem.

I'm not saying that there shouldn't be objects like thread pools, just that a thread pool should never interact directly with an OS thread. Instead it should interact with the C++ object that represents the thread -- of which there will be as many as there are threads.

Oh, didn't I mention rule 2: All interaction with a thread should be through its object. I guess that seems obvious to me, but again it's apparently not obvious enough to the authors of thread libraries [ACE], because they don't actually do it that way (sigh). I guess they think the programmers would object to having a gun that wouldn't fire when pointed directly at your foot (or more vital parts of your anatomy.)

Ok, it's time for rule three: The lifetime of the thread-related object must be longer than the lifetime of the thread. It should exist (however briefly) before the thread is started, and should continue to exist (however briefly) after the thread exits.

Again, this is something that many existing libraries (ACE, boost, et al.) get wrong.

Oh, dear. We've entered the hazardous realm of object lifetime management [sometimes misrepresented as an issue of object ownership, but thinking in terms of ownership muddles the issue.]

Fortunately object lifetime management is an area in which the *SILVER* *BULLET* solution has emerged -- reference counted pointers! Unfortunately, C++ does not provide the tools to do reference counted pointers well, but it is possible to come close. boost::shared_ptr and ACE_Strong_Ptr are examples of refcount pointers done pretty darn good if not perfectly.
[ACE_Refcounted_Autoptr on the other hand is a disaster waiting to happen -- please don't use it (at least not in an airplane I might fly in!)]

So, we're going to use boost::shared_ptr to manage the lifetime of the ThreadRelatedObject. Cool!

class ThreadRelatedObject;
// alias TRO

typedef boost::shared_ptr<ThreadRelatedObject> ThreadRelatedObjectPtr;
// alias TROPtr


That leads to the next rule (4 I think): The TRO must be the one to start the thread (deftly satisfying half of rule three by guaranteeing that the TRO exists before the thread does.) The other half of rule three is handled by rule 5: The TRO must have its own, private, TROPtr so that it can be involved in its own lifetime management. We'll call this the self pointer. The last thing the thread does before exiting will be to reset its self pointer -- allowing the TRO to be deleted if no one else remembers it. [So if you're thinking object ownership you might not understand why an object needs to own itself, but it seems perfectly reasonable to think that an object might want to manage its own lifetime.]

And then of course, there's rule 6: Any object outside the TRO that wishes to interact with the thread must do so via a TROPtr. Otherwise it can't guarantee that the TRO still exists.

A point of information: all boost::shared_ptrs to the same object must touch each other -- they have to share a single reference count. I.e.

Widget * w = new Widget;
WidgetPtr p1(w);
WidgetPtr p2(w);

and you have a disaster because p1 didn't touch p2 -- each keeps its own count, and the Widget gets deleted twice.
No one would code the above, but they might code:

WidgetPtr p3(new Widget);

which looks perfectly reasonable, and in fact is the preferred technique up 'till the point that the Widget does

class Widget
{
  Widget()
    : self_(this)
  {
  }
  WidgetPtr self_;
};

Kaboom.

Anyway, it's time for a very-important-point-that-*everybody*-gets-wrong. Rule 7: The TRO must not -- repeat must not -- start the thread in the constructor. Why?
Suppose the constructor:
1) creates the self pointer
2) starts the thread
3) returns the self pointer to the creator of the TRO (oops, see above!)

Nevermind that point 3 doesn't work because the constructor can't return anything other than this, which is not a TROPtr -- that's just a deficiency in the C++ language, and there are ways [hack] around that problem -- like private constructors with static create() methods [hack] and my favorite: passing to the constructor a reference to a TROPtr which the constructor initializes [hack].

The real reason for rule 7 shows up somewhere between step 2 and step 3 of the constructor, when the thread started by step 2 does what needs to be done and exits(!) before step 3 is executed. The static create method can't handle this. The pass-a-ref-to-ptr-to-the-constructor hack mentioned above can cope if done very carefully (a very careful hack, eh?) but it gets ugly -- especially when layers of inheritance happen. Actually it's kind of fun to get it wrong, then watch one of your coworkers try to figure out why the pointer returned from new points to an object that's already been deleted -- but only if you have a coworker who deserves it. Too bad at this job I don't have any candidates.

Why do ugly when there is a simple solution?

Rule 7a: There should be a start method on the TRO that actually starts the thread.

One of the fun things about programming is that when you find the right solution, stuff works! Separating object construction from thread initiation is one of those right solutions. The calling object gets the luxury of continuing to initialize the TRO after creating it and before starting it. Some things are best not done in a constructor.

Separating construction from thread initiation also makes it considerably easier to create a generic base class for handling thread related issues. The ugly create and/or pass-a-pointer hacks aren't necessary (go ahead, try to figure out how to do a static create method in a base class). This means that we can get the solution right once and never worry about it again.

So that's what I did.

My solution, the one that's actually being used, is based on ACE thread support (ACE does provide good platform independent thread support [as long as you don't use Thread Specific Storage (grin)]; I just don't like the way it's packaged.) Unfortunately ACE tends to be a bit intrusive -- it's a shame to take on all that baggage just to get a few platform-neutral thread-related functions.

That's why I went looking at boost threads. Most of boost is truly high-class work. Boost threads, alas, is not. Although it passes the "use conditions rather than events" test [an altogether different topic.], it fails the object-lifetime management test.

So I guess I won't be publishing my "universal thread support the right way" base class quite yet. Maybe I'll just publish the ACE-based version and someone will point me to a platform independent thread library that separates object construction from thread initiation, supports conditions rather than events, and does not come with tons of baggage.
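
For the record, here's roughly the shape I have in mind -- a sketch only, with invented names, using std::thread and std::enable_shared_from_this (neither of which existed when this was written) instead of the ACE or boost machinery a real implementation would need:

#include <memory>
#include <thread>

// Sketch of a thread-related-object base class (rules 1 through 7a).
// Assumes the derived object is created into a shared_ptr,
// e.g. std::make_shared<MyTask>().
class ThreadRelatedObject
  : public std::enable_shared_from_this<ThreadRelatedObject>
{
public:
  virtual ~ThreadRelatedObject() {}

  // Rule 7a: construction and thread initiation are separate steps.
  void start()
  {
    self_ = shared_from_this();                 // rule 5: the self pointer
    std::thread([this]
    {
      run();                                     // the derived class's thread body
      std::shared_ptr<ThreadRelatedObject> last = std::move(self_);
      // "last" dies here; if no one else holds a TROPtr, so does the object.
    }).detach();                                 // always run detached
  }

protected:
  virtual void run() = 0;                        // supplied by the derived class

private:
  std::shared_ptr<ThreadRelatedObject> self_;
};

A derived class overrides run(), the creator calls start() on a freshly made shared_ptr (keeping its own copy if it cares about the results), and rules 3 through 7a more or less take care of themselves.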

Just remember, the multicores are coming. Do you know what your threads are doing?

Thursday, October 20, 2005

Strangelove

I bought a copy of Dr. Strangelove last weekend. I hadn't seen it in years, so I had the joy, once again, of discovering all the little gems buried in the movie.

"You can't fight here! This is a War Room!"

To really appreciate the movie, you have to understand it in the context of the early '60s with the commie scare, the bomb scare, and, yes, the fluoridated water scare.

So where is the moviemaker today who'll do a comedy about terrorism, al-Qaeda, homeland security -- with a side jab or two at intelligent design? Yes, we really do need to laugh while we watch the World Trade Centers burn -- otherwise the terrorists win.

Remember at the end of Strangelove the doomsday device was triggered while George C. Scott was warning the president about the "mine shaft gap."


Dale

Thursday, October 13, 2005

We've come a long way...

I just stumbled over this code deep down in ACE -- a C++ library/framework that prides itself on its portability:

ACE_OS::sprintf (date_and_time,
ACE_LIB_TEXT ("%3s %3s %2d %04d %02d:%02d:%02d.%06d"),
day_of_week_name[local.wDayOfWeek],
month_name[local.wMonth - 1],
(int) local.wDay,
(int) local.wYear,
(int) local.wHour,
(int) local.wMinute,
(int) local.wSecond,
(int) (local.wMilliseconds * 1000));
return &date_and_time[15 + (return_pointer_to_first_digit != 0)];

A word of explanation. For a long time the ACE community didn't believe in bool, so "return_pointer_to_first_digit" is a bool-like substance that when equal to zero means false. Thus (return_pointer_to_first_digit != 0) converts the pseudobool to a genuine bool.

Question: What value does *your* favorite C++ compiler use to represent true?

Dale

Monday, August 29, 2005

Coming soon to a cell phone near you...

Just a reminder: In a month, cell phone numbers are being released to
telemarketing companies and you will start to receive sale calls.
YOU WILL BE CHARGED FOR THESE CALLS
To prevent this, call the following number from your cell phone:
888/382-1222. It is the National DO NOT CALL list. It will only take a
minute of your time. It blocks your number for five (5) years.

You can also use the following web link:

https://www.donotcall.gov/default.aspx

Friday, August 26, 2005

The opposite of in...

Quick, what's the opposite of login?

The answer, of course, is logout.

And the opposite of logon?

Logoff!

So why do some systems want you to login, then logoff; while others prefer that you logon and logout?

I must be developing another pet peeve.

Friday, August 19, 2005

Why I worry about Ruby

In the FAQ on the official Ruby site, Matz (author of Ruby) is quoted as saying:

Well, Ruby was born on February 24 1993. I was talking with my colleague about the possibility of an object-oriented scripting language. I knew Perl (Perl4, not Perl5), but I didn’t like it really, because it had smell of toy language (it still has). The object-oriented scripting language seemed very promising.

I knew Python then. But I didn’t like it, because I didn’t think it was a true object-oriented language—OO features appeared to be add-on to the language. As a language manic and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for, but couldn’t find one.

Let's see: In 1993 he had been an OO fan for fifteen years. He must have been using Simula in 1978. I'll give him the benefit of the doubt on that one, but...

Python is not object-oriented enough? OO features tacked on?

Apparently Matz doesn't quite get it. In Python *everything* is an object. Witness the following interactive session:

>>> type(1)
<type 'int'>
>>> dir(1)

['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__', '__delattr__', '__div__', '__divmod__', '__doc__', '__float__', '__floordiv__', '__getattribute__', '__getnewargs__', '__hash__', '__hex__', '__init__', '__int__', '__invert__', '__long__', '__lshift__', '__mod__', '__mul__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__str__', '__sub__', '__truediv__', '__xor__']
As you can see, an integer, like all the other native data types, is an object with a value, a type, methods -- the whole shebang.

Not only that, but!

>>> def spam():
... """This is the spam function"""
... print "Peanut butter and Spam sandwich"
...
>>> spam()
Peanut butter and Spam sandwich
>>> type(spam)
<type 'function'>
>>> dir(spam)
['__call__', '__class__', '__delattr__', '__dict__', '__doc__', '__get__',
'__getattribute__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc',
'func_globals', 'func_name']


A function is an object. It can be manipulated just like any other object in Python. It just happens to support the __call__ method. Yes, of course you can create your own class of object that supports the __call__ method and use it anywhere a "normal" function is expected.

Other types of objects in Python include modules (in Java they're called packages); chunks of compiled code (that's the func_code property of the function above); "None"; "NotImplemented"; and "Ellipsis"; and more, all of which are available to be manipulated by the programmer as objects (if you're into that kind of thing), or to just quietly do their job if you'd rather concentrate on the important stuff.

Of course, Python supports user defined classes from which instances (i.e. objects) can be created. Like the rest of Python, class definitions are syntactically and conceptually clean. And yes, the class itself is just as much an object as its instantiations are.

About the only OO feature I can think of that Python doesn't support is function overloading -- it's kind of hard to do in a dynamically typed language (grin).

Back in '93 Python was still a bit young (it was originally released in '91) but even then it was obvious that "Guido knows Objects."

I guess Matz was misled because Python's OO nature is not constantly in-your-face. You can code in Python without thinking about objects (unless, of course, you want to). Instead you get to think about the problem you're trying to solve. Hello world in Python is:
print "Hello, World!"
The objects are there doing their job, so you don't have to worry about them.

Maybe Matz did a better job of designing Ruby than he did of understanding Python.

Tuesday, June 21, 2005

Simulating a loom. UI vs creativity.

I bought another weaving design program last week at the Midwest Weavers' Conference. Both Tina and I use our "old" program regularly to design cloth. Why would I pay $100 for a new program when I have a perfectly good program that obviously works? The answer says something about the impact of user interface design on creativity.

Since I can't assume that everyone who reads this understands how a loom works, I have to digress. I'm going to describe the design issues faced by a weaver using a jack loom. There are many other types of looms that have their own design issues, but jack looms are very common among handweavers so it's a good place to start.

The purpose of a loom is to interleave two sets of threads that run at right angles to each other -- thereby creating cloth. (All you people with triangular looms, hush -- I'm trying to keep this simple.) One set of threads, the warp, is installed on the loom before the actual weaving begins in a process known as dressing the loom. The second set, the weft, is added to the cloth one thread at a time by running a shuttle containing a bobbin full of weft thread between the warp threads in a preplanned pattern. (and I'm not even going to *mention* how many details and variations I just omitted.)

When you ask a weaver to describe his or her loom, you can be sure that one of the first things they mention is how many shafts the loom has (unless, of course, they've been weaving for a long time in which case they'll tell you how many harnesses the loom has. I'm sure there's a really good reason for the terminology change -- other than to confuse the innocent.) That's because the number of harnesses (oops, I mean shafts) has a strong influence on the complexity of the cloth that can be produced. So what's a shaft?

Part of dressing the loom is threading. Each of several hundred threads in the warp goes through the eye of a heddle (Imagine a large (12" long) needle with the eye in the middle rather than near one end.) The heddle is attached to a shaft, so that when you lift the shaft, all of the heddles attached to that shaft, and therefore all the threads in the eyes of those heddles, are lifted. The remaining warp threads -- the ones that are attached to shafts that do not get lifted -- remain down, and a triangular space is opened up between the two sets of threads. This space, called a shed, is where the shuttle is thrown -- trailing its weft thread behind it.

Once the shuttle is through the shed, the shed is closed, the weft thread is pressed into place at the edge of the newly woven cloth using part of the loom called the beater, and a different shed is opened for the next weft thread. Thus each weft thread goes under the lifted warp threads and over the remaining ones, and cloth happens.

Since each warp thread is associated with a single shaft, the warp is divided into independently controllable sets of threads. The number of shafts on the loom defines an upper limit on the number of sets of warp threads. More than one shaft can be lifted to produce any particular shed, so the number of potential sheds (aka lifts) goes up dramatically as the number of shafts increases. In fact, a loom that has n shafts can produce 2**n - 2 meaningful lifts. (The -2 is there because it doesn't make sense to lift 0, or n, shafts.) Some of the common cases are:
2 shafts -> 2 lifts
4 shafts -> 14 lifts
8 shafts -> 254 lifts.
16 shafts -> 65534 lifts.

Thus motivating a common malady among weavers: shaft envy [no questionable jokes allowed here] and its converse: shaft pride [note 1].

There is another limitation, however, that shows up when I describe a previously unmentioned part of the loom -- the treadles. In order to lift the shafts, the weaver presses down a foot treadle. Each treadle is tied to one or more shafts so that the shafts are lifted as the treadle is pressed. The number of treadles imposes an additional upper bound on the number of lifts. For example, most 8 shaft looms have 10 treadles, so part of the design process is to select which of the 254 possible lifts will be used during the weaving process. Of course it is possible to press more than one treadle at the same time (two feet can produce 100 possible lifts on a 10 treadle loom), and of course the tie up between treadles and shafts can be changed during the weaving process, but that's a slow and awkward proposition. Most handweavers using treadle-operated looms end up restricting the number of distinct lifts to the number of treadles.

...unless....

Unless the loom has a dobby instead of treadles and a tie-up. For computer history buffs, dobbies are the thing that Joseph Jacquard invented that led via Herman Hollerith to punched cards (which no one under 30 remembers, anyway.) A mechanical dobby uses holes punched in a wooden board, or more commonly nowadays pegs screwed into a wooden board, to indicate which shafts should be lifted to form a shed. These cards are chained together so when the weaver is ready to move to the next weft thread, the chain is advanced to the card containing the next lift pattern.[note 2]

A dobby provides two benefits. First the number of possible sheds is no longer limited by the number of treadles -- the weaver can design to the full capability of the loom, and second the weaver no longer has to remember the treadling sequence. No longer is the complexity of the pattern limited by the capacity of the weaver's memory, or the speed of weaving limited by the need to carefully follow a treadling sequence.

An electronic dobby takes this one step further. Rather than pegs in a wooden card to select a lift pattern, an electronic dobby uses solenoids to select the shafts to be lifted. These solenoids can be computer activated, so the chain of dobby cards can be replaced with a lift plan stored in the computer. This removes yet another limitation in that the length of a woven pattern is no longer limited by the number of dobby cards in a chain. Instead it is limited only by the capacity of the computer and the ability of the weaver to design the pattern. Suddenly those 65 thousand possible lift patterns are accessible -- if only the weaver can figure out how to actually use them.

Which brings us back, finally, to the issues of user interface design for the computer assisted design programs used by weavers -- a topic for the next entry since this has gotten way too long.

[note 1] Tina and I have looms with 4, 8, 16, and 24 shafts. The 16 and 24 shaft looms are computer controlled.

[note 2] The dobby controlled jack loom described here is not the same as a modern Jacquard loom. A Jacquard loom provides individual control of each thread. It could be (but isn't) described as a loom with several hundred shafts. Jacquard looms typically cost 10 to 100 times as much as dobby controlled jack looms, and wouldn't fit in a handweaver's studio anyway.

Thursday, May 19, 2005

When worlds collide

I love it when two separate parts of my life collide. Ruth Blau just posted this on the WeaveTech List

This past weekend, I attended a wonderful knitting workshop with Debbie New. One of her approaches to designing is based on the Sierpinksy triangle fractal. You can do it with color (either two colors, or better yet, with dark & light and then use any colors you want), with stitches (knit vs. purl) or w/ broader design concepts, e.g, cables. She calls it "rule-based knitting." At any given time, the stitch (or color or whatever) you use is determined by the stitch's surroundings. Here's an example. Assume you're ready to knit a stitch. Look at the three stitches below it (one directly below and the two on each side of it). Your rule is this: if one (and only one) of the stitches below is purl, then you purl. Otherwise you knit. It could also be this: if one and only one of the stitches below is light then use light yarn, otherwise use dark. You make this decision for every stitch in the row (yes, it's slow going).

If you happen to have Debbie's book "Unexpected Knitting," this is in the section called Cellular Automaton Knitting.

Intel Exudes Confidence

Headline and lead-in from an InformationWeek article

Intel Business PCs Won't Include Dual-Core Processors
The business-PC platform includes only technologies that have been validated, and the chipmaker promises it will remain stable and unchanged for the next 12 months.
By Darrell Dunn

Are you getting a warm fuzzy feeling about the Intel dual core chips?

Monday, May 16, 2005

Pauli Exclusion Principle applied to Airlines

Pauli Exclusion Principle

No two people on any particular airplane shall have paid the same amount for their ticket.

Wednesday, April 27, 2005

Oops. My mistake...

In a previous post on threading I gave a high-level pseudocode description of a multithreaded MPEG decoder.

It was wrong.

A revised (and, I hope, more correct) version is:

The new high level design for video looks like:

  • Accept input, and separate into VOBs
  • Hold VOBs for processing [MT]
  • Pick a VOB and demux its content into substreams
  • Queue packets for decoder(s)[MT]
  • Decode stream into buffer in decoded VOB
  • Wait for VOB completion [MT]
  • Hold completely decoded VOBs[MT]
  • Get next VOB and deliver decoded substreams to presentation engine.
  • Hold decoded, substreams for presentation [MT]
  • Mix and present decoded content.

* * *
My first instinct was to simply edit the original post and replace the incorrect "code" with the corrected version.

Then I remembered the programmer's diary, I mentioned in another post.

One of the hardest parts of becoming a good programmer is learning how to deal with mistakes. First you have to accept that you and the people you work with are going to make mistakes. Then you have to train yourself to react positively to your own and other peoples' mistakes.

Reacting positively to your own mistakes means you fix your mistakes. You don't hide them. You don't defend them. You just fix them -- and clean up any consequences resulting from the mistake.

Reacting positively to other peoples' mistakes means you bring them to their attention in a non-threatening way. You don't fix their mistakes for them (at least not silently.) You don't help them hide their mistakes. You don't gloat over their mistakes (although it's hard to avoid a certain level of "boy, I'm glad I didn't make that mistake.") What's important is that the person who makes the mistake learns that it happened, and that the mistake gets fixed.

And finally, when someone brings one of your own mistakes to your attention, the only proper response is "Thank you." After saying that then you can proceed to analyze the report to see if it's correct, but first you must reward the person who respected you enough to tell you about your (possible) mistake.

A lot of this comes from another landmark book about software development: The Psychology of Computer Programming, by Gerald Weinberg.

* * *
I predict that we will never have a good programmer as president of the United States (and vice versa.)

* * *
So why did I make this mistake? Because I was thinking about multithreading on a frame-by-frame basis. Then when I switched to thinking about it on a VOB-by-VOB basis I didn't completely reset my mental model of the problem.

How can I avoid making this kind of mistake in the future? (Or how can I make it less likely to happen?) Tough question -- maybe awareness of the potential pitfall will help.

Thursday, April 21, 2005

Baby Geese and Parrots

The eggs in the goose nest right outside the door to our building here hatched the other day. Within hours after they hatched, they were cute bundles of fuzzy yellow feathers running around on their own -- much to their mother's dismay. A day later the parents marched their goslings over to a near-by lake.

How come baby geese are so competent and cute when baby parrots are totally helpless and look like something from a grade C SF flick?

For example

My theory is that baby parrots are too busy growing an intelligent brain to have any energy left over for cute. Geese seem to make do without benefit of brain.

Tuesday, April 19, 2005

Parallel by force

Suppose you've created the multithreaded MPEG decoder as outlined in the previous entry. Remember the good reason for multithreading the MPEG decoder was:
  • The task is inherently multithreaded so a multithreaded solution results in simpler code.

In fact the MPEG decoder almost begs to be multithreaded.

So one day your multithreaded MPEG decoder is happily zipping thru an MPEG stream that contains just video and one audio track. The following threads are running:

#1 Accept and demux input
#2 Decode video substream
#3 Decode audio substream
#4 Mix and present substreams.

Then your boss shows up and says, "I spent all this money on a 16 CPU superserver and your application is only keeping it 25% busy. I want you to increase the parallelism so all the CPU's will be kept busy. NOW!"

* * *
Now what do you do (other than looking for a new job with a new boss.)

You've already added the "natural" multithreading that is inherent in the problem. How can you increase parallelism even further?

It's time to try to apply the other good reason.

  • The task can be cleanly decomposed into multiple sub-tasks that are highly independent; the independent tasks can use resources in parallel; and the benefits of this parallel usage outweigh the overhead of multithreading. (All three conditions must be true.)

Hmmm....

A video stream is a series of frames. Maybe we can create multiple threads and have each thread decode a separate frame. So we add a component that separates the stream into a series of undecoded frames (yes, this is fairly easy to do without actually decoding the frames) and a pool of threads that processes these frames. Each thread from the pool picks up the next un-decoded frame, decodes it, and adds the result to a collection of decoded frames. Since frame-decode time varies as a function of the complexity of the image, we also need a component to shuffle the decoded frames back into the correct order.

Voila, we can keep as many CPU's busy as we want to by looking forward far enough. Makes sense, right?

Nice theory, anyway. When you start coding the frame decoder, you'll quickly run into a major stumbling block. One of the techniques MPEG uses to compress the video image is to send most frames as a diff from the previous frame. This is very effective -- especially when the movie is showing relatively static scenery (it doesn't work so well during explosions.) Thus as you decode frame #n you regularly have to refer back to frame #n-1 to apply the diff and thereby create the final result. Even more interesting, sometimes you have to look *forward* to frame #n+1! (Don't ask, the MPEG folks are a twisted bunch.)

So the thread-per-frame solution sounds plausible (you can probably sell it to your boss) but fails the "independence" test. Back to the drawing board.

Fortunately for DVDs there's another approach. In order to support fast forward, slow motion, jump to scene, etc., the video on a DVD is carved up into chunks called video objects (VOBs). A VOB contains about half a second worth of video, audio, subtitles, etc., and what's more important, each VOB is independent of the VOBs that precede it and follow it. So, although the thread-per-frame idea was a bust, a thread-per-VOB approach will work nicely. You may need a priority scheme to ensure that the thread that's decoding the VOB scheduled to show up next on the screen gets all the resources it needs, but other than that you've found a clean division of the main task into subtasks that can take advantage of the available CPU's by running in parallel.

The new high level design for video looks like:
  1. Accept and demux input
  2. Queue packets for decoder(s)[MT]
  3. Separate into VOBs
  4. Hold VOBs for processing[MT]
  5. Decode VOB
  6. Hold decoded VOBs for reordering[MT]
  7. Reorder decoded VOBs into decode stream
  8. Queue decoded streams for mixer.[MT]
  9. Mix and present substreams.

This approach has added some more synchronization spots -- one to hold the separated VOBs waiting to be decoded, and one to hold the decoded VOBs until they can be placed in the correct sequence and passed on to the mixer. It might be tempting to try to merge the demuxer with the VOB separator, or the decoded-VOB holder with the decoded stream queue, but don't give in to temptation. Solve one problem at a time and let the inherent parallelism take care of improving performance. [or at least get it working correctly and profile it before optimizing.]
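
As an example of what one of those [MT] components might look like, here's a sketch of the reordering stage -- invented names, written with modern C++ primitives for brevity, and nothing more than an illustration of "one problem at a time":

#include <condition_variable>
#include <map>
#include <mutex>

// Releases decoded VOBs strictly in sequence order, no matter
// what order the decoder threads finish in.
template <typename DecodedVob>
class VobReorderer
{
public:
  // Called by any decoder thread when its VOB is finished.
  void put(unsigned long sequence, DecodedVob vob)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    ready_.emplace(sequence, std::move(vob));
    available_.notify_all();
  }

  // Called by the mixer side; blocks until the next-in-order VOB arrives.
  DecodedVob get_next()
  {
    std::unique_lock<std::mutex> lock(mutex_);
    available_.wait(lock, [this] { return ready_.count(next_) != 0; });
    auto it = ready_.find(next_);
    DecodedVob vob = std::move(it->second);
    ready_.erase(it);
    ++next_;
    return vob;
  }

private:
  std::mutex mutex_;
  std::condition_variable available_;
  std::map<unsigned long, DecodedVob> ready_;
  unsigned long next_ = 0;
};

The priority scheme mentioned above belongs to whatever hands VOBs to the decoder threads; this component only worries about putting them back in order.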

The moral of the story:
  • Finding the right decomposition into independent subtasks needs to be done carefully based on detailed understanding of the domain. An obvious solution may not be the right solution.

Monday, April 18, 2005

Multithreading: Why bother?

So multithreading synchronization is hard and requires hardware support. How do all those existing multithreaded programs manage to work?

Answer #1 Someone got lucky. Doesn't it comfort you to know that the software flying your airplane might be working by accident?

Answer #2: To write thread-safe code you have to follow a different set of rules. Actually an additional set of rules, because all the old rules for writing good programs still apply.

Since single threaded code runs faster, is easier to write, and is easier to test than multithreaded code, why would anyone willingly go to all the effort necessary to write multithreaded code? Good question. The first decision that needs to be made when designing a multithreaded program is, "is this necessary?" If you can't come up with a compelling benefit for multithreading, go for the simple solution.

There are lots of bad reasons for multithreading, and only a couple of good ones. The good reasons I know of:

  1. The task is inherently multithreaded so a multithreaded solution results in simpler code; or
  2. The task can be cleanly decomposed into multiple sub-tasks that are highly independent; the independent tasks can use resources in parallel; and the benefits of this parallel usage outweigh the overhead of multithreading. (All three conditions must be true.)

Let me provide an example of the first case.

MPEG is a standard for encoding audio-video information. A stream of MPEG encoded data can contain many substreams. For example: an MPEG encoded movie recorded on a DVD might contain a single stream of video, two or three streams of video overlay (the subtitles in various languages); several streams of audio (the main audio track in different languages, etc. and the director's comments); and DVD navigation information to support fast forward, fast reverse, etc.

These substreams are multiplexed at a packet level. The overall data stream consists of a set of fixed-sized packets and each packet is part of a particular substream. You could have a navigation packet, two video packets, an audio packet, another video packet, a subtitle packet, and so on.

The substreams themselves have a rich internal structure. For example the video stream contains sequences of variable bit-length, Huffman encoded data fields. Suppose the video stream decoder has extracted the first five bits of an eleven-bit field when it hits a packet boundary; it would be a nightmare to attempt to save the video-decoding state, including the partially extracted field, and switch to a completely different context in order to be able to properly decode the audio packet that comes next.

Splitting the MPEG decoder into a main demultiplexing thread, independent decoding threads for each substream, and a mixing thread to manage the simultaneous presentation of the decoded substreams dramatically simplifies the design.

It is interesting to note that there are two synchronization hot-spots in the multithreaded version of the MPEG decoder. One is the point at which the demultiplexer passes a packet to the specific stream decoder for that type of packet, and the other is the point at which the mixer accepts the decoded substreams for integration and presentation. Everything between these two points can and should be coded as if the program were single threaded.


These synchronization hot spots should be separate components. A possible high level design would be:
  • Accept and demux input
  • Queue packets for decoder(s)[MT]
  • Decode substream
  • Queue decoded streams for mixer.[MT]
  • Mix and present substreams.

Multithreading issues should be addressed only in the two components marked [MT]. Everything else should be written as if it were single threaded (and protected accordingly.)
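
For what it's worth, each of those [MT] components can be as simple as a small thread-safe queue. Here's a sketch -- my names, written with std::thread-era primitives for brevity rather than any particular 2005 library:

#include <condition_variable>
#include <deque>
#include <mutex>

// The packet queue between the demuxer and a substream decoder.
// All of the locking in the pipeline lives in little components like this.
template <typename Packet>
class PacketQueue
{
public:
  void push(Packet packet)                  // called by the demuxer thread
  {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      packets_.push_back(std::move(packet));
    }
    not_empty_.notify_one();
  }

  Packet pop()                              // called by a decoder thread
  {
    std::unique_lock<std::mutex> lock(mutex_);
    not_empty_.wait(lock, [this] { return !packets_.empty(); });
    Packet packet = std::move(packets_.front());
    packets_.pop_front();
    return packet;
  }

private:
  std::mutex mutex_;
  std::condition_variable not_empty_;
  std::deque<Packet> packets_;
};

The decoder in the middle never sees a mutex; it just calls pop(), decodes as if it were the only thread in the world, and hands the result to the queue in front of the mixer.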

Friday, April 15, 2005

The moral equivalent of a mutex

In yesterday's post I used the phrase "The moral equivalent of a mutex." I claimed that it was not possible to write code that shares data between threads safely without one.

This prompted an anonymous response which cited Dekker's algorithm as an example of a software-only synchronization mechanism. I appreciate the response (even though I immediately rebutted it) because it prompted a train of thought about what the "moral equivalent..." is and why multithreaded code is so falupin' hard.

Mutex equivalents on Win32 include: CriticalSection, Event, Mutex, Semaphore, InterlockedIncrement, InterlockedDecrement, InterlockedExchange, and so on... Other OS's support some of these and have their own, unique variants with various degrees of arcanity (SYSV Semaphores, for example.) The point is that all of these objects are designed specifically to address thread synchronization.

Dekker's algorithm is interesting because it is an algorithm for implementing a critical section. I'd count it as the moral equivalent... with one caveat. It doesn't work unless there is an underlying hardware synchronization mechanism.

The algorithm contains the following code:
 
flags[i] = BUSY;
while(flags[j] == BUSY)
<SNIP>
<if you get here you have access to the resource>


The problem shows up in the following sequence of events:

Thread 0: flags[0] = BUSY;
Thread 0: while(flags[1] == BUSY) // false so thread 0 has access
Thread 1: flags[1] = BUSY;
Thread 1: while(flags[0] == BUSY) // flags[0] from cache is still FREE
                                  // so the condition is false and thread 1
                                  // also has access to the resource


I'm not saying that Dekker's algorithm is wrong. I'm saying that it contains an underlying and invisible assumption about how things are executed in the computer. In particular it assumes that operations on shared memory are atomic and immediately visible to other threads. If that assumption is correct then the algorithm works. Thus the algorithm reduces the problem of providing CriticalSection behavior to the problem of implementing that shared-memory property.
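
For the record, a version of Dekker's algorithm that states that assumption explicitly -- using C++11 std::atomic with its default sequentially-consistent ordering, which obviously wasn't available when this was written -- looks something like this sketch:

#include <atomic>

// Dekker's algorithm for two threads (0 and 1), with the shared-memory
// assumption made explicit by the atomics.
std::atomic<bool> wants_to_enter[2] = { {false}, {false} };
std::atomic<int>  turn{0};

void lock(int i)
{
    int j = 1 - i;
    wants_to_enter[i].store(true);
    while (wants_to_enter[j].load())
    {
        if (turn.load() != i)
        {
            wants_to_enter[i].store(false);   // back off...
            while (turn.load() != i)
            {
                // ...and spin until it's our turn
            }
            wants_to_enter[i].store(true);
        }
    }
    // caller now has exclusive access to the resource
}

void unlock(int i)
{
    turn.store(1 - i);
    wants_to_enter[i].store(false);
}

With plain bools and an optimizing compiler (or a write-back cache) the same code fails in exactly the way shown above; the atomics are what enlist the hardware.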

* * *

A programmer reading code has a mental model of how the machine works. Most of the time we use a very simple model -- things in our mental model happen sequentially in the order that they appear in the source code we are reading. Having this simple model is A Good Thing[TM] because it allows us to concentrate on what the program is supposed to be achieving rather than how it goes about achieving it.

The problem with this simple model is performance. The code may say:

for(int i = 0; i < 10; ++i)
{
  someFunction(i * k);
}


but the compiler may generate code that looks more like:


int j = 0;
do
{
  someFunction(j);
  j += k;
} while (j < 10 * k);


on many processors a literal translation of the second version will be faster than a literal translation of the first version -- so the language standards committees have given compiler writers freedom to provide the second version as a legal compilation of the first code.

If you observe the state of the system before this code executes, and after it completes, you can't tell the difference between the two versions. The only observable difference is that one version runs a bit faster.

The programmer gets to write code in a way that describes the algorithm most clearly (in his mind, anyway), and the processor gets to execute code that generates the desired result faster. Everybody is happy.

* * *

Multithreading changes the rules. Rather than observing the before and after states of the system, you now have to be concerned about every intermediate state that might be visible to another thread. A lot of discussions of multithreading present C source code and discuss the implications of an interruption occurring between each statement. The discussion of the incorrect algorithms that precedes the presentation of Dekker's algorithm uses this technique to identify the points of failure. This is a step in the right direction, but it's still not good enough.

Consider the following statement:

volatile int i;
a[i] = b[i] + c[i];

and what happens if "i" is potentially changeable by an outside agency (another thread, a memory mapped I/O, etc.) For example, suppose that before executing this statement i has the value 0, but sometime during the execution of the statement i takes on a value of 1. How many possible outcomes are there for this statement?

The answer surprises many people. There are 8 possible outcomes because the compiler is free to evaluate the three instances of i in any order it chooses to. To analyze an algorithm containing the above statement in a multithreaded environment you must consider all eight of these cases.

So all we need to do is break each statement down into phrases that can occur in arbitrary order and analyze the effect of an interrupt between any two phrases. Are we there yet?

Well, it depends on how many phrases you see in the following statement:

++j;

Assuming int j;, this probably compiles into a single machine language statement: inc [j] -- hence one phrase, right?

Nope. At the microcode level, this statement says: retrieve the value of j from memory; add one to it; store the new value back into memory location j. That's two phrases (why not three? because "add one to it" is internal to the processor and therefore invisible to other threads.)

So, we've gotten to the microcode level. We must be at the right level of analysis by now.

Sorry, to truly understand you have to throw in instruction pipelining, and cache (remember cache.) Once you take them into account, then your model of what really happens in the machine is complete enough to analyze the thread-safeness of the program.

Alas, pipelining and caching issues are truly beyond the control of the programmer, so the problem of ensuring thread-safeness appears to be unsolvable.

Except!

Thank goodness there's a way to tell the hardware to switch momentarily from its default anything-for-speed mode into a safe-for-the-silly-programmer mode. Every processor has at least one synchronization operation that does things like flushing and/or updating cache, locking the bus for a read/alter/rewrite cycle, etc. These operations tend to produce a dramatic slowdown because they defeat all the work that went into designing a cache and a pipeline, etc. to speed things up. The other problem is that on many CPU's the hardware guys decided these operations should be reserved for kernel mode, so enlisting the hardware's help may involve an OS call with the corresponding high-overhead context switch.
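
To connect this back to the ++j example: the way you say "safe-for-the-silly-programmer mode, please" nowadays is something like this little sketch (std::atomic here; InterlockedIncrement on Win32 amounts to the same thing) -- an illustration, not code from any of the programs above:

#include <atomic>

std::atomic<int> j{0};

void bump()
{
    j.fetch_add(1);   // one indivisible retrieve/add/store, bus lock and all
}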

In any case, I think this justifies Rule #1: Multithreading is hard.

Thursday, April 14, 2005

Multithreading considered

Peter said I should post this, so....

Hi Peter,

On 4/14/05, Peter Wilson wrote:
> Do you know of any books on threading in software design written at
> the level of Design Patterns?

Sounds like a great book. I want a copy! 8-)

There have been some interesting articles recently in the C++ journals, but I haven't seen any of the "newer thinking on threads" gathered into a book.

This is going to become more critical RSN as the multi-core chips hit the market. Maybe I should write a book!

Rule #1: Multithreading code is hard.
Corollary: If you don't think it's hard, your code is wrong! (witness Java synchronized)

Rule #2: If the hardware isn't involved at some point, it's wrong.
There are no software-only synchronization methods. This doesn't mean you have to lock a mutex every time you touch shared data. It just means that somewhere in any thread safe technique there has to be a mutex (or the moral equivalent.)

Rule #3: Don't try to cheat -- particularly not for performance sake.
Multithreading buys you performance through parallelism, not
through shoddy coding techniques. (remember the Double Checked
Locking Pattern? (and see my blog for TCLP))

Rule #4: You need a model.
If you wing it, or play it by ear, you'll get it wrong (I'll put money on it.) Separate the thread-safeness from everything else and get it right in isolation. Then use encapsulation to keep "the next guy" from cheating.

Rule #5: Testing multithreading code is harder (and more important) than writing it in the first place.

how'm I doing?

Dale

Wednesday, April 06, 2005

Joel on Hungarian

I just got around to reading the third installment of Joel On Software's essays on the new FogBugz release.

In it he extolls the virtues of Hungarian notation. I was somewhat taken aback, since Joel usually makes so much sense and Hungarian is such an abomination, but then I noticed the context.

Hungarian notation was originally developed to overcome a deficiency in the C language and in C compilers -- weak type checking. Using HN you could do the "type checking" by eyeball rather than relying on the compiler. Once the language and compilers got smart enough to complain when you tried to assign the address of a SnaggleWhomp to a pointer to DeedleBlang, the justification for Hungarian disappeared -- leaving only its significant drawbacks. artThe adjMost advImportant prepOf adjithinkThese nounsubjDrawbacks verbWas adjUnreadable nounobjCode.

However, the reason Joel gives for valuing Hungarian is that the home-grown Thistle compiler they use at Fog Creek has trouble compiling VB Net without it. Aha-- once again you have a defective language and a deficient compiler to compensate for and Hungarian rides again!

Tuesday, April 05, 2005

Supercomputers and tapestry weaving

There's always been a strong link between computers and weaving, but a recent New Yorker article looks at the relationship from a different perspective.

It's a long article so don't worry that the computers don't show up for a while.

Thursday, March 31, 2005

Hybrid User Interface

I really like my new Escape Hybrid, but I've started to notice some interesting UI issues:

I was stopping at a traffic light yesterday. I'd been driving a while so everything was warmed up. The gas engine turned off as I dropped below 20MPH -- as expected.

If the radio's not on it gets eerily quiet when you stop. Cool.

Then I took my foot off the brake. I noticed something unexpected. The car started to creep forward, just like "normal."

Hmmm...

In a normal car, the creep happens because the gasoline engine has to keep running. The torque "leaks" through the automatic transmission's torque converter. But for a hybrid the gas engine is off, and an electric motor doesn't really need to keep spinning. In fact I'll bet the electric motor was at a dead stop, too, when my foot was on the brake. Where is the creep coming from?

I'm betting that it's designed into the system to comfort those of us used to an automatic transmission. It reinforces the concept that a hybrid is "just like a normal car, only more efficient."

I wonder how much time Ford wasted getting this behavior to feel right. Personally I'd just as soon my car stayed where I put it unless I explicitly tell it otherwise.

Tuesday, March 22, 2005

Cross-Programmer Code

A lot of my programming work is intended to be portable across platforms where a platform is defined as a combination of operating system, computer architecture, and development tool set (compiler, etc.). ACE is a prime example of what it takes to achieve this goal.

However,

Even more important than platform portability is programmer portability. It is highly unlikely that any significant programming project will be developed and maintained by a single programmer for the life of the project. Every time a new programmer gets involved in a project the source code has to be "ported" into that programmer's model of the language.

Every programmer carries around a lot of mental baggage. Some of us are fresh-out-of-school apprentices -- lacking the pragmatic experience of a seasoned pro. Some of us are old fogies with fond memories of FORTRAN COMMON (who strive to recapture the glory using the Singleton pattern (chuckle.)) Some of us have been programming in C++ so long that we forget how arcane some of the "obvious" idioms are.

Fortunately, unlike computer architectures, compilers, etc., the port can work both ways. The code can be adapted to the understanding of the new programmer, or the new programmer's understanding can be adapted to the code. In fact there is usually much more of the latter adaptation than the former, although I have certainly been involved in situations in which it was easier to rewrite the code than to attempt to understand it.

Recognizing how often programmers must adapt to unfamiliar code, and vice versa, we should make an effort to write programmer-portable code. With that in mind, I propose the "five programmer test."

Given a language feature or coding idiom, create a sample of code using that technique.

Select five programmers with skills ranging from average to superstar (below average programmers should be dumped on someone else's project.) Ask each of them to explain in English what the code does and to describe any limitations, consequences, etc. that need to be considered when using the technique.

If all five of them agree, then it's ok to use the technique.

If at least three of the five agree (and one of them is the superstar) then it's ok to use the technique, but it requires a comment to clarify the usage.

If fewer than three programmers understand the technique, or if any programmer "understands" the technique, but her explanation of what it does is way off base -- find another way to achieve the same goal that does pass the five-programmer test.

Tuesday, March 15, 2005

Another Interview Question

Another good interview question is, "Once you fix all the syntax errors and get a clean compile, what type of errors are most likely to still be in your code?"

Wrong answer: "None." End of interview. Have a nice life.

Most common answer: "I don't know."

Followup question: "So how could you find out?"

When I first asked myself this question (shortly after reading Writing Solid Code) my solution was to create a "programmer's diary." This was a background program that I could pop up with a hot key. It opened up an edit window into which I could paste or type information. It date/time stamped the entry then appended it to a sequential file and disappeared.

To use it, I'd select/copy code containing an error, pop open the edit box and paste it, then annotate it to explain the error. I did not do any further analysis in-line. Instead I went back to whatever I was doing -- fixing the problem or running more tests or whatever...
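
The popup and hot-key plumbing is long gone and was platform specific anyway, but the core of the tool was never much more than this sketch (a stand-in, not the original program): read a note, stamp it, append it.

#include <ctime>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

// Append one date/time-stamped note (read from stdin) to the diary file.
int main()
{
    std::string note((std::istreambuf_iterator<char>(std::cin)),
                     std::istreambuf_iterator<char>());

    std::time_t now = std::time(0);
    char stamp[32];
    std::strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", std::localtime(&now));

    std::ofstream diary("diary.txt", std::ios::app);
    diary << "=== " << stamp << " ===\n" << note << "\n";
    return 0;
}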

After capturing data for about a month, I analyzed the file. I categorized the types of errors into classes like:
  • uninitialized or improperly initialized variable;
  • sense of a condition is backwards;
  • failure to release resource when returning from a function;
  • difficult to use, difficult to understand, or easy to break feature of the language (think "goto" although I'd already stopped using those.)

Then for each class of mistakes I asked myself:
  • What changes can I make to my coding style or work habits to prevent this type of error?
  • What changes can I make to detect this type of error sooner?
  • What type of test would detect this type of error?

In some cases this resulted in changes in my coding style. For some cases I added new types of tests to my set of tools. In others, just the increased awareness of my error-of-choice was enough to help me avoid the error.

I continued to use the diary for a couple of months afterwards, and yes, there was a noticeable reduction in the types of errors I had specifically targeted. Without benefit of statistical analysis, I also think there was a significant overall reduction in uncaught-by-the-compiler errors.

The downside of all of this is that when I get into an argument (oops, I mean a reasoned discussion) about programming style issues I tend to be dogmatic about my style. That's because I think it's based on empirical evidence rather than aesthetics or arbitrary preferences. This would be a lot more valid if I had used the diary recently. My empirical evidence is from sometime before 1995 -- and of course it is specific to one programmer. Programming has advanced considerably since then -- in particular exceptions and patterns like RAII have changed the way I program. I wonder if it's time to fire up the old diary program.

Saturday, March 12, 2005

This one's for Jonathan

In the lobby where visitors sign in at Google, there's a Naked Juice vending machine.

The Computer History Museum

We visited the Computer History Museum yesterday. Lots of interesting artifacts including an Enigma machine, a Cray 1, etc. The item that captured my attention, though, was a white teapot. In fact, THE white teapot. If you've seen the test images from any 3D graphics program, you already know what the teapot looks like. And it does.

Thursday, March 10, 2005

Do you know the way to San Jose?

Do airplane trips ever just work? I mean have you ever gotten to the airport on time, sailed through security, found a comfortable seat next to a pleasant companion, arrived at your destination feeling refreshed and rested, and had your luggage make it, too?

Well...

Tina and I just flew to San Jose to visit Peter, and the perfect plane trip didn't happen (one more time.)

Let’s see,

  • My computer fell out of the back of Dave's van as he dropped us off at the airport (nothing damaged, apparently.)
  • When we checked our bags, they took my picture ID and wandered off with it. When I asked what was going on, they told me my name was on a "watch list." This must be the same evil Dale Wilson who is behind on his alimony payments and stiffed an eye-care place somewhere in Illinois. Kind of spooky knowing I have an evil twin.
  • Next was Tina's turn with security. "Is this your bag, ma'am?" said the TSA guy. "Yes," replied Tina. "I'm going to have to ask you to open it up." By then I was through security myself, so I didn't hear the details, but after I stood around for quite a while, Tina finally made it through. Apparently they confiscated a vicious looking nail file and were deeply suspicious about the dangerous drugs that were Not In Their Original Ibuprofen Bottles!
  • So the plane was a little bit late taking off.
  • And we landed in Tulsa. Most of the passengers got off, but those of us who were heading on to Phoenix on the same flight rearranged ourselves and got comfortable for the next leg. One of the stewardesses stopped by to admire my beard. She said her husband also had a long beard, and was giving Tina some really unfortunate suggestions involving braids and tiny lights, when the PA system announced we all had to get off the plane and go to another gate. There was vague mention of a "maintenance issue."

    When we got to the new gate, there was an enormous crowd. Much more than a planeful. It seems that they were putting us on a plane that was supposed to go to Dallas. Since that plane was now going to Phoenix instead, the Dallas people were being sent to yet-another-gate where presumably they would eventually be put on their own replacement plane to replace the one we had just borrowed from them.

    This being Southwest, we were given new boarding passes that let us board first -- even before "families traveling with small children." If you want to get on a Southwest airplane first, get yourself a mauve boarding pass (honest they called it mauve!)
  • So that got us on the plane in Tulsa. We were 45 minutes late, but we were on our way. Nothing much went wrong during the flight to Phoenix (unless you count peanuts, but they're normal for a Southwest flight.) As we made our final approach into Phoenix they announced that Southwest was holding all connecting flights, and would anyone going to San Jose go directly to gate C2. Of course, our plane docked at the far end of the D concourse, but it coulda been worse. We hiked on over to C2 in time to walk right onto the plane shortly before they closed the doors. Things were looking up -- this plane was half empty so we had no problems finding seats.
  • Oops. Spoke too soon. As we arranged ourselves I realized that I had left the book I was reading on the other plane. "Do you think they'll let me go back and get it?" I asked Tina. Right.
  • So on to San Jose. We made up the time somehow and actually got to San Jose on time! The only unfortunate part of this leg was when the stewardess decided to sing as we taxied in to the airport. I guess it was endearing. A personal touch sort of like the early episodes of American Idol. "A bit pitchy," said Randy, "but not too bad." "I don't think this was a good song selection for you," said Paula, "but you've got a, um, loud voice." "I've heard better performances from the taxi driver that brought us to the hotel." said Simon.
  • Home free, eh? Our baggage made it! Calloo, Callay!
  • Then we got to Peter's car, and I said -- where's my computer? The briefcase containing my computer--which I had carried on, so I couldn't blame the airline--wasn't there! In near panic I headed back to the baggage claim area. Fortunately it was sitting on a chair waiting for me. Airport security hadn't confiscated it as an unaccompanied bag -- probably because there was a woman there watching it. She said she saw us leave it and couldn't figure out what to do, so
    she was waiting a bit to see if we came back before calling security. Thank you, thank you, thank you.

And so we're in San Jose. I do so love traveling.

Tuesday, March 08, 2005

Guidance toward the-one-true-path.

Paul Colton wrote an interesting article for Byte about XAML. One sentence in the article was a real attention grabber:
XAML has many strengths, and Microsoft's ability to educate the marketplace and guide the .NET developer community may ultimately tip the balance to XAML.

Wow. I'm *SO* glad Microsoft is willing and able to educate and guide me. ;->

Monday, March 07, 2005

Thoughts Meandering From Interview Questions to Semantic Compilers.

I used to do a lot of interviews for programming positions. One of my favorite interview questions is:

What's the best book about programming you've ever read? What book should every programmer read?

(I know, that's two questions --- but hey, it's my interview (and my blog) so I make up the rules.)

There is no "right" answer to this question, but there are a couple of wrong ones.

The worst answer is "I haven't read any programming books recently." The applicant can recover from this if she goes on to explain that she reads programming blogs and magazines instead because the information is more current, but barring such a last-minute save, a negative answer is an interview stopper.

The next worst answer is a textbook required by a college course. If the interviewee is more than a couple of months out of college and hasn't learned anything since, well....

Responding with something like "How to write [fill-in-the specific-application] programs in [fill in the language] on [fill in the platform]" earns a barely passing grade. The interviewee is on the bubble and better come up with a reason why I should keep talking to them really quickly!

Oh yeah, and any book containing "for dummies" in the title is instant death <chuckle/>.

So, how would I respond to the question?

I might mention some recent read (a la "The Pragmatic Programmer") or I might fall back on one of the very few books that have had a profound impact on the way I program.

Looking back, there have been a lot of programming books, but very few that were life-changing. "Elements of Programming Style" counts (yes, that was a long time ago, but it's still worth reading.) Most programmers know about "Elements.." and it is actually used as a textbook in some college courses (which contradicts the "second-worst" judging criterion above. hmmm.)

Another book that had a major impact was "Writing Solid Code" by Steve Maguire. I'm not sure whether this was that good a book, or whether it just happened to be the right book at the right time for me. I should probably re-read it if I could find my copy.

One concept I acquired from "Writing Solid Code" is the idea of a compiler that checks semantics rather than (or in addition to) syntax. Wouldn't it be great if the compiler would tell you: "Yes, that's legal C++, but it's really not what you should have written." Actually, over time, compilers have gotten better at producing this type of warning message. By flagging unused variables, statements with no effect, and such atrocities as:

if (a = b)

modern compilers bring your attention to possible semantic problems. This is a good thing.
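For example, compiling something like this with warnings enabled (say, g++ -Wall) gets you two helpful complaints:

int check(int a, int b)
{
    int unused = 42;    // warning: unused variable
    if (a = b)          // warning: suggest parentheses around assignment used as truth value
        return 1;
    return 0;
}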

But wouldn't it be nice if compilers could go further? If they could warn not only about obvious nonsense, but also questionable practices, bad algorithms, etc. we could write better programs faster.

One problem, of course, is that my "really cool technique" is your semantic abomination, and vice versa. In recent discussions here at work we haven't been able to agree on where the & should go in:

int & a;

Some say it goes with the a because you can say "int &a, b, *c;" -- to which I say: not in MY semantic compiler you can't! Some say it goes with the int because it's "part of the type" -- to which I say: but how about const and static? Aren't they part of the type?

In case you haven't noticed, I think that & and * are first-class citizens and deserve to stand on their own rather than being piggybacked on another token -- but that's just my opinion and it is certainly subject to debate. It's also completely beside the point of this discussion.

Lack of agreement about what constitutes a good program makes it very difficult -- nay impossible -- to come up with a one-size-fits-all semantic compiler. So how about a compiler that "learns" your programming style, and flags departures? If you always indent by two, but happen to indent by four in one place -- well maybe that indicates the code is wrong. Better yet, if you always use braces (as you should (chuckle)) then an if statement with no braces should be flagged as an "error."

Hmmm. How well does this play on a team programming project?

Of course these are all "small scale" issues. The real benefit comes when the compiler can detect, for example, that you're doing a linear search of a (large enough) sorted list and suggest that a binary search would be a good idea here, or can look at an object full of get and set methods and advise you that you've really blown the encapsulation and should be writing domain-meaningful methods instead.
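To make the first of those concrete, here's the sort of substitution I'd want a semantic compiler to suggest (a sketch; the container and element type are arbitrary):

#include <algorithm>
#include <vector>

// What I wrote: a linear scan of a sorted vector.
bool contains_linear(const std::vector<int>& sorted, int value)
{
    return std::find(sorted.begin(), sorted.end(), value) != sorted.end();
}

// What the semantic compiler should nag me about: the data is sorted,
// so a binary search does the same job in logarithmic time.
bool contains_binary(const std::vector<int>& sorted, int value)
{
    return std::binary_search(sorted.begin(), sorted.end(), value);
}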

STL does make some interesting strides in that area by declining to implement "unwise" methods on some containers. Java was also a step in this direction -- unfortunately a lot of the decisions that went into Java were one man's opinion, so some valuable techniques were "tossed out with the bathwater," and some warts like uninitialized references made it into the language.

There's much more to be said on this topic, but this entry is getting too long. To be revisited...

Friday, March 04, 2005

The kind of person that keeps a parrot.

Tina was looking at my blog and pointed out that not everyone was familiar with the Mark Twain quote. Although anyone who reads parrot mailing lists surely knows it, for the rest of you, click here.

Pet Peeve n+1: Thinking outside the box

The phrase "Think outside the box" ranks right up there with "Have a nice day :-)." and just barely below fingernails scraping on blackboard.

Most people use the phrase to mean "ignore the rules."

Trucker #1: Why are you stopping?
Trucker #2: Look at the sign on that overpass.
Trucker #1: Yeah, it says "Clearance 11 ft. 6 in." So what?
Trucker #2: The trailer we're pulling is 12 feet tall.
Trucker #1: (looks around) I don't see any cops. Let's go for it.

See. Trucker #1 is thinking outside the box the way most people use the phrase.

The origin of the phrase is a classic logic puzzle. Given nine points arranged in a 3x3 grid:

X X X
X X X
X X X

Draw a continuous series of four line segments that passes through each point exactly once.

The solution (which you know, right? (if not, there's a hint below)) involves extending the lines beyond the "borders" of the array. Hence, "Thinking outside the box."

You don't solve this puzzle by thinking "outside" the box. You solve it by realizing that there is no box outside of which to think! [Look again, do you see any box?]

Understanding the true constraints on a problem and finding creative solutions within those constraints -- good. Ignoring the constraints that happen to be inconvenient -- bad.


All of which applies to programming, too!

Hint:


X X X x
X X X
X X X
x

Wednesday, March 02, 2005

The Sins of Intel

Another entry in my Hack series. This one's a threefer:

A lot of people complain about the Intel architecture. It must have been really easy for hardware designers to build systems around the early Intel chips, 'cause you'd never find a software developer praising their design. Early Intel chips (and some not-so-early chips) were much harder to program than they needed to be.

Sin #1: A + (-B) != A - B
One of my favorite Intel sins is that the engineer(s) who designed the early chips did not understand binary arithmetic! The way the chip was designed 5 + (-1) [that's five plus negative 1] did not produce the same answer as 5 - 1 [five minus one]! That one deserves an extra exclamation mark!

To explain:
On a four bit machine 5 + (-1) looks like:

  0101
+ 1111
------
1 0100

That 1 hanging out there is a carry bit. It is normal for a subtraction to generate a carry (it means you did NOT have to borrow!)

Unfortunately on Intel the flag that ends up holding that 1 is called the carry/borrow flag. If you add, it holds the carry. If you subtract it holds the borrow.

That means 5 + (-1) is four with a carry, whereas 5 - 1 is four without a borrow, but borrow and carry are stored in the same flag, so to do-the-right-thing with the carry bit you have to know how you got here.

The result is a lot of really sweet binary arithmetic techniques were just a little bit harder and messier (and hence a little bit less sweet) on the Intel machines.
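If you want to check the arithmetic for yourself, here's the four-bit example above, simulated in C++ with masks (we can't see the real carry flag from here):

#include <cassert>

int main()
{
    unsigned five    = 0x5;           // 0101
    unsigned neg_one = 0xF;           // 1111 -- minus one in four-bit two's complement
    unsigned sum     = five + neg_one;
    assert((sum & 0xF) == 4);         // the four-bit answer is 4...
    assert((sum >> 4) == 1);          // ...and there's a carry out of the top bit
    return 0;
}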

Sin #2: Segment granularity

When Intel discovered they needed to go beyond the 16 bit address space of their early processors they added segment registers. This was a good move because it preserved compatibility with older software -- or at least made it relatively easy to migrate to the new processors.

The sin comes when you consider how the segment registers were factored into the address calculations. The segment registers have 16 bits -- making them easy to manipulate on a machine that's built around a 16 bit architecture, but in order to achieve the goal of extending the address space, the segment registers have to address units of memory larger than a single byte. Intel chose a 16 byte addressable chunk for the segment registers. That means the segment register is shifted four bits to the left when it takes part in the addressing calculations. The result is the 20 bit (one megabyte) address space that hampered the Intel processors for years! (Of course IBM and Microsoft managed to hack that into the 640K limit, but that's a different transgression.)

Suppose Intel had shifted the segment register by 8 bits rather than 4 bits. The downside is a granularity of 256 bytes rather than 16 bytes (big deal -- not).

Upsides:

First of all, calculating the values to go into segment registers would have been much easier, because it's a lot easier to move things by 8 bits than by four bits on an Intel chip. But that's only a minor advantage compared to the real plus, which is that the address space would have been 24 bits (16 megabytes rather than 1 megabyte). Admittedly 16 megabytes seems small today, but it took almost 10 years before we achieved that state. Countless man-centuries (yes, and woman-centuries) were poured down the bottomless pit of extended memory and expanded memory hacks trying to compensate for Intel's short-sightedness.
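The two address calculations, side by side (a sketch with 16-bit segment and offset values):

// Real-mode addressing as Intel shipped it: segment << 4, a 20-bit space (1 MB).
unsigned long intel_address(unsigned segment, unsigned offset)
{
    return ((unsigned long)segment << 4) + (unsigned long)offset;
}

// The same calculation with an 8-bit shift: a 24-bit space (16 MB),
// at the cost of 256-byte segment granularity.
unsigned long what_might_have_been(unsigned segment, unsigned offset)
{
    return ((unsigned long)segment << 8) + (unsigned long)offset;
}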

Sin #3: The 80286
Ok, so they forgot to figure out how to get from protected mode back to real mode (D'oh!) This inspired the truly byzantine technique of booting the whole damn machine to do a context switch back to the OS. Kool, eh!

Monday, February 28, 2005

Stealth wins

I finished weaving the "stealth scarf" (see previous discussion) this weekend and just as I suspected the pattern is pretty much invisible. If you hold it at just the right angle in just the right light you can see it, but that's not what I was hoping for.

For my next bright idea.....

Wednesday, February 23, 2005

The AAM hack

Background
I was working on a graphics editor for the Zenith Z100 computer. This was a shrink-wrap product intended to be distributed by the thousands (well millions actually but Zenith didn't sell enough Z100's (sigh))

The Z100 had 640 x 225 x 8 color graphics capability (or 640 x 450 if you used an undocumented video mode) (Compare and contrast to CGA.) The Z100's graphics were memory mapped, but the layout of the memory was somewhat arcane. Converting an (X,Y) pixel address to a byte and bit address involved some calculations, including a divide by 9.

The problem

In a word: speed. One of my test patterns took 40 seconds to render to the screen. This is painfully slow -- probably slow enough to kill the project.
Profiling the C program showed that writing pixels to the screen dominated performance of the program. Profiling the pixel writing code showed that the divide by nine was dog-slow. This was on an Intel 8086 processor (an 8 bit processor with delusions of 16 bitness)

The Hacker
Dale (that would be me) Wilson

The Hack
I can't claim exclusivity for this hack. Other people found it and later it became pretty common, but I can claim that I found it myself.

The DIV instruction on the early Intel processors was a real cycle burner. Not only that, but it took a lot of setup and tied up a lot of registers on a machine that is extremely register-poor. It had to go. The good news is that the DIV instruction was a 32bit/16bit divide which is way more than necessary for this problem. In fact an 8 bit/4 bit divide would be sufficient.

My first approach to solving the problem was to implement Divide-By-Nine as a series of shifts and subtracts. Since the divisor was hard-wired the division could be done in straight line code. No branches, no testing the carry bit (oops, this is Intel, I mean the borrow bit). The result was about ten times faster than the DIV based code. The test pattern now showed up in four seconds vs the original 40 seconds. Good, but four seconds is still a bit sluggish.
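For the record, in C++ rather than 8086 assembly, the shift-and-subtract version looked something like this (a reconstruction, not the original code). Because the divisor is hard-wired at nine, the restoring division unrolls completely:

// Divide an 8-bit value by 9, returning the quotient and remainder.
unsigned div9(unsigned n, unsigned& remainder)   // n is assumed to be < 256
{
    unsigned q = 0;
    if (n >= (9u << 4)) { n -= (9u << 4); q |= (1u << 4); }
    if (n >= (9u << 3)) { n -= (9u << 3); q |= (1u << 3); }
    if (n >= (9u << 2)) { n -= (9u << 2); q |= (1u << 2); }
    if (n >= (9u << 1)) { n -= (9u << 1); q |= (1u << 1); }
    if (n >=  9u)       { n -=  9u;       q |=  1u;       }
    remainder = n;
    return q;
}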

Studying the Intel instruction set revealed all sorts of oddities. One interesting one was the AAM gap. The AAM instruction itself was a somewhat dorky instruction intended to be used as one step in multiplying BCD numbers. The acronym stands for ASCII Adjust After Multiply. Once you get past the noise about BCD multiplies (who multiplies BCD numbers? Maybe COBOL programmers (grin)) and look at what it actually does to the bits, you discover that it's really a divide instruction. It divides the AL register by ten and puts the quotient in AH and the remainder in AL.

The opcode for AAM is D4 0A. But looking at the Intel instruction set docs reveals that AAM is the only instruction with D4 as the first byte. That seems wasteful. And it is interesting that the second byte happens to be an "A" (i.e. a ten). A little experimenting (can you say self-modifying code (with a straight face?)) reveals that AAM is really an 8 bit "divide immediate" instruction. If you put a 09 in the second byte it divides by 9. If you put a zero there, you get a divide-by-zero fault!
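Modeled in C++ (register effects only -- the flag settings are ignored here), the two-byte opcode D4 imm behaves like this:

// AX comes back with the quotient in AH and the remainder in AL.
unsigned aam(unsigned char al, unsigned char divisor)   // divisor is 0x0A for the documented AAM
{
    unsigned char ah = al / divisor;                    // quotient
    al = al % divisor;                                  // remainder
    return ((unsigned)ah << 8) | al;
}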

Back to the write-the-pixel fast algorithm. Out comes the shift-and-subtract sequence. In goes an AAM9 and voila, the render time is significantly under a second. Life is good. Ship the product. Everyone sees what a marvelous machine the Z100 is; Zenith rules; and we all become rich-n-famous (oops--except for that last sentence (sigh.))


Ok, judges. What's the score for this one on the sleazy(0) to righteous(10) scale?


Extenuating circumstances

I left one point of information out of the discussion above. I left the shift and subtract code in the program. At startup the program tested to be sure AAM9 did the right thing. If the test failed, it fell back to shift and subtract.

My Rating
Since this is my own hack it's hard for me to be objective, but without the startup check I'll give it about a 3. Using undocumented features of the processor in commercial software is bad news. With the check, however, I'd go as high as a 6. After all, if success in the market requires speed and this is the only way to get the speed...

As it turns out, however, this wasn't the only way to get the speed. In a later version of the program I switched from thinking about the screen as a collection of pixels to thinking about it as a collection of horizontal lines (some of which were very short.) This gained almost as much of a speed increase as the switch from shift-subtract to AAM9, and it didn't involve any undocumented ops.

And one last point.
You may wonder why I didn't use the "normal" technique of rendering into an off-screen buffer then blitting to the actual video buffer. Good question, and the answer comes back to available memory. Zenith sold Z100's with as little as 128K bytes of RAM for programs + operating system. (The video buffer was separate.) A screen buffer takes 54K, which means there just wasn't enough RAM available to do an off-screen buffer right.

Monday, February 21, 2005

The FNO Hack

Background
Like the previous hack, this took place on a Honeywell 6000 computer running GCOS. We had an application that used a random access file for persistence in a high traffic environment (~50 allocations/second for sustained periods of time.)

To understand the hack you need to understand how floating point math was implemented on the Honeywell 6000 machine.

The H6000 worked with floating point numbers in the EAQ register where E is an eight bit exponent register and AQ is a double-word (72 bit) general purpose register used as a mantissa in this case. Floating point operations did not always produce normalized results, so a special instruction, FNO, normalized the current contents of the EAQ register.

Floating point normalization
Consider Avogadro's number = 6.022 × 10**23. Another way to express this is 0.602 x 10**24. The first version is normalized (base 10). The second is not. Notice the loss of precision. Binary normalization works the same way. To normalize a number you shift it left to get rid of leading zeroes and adjust the exponent appropriately.

The problem
The challenge was to control the allocation of blocks in the random file. Remember memory is tight, so a free list is not a good solution. Instead we used an allocation table with each bit corresponding to a block in the file.

How can you find an available block as fast as possible?

The hacker
Bob ("If you need comments to understand my code, you shouldn't be reading it!") Miller

The hack
An important decision was to represent available blocks as one bits and used blocks as zero bits. This allowed testing 72 bits at a time when looking for an available block by loading the AQ register and branching if the result was non-zero. Thus the algorithm was:

loop:
load AQ tablepointer (auto incremented by 2 words)
jump if zero to loop
and now the hack:
load the E register with zero
FNO (floating point normalize)
copy the E register to a general purpose register.
Add 36 * the word offset in the allocation table

to get the available block number.
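A modern sketch of the same trick, using 64-bit words instead of the H6000's 72-bit AQ: the first free block is the word offset times the word size plus the count of leading zero bits -- which is essentially the shift count FNO computed for free:

#include <stdint.h>

int first_free_block(const uint64_t* table, int words)   // a one bit means a free block
{
    for (int w = 0; w < words; ++w)
    {
        uint64_t bits = table[w];
        if (bits == 0)
            continue;                        // no free blocks in this word
        int leading_zeros = 0;               // a count-leading-zeros intrinsic would do this in one instruction
        while ((bits & (uint64_t(1) << 63)) == 0)
        {
            bits <<= 1;                      // the software equivalent of FNO's left shift
            ++leading_zeros;
        }
        return w * 64 + leading_zeros;       // the available block number
    }
    return -1;                               // the allocation table is full
}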




Rate this hack from 0 (slimey) to 10 (righteous).



Important factors in my rating of this hack are:

Although it uses an instruction for "the wrong purpose," if you read the description of the instruction in terms of how it manipulates bits, it does all the right things. In other words, as long as Honeywell doesn't decide to change how they represent floating point, the hack should continue to work.

This is a high use function that has a direct and visible impact on performance.

Rating
I give it a 9 out of ten (If Bob had believed in comments it might actually be a 10).

Friday, February 18, 2005

The IOCW hack

Setting the scene

The time is the early 1970's. The computer, a Honeywell 6000 running the GCOS operating system. Some characteristics of this system:

Addressing space was 18 bits, but fortunately the addressable unit was a four byte word so addressable space was one megabyte (where a byte has nine bits, but that's irrelevant). The OS had virtual memory but individual processes were still limited to the 1M space.

The system had a separate I/O processor (known as the IOC) that handled all input and output. The IOC was programmable using a very limited instruction set (I/O control words (IOCW)). To perform I/O you created one of these programs in your memory and asked the IOC to execute it. You could do scatter/gather I/O using IOCWs. You could also branch (but not conditionally) from one set of IOCWs to another.

When an application requested I/O the OS validated the IOCW's then started the I/O. The OS handled interrupts and status coming back from the IOC, then passed the results (and a limited form of interrupts) back to the application.

The Problem


Memory is at a premium, and dynamic memory allocation is tough in assembly language. Where do you put the IOCW's?

The Hacker

Jim Mettes (who happened to be my boss at the time.)

The Hack

Put an IOCW directly in the read buffer. The IOC fetches the instruction from the buffer first, then reads the data into the buffer -- overwriting the IOCW which is no longer needed. Four bytes (the size of an IOCW) saved!



At this point you should decide for yourself where this hack rates on the slimey to righteous scale. Zero is ultra-slime and 10 is mega-righteous.



The rest of the story.

Consider what happens when a recoverable I/O error happens. The IOC reports the error. The OS determines that it is retryable and asks that age-old question: "Abort, Retry, or Ignore?" In this case the question goes to the console operator, not the person running the program. "Retry" says the console operator because that's what his procedure manual says to answer.

The OS tells the IOC to try again. The IOC refetches the IOCW -- but it's been overwritten by the data from the previous attempt. Remember the point about the OS validating the IOCWs before issuing the I/O request? Guess what the designers of the OS forgot to do during the retry? If you were lucky you ended up with a core dump of your application. If you weren't quite so lucky the system administrators ended up with a core dump of the whole machine and came looking for your head!

The Score

I give it a 1. High danger, low payoff. And how do you tell your boss about it?

Wednesday, February 16, 2005

Thought #2 for the day

Some code idioms are inherently bug-prone. High on my list of offenders:

abc()->widget();


abc() presumably returns a pointer to an object that has a widget method. But what if it doesn't have an object to point to and hands back a null pointer instead?

Programmer: Doctor, it hurts when I do this.
Doctor: So don't do that!


Now if abc returned a reference rather than a pointer....
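Something along these lines (a sketch -- the class names are made up): with a pointer, the null check is every caller's problem and easy to forget; with a reference, the callee has to face the "no object" case before it returns:

#include <stdexcept>

class Widget { public: void widget() {} };

class Abc
{
public:
    Abc() : widget_(0) {}
    Widget* find_widget() { return widget_; }   // may return 0 -- easy to forget to check
    Widget& get_widget()                        // never "null"; complains immediately if there's no widget
    {
        if (widget_ == 0) throw std::runtime_error("no widget");
        return *widget_;
    }
private:
    Widget* widget_;
};

void caller(Abc& abc)
{
    if (Widget* w = abc.find_widget())   // the check that abc()->widget() skips
        w->widget();

    abc.get_widget().widget();           // no null check possible; if there's no widget you hear about it at once
}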

Dale

Fragile programming

There was a time when I practiced defensive programming. The theory was that code should be able to handle anything thrown at it.

What a bad idea!

If you're writing a square root function and someone hands you a negative number, DON'T fix it for them. Abort, throw an exception, trigger an assert, whatever! Just be sure that they know as soon as possible that they've done you wrong. (A compile-time error would be ideal.)

This rant was triggered by code throughout ACE and TAO that looks like:

if (do_some_function () != 0)
return -1; // this should not happen

(Yes, the comments are really there in the code.)

Adding an ACE_ASSERT (and the missing brackets which should have been there in the first place) to one instance of this "idiom" revealed a longstanding design error in ACE's handling of Thread Specific Storage.

I'd rather work with fragile code than helpful code!

Just to be thorough, the code should be:

if (do_some_function () != 0)
{
ACE_ASSERT(false);
return -1;
}

not:

ACE_ASSERT (do_some_function () == 0);

because ACE_ASSERT, like assert, can compile away to nothing in a release build -- and it would take the call to do_some_function() with it.

Tuesday, February 15, 2005

Relativity values

In thinking about historical hacks I realized I needed to set the scene to make some of them understandable.

This started me thinking about the computing world and culture then vs. now. Everybody knows about the dramatic drop in price of electronics, but I'm not sure everyone has a gut feeling for the magnitude of the change. In fact, I often forget, and I lived thru it.

So: thought for today.

When I started working as a programmer, if I saved my entire salary (no food, no clothing, etc.) I could buy the computer I worked on in only 333 years.

Today I could buy a much more powerful machine every other day!

That helps to explain why we were willing to invest months and sometimes years of programmer time for relatively small gains in computer time. Thereby motivating hacks...

Friday, February 11, 2005

The Hack Spectrum

Near the end of an otherwise excellent presentation on the boost shared pointer (boost::shared_ptr), Jonathan showed a slide with the title "Tricks." This slide explained how to create a smart pointer to a stack based or static object. This immediately raised my "hack"les.

Pointing a smart pointer at an auto variable violates the semantics of the pointer. The fact that it is syntactically correct is not an excuse. In fact I consider this a defect in the design of the smart pointer -- a well designed smart pointer would not allow such nonsense. [Note: what constitutes a well designed smart pointer [IMO] is another topic that I might get into later...]

Jonathan defended the trick by saying "as long as you know what you are doing, it's ok." Alas, that's not correct. A more accurate statement would be: "As long as you and every programmer who touches the system either before or after you is aware of all of the implications of this trick, then it's not quite so bad." What makes this hack so insidious is that it doesn't break your code; it breaks mine, months or years later. What's worse, once I find the true cause of the "bug" in my code I have lost confidence in the reliability of the smart pointer. This means that every time I use a smart pointer I have to consider and either eliminate or allow for the possibility that some programmer has violated the semantics of the pointer somewhere in the system.

In Jonathan's defense, the technique he described comes straight from the Smart Pointer Programming Techniques page on the boost web site. My quibble is not with Jonathan, but with the author(s) of boost::shared_ptr and that web page.
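To show what worries me, here's a sketch of the trick and the trap. The null_deleter is the technique from that page; the Session class and the careless copy are made up:

#include <boost/shared_ptr.hpp>

struct null_deleter { void operator()(void const*) const {} };

class Session { public: void ping() {} };

boost::shared_ptr<Session> kept_copy;     // some other programmer stashes a copy...

void your_code()
{
    Session session;                                            // a stack object
    boost::shared_ptr<Session> p(&session, null_deleter());     // legal, thanks to the "trick"
    kept_copy = p;
}                                                               // ...and session dies here

void my_code()
{
    kept_copy->ping();   // months later: undefined behavior -- the pointer outlived the object
}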

This has made me ruminate for the last few days on the various hacks I have encountered, or in some cases perpetrated over the years I have been developing software. Using the terminology from my early days as a programmer, I realized that there is a spectrum of hacks ranging from a "slimey hack" (one that works but makes you want to wash your mind out with lysol after you read it) to a "righteous hack" (one that makes you step back in awe at its beauty and clarity.)

So I thought I'd document some of these historic hacks and explain where I think they fit on the slime-to-righteous scale. Watch this space...

Thursday, February 10, 2005

Swig vs C#

I spent some time recently evaluating SWIG. It's a nice way to expose your C or C++ interfaces to perl or python (and possibly other languages) -- although it has some perfectly understandable limitations on what types you can use. These limitations are surmountable by writing some helper functions.

Unfortunately what the customer wanted was to access the interface "from .NET" (which I translated to: from C#. )

Swig has a -csharp option. [Digression: Swig's arguments are positional flags!!! That is, "swig -csharp -c++" doesn't work! You have to say "swig -c++ -csharp". Not necessarily a confidence builder.] Unfortunately the entire documentation for the -csharp option consists of about 20 lines in the manual, and these lines are not terribly helpful since they show how to access global C variables using Mono's version of C#.

"Google swig csharp" or "google swig c#" didn't produce anything more helpful. (actually this blog may now show up in that search and I've already explained more than any of the other hits did (chuckle))

There's a SWIG wiki page, but when I rummaged around on it the only relevant thing I found was a FAQ (with no answer) that turned out to be pretty useless.

Bottom line, I think it'll be easier to write a managed C++ bridge layer to expose the C++ interfaces to the .NET world. We'll see (if I ever get back to this issue.)