Monthly Archives: March 2011

How I discovered a bug in the C# compiler, Part 1

http://philosopherdeveloper.com/posts/how-i-discovered-a-bug-in-the-csharp-compiler-part-1.html

Poll! What should my next blog topic be?

Check it out! A sweet new poll! (For an explanation, read below.)

My personal bandwidth has been significantly reduced lately (in case you haven’t noticed), for a number of reasons. The main one is not a lack of time, believe it or not; rather, it’s a lack of mental resources. It’s been a while since the last time I was seriously mentally invested in a project; now that I am—and I will most likely write about that at some point in the future—I find that by the time I get back to my hotel each night, I am so mentally spent that it’s difficult to focus on anything meaningful. (Oh no, I think I’m discovering what most working adults experience every day!)

One area in which this has become particularly obvious is my blog. Where I used to try to post something relatively thought-provoking and meaningful on a somewhat regular basis, I now struggle to update at a frequency anywhere near my original goal of daily (or even my updated goal of 5 days a week). And when I do post, the quality is, in my (perhaps overly critical*) opinion, not as high as it should be.

So I had this idea, because a lot of promising ideas have been stewing around in my head for a while. The problem is simply that I’ve been having trouble mustering the energy to give any one of them the attention it deserves. And with each new idea I have, it becomes more difficult to actually pick one and spend a solid hour or two writing a post about it. So what if I delegated that decision to you, the readers? There ought to be at least a small handful of you who would be interested in picking what I write about, yes?

This will be a little experiment. Here is a list of some of the topics I’ve been interested in covering (with only brief descriptions of each); if you care to vote for one, please do so!

  1. I’ve been pair programming over the past couple of weeks with a guy who has a very different style from me. Whereas I tend to see a solution to a problem right away—maybe not the right solution, of course, or even necessarily a good solution, but one that can be implemented quickly and in such a manner that it shouldn’t be difficult (in my mind) to go back and improve it later—he takes a much more controlled, deliberate approach to his code. One way I would describe our difference is by saying that he focuses more on the process of writing code and how it can be done safely, while I focus more on the code and getting features built.

    This guy recently made a comment to me that has gotten me thinking quite a bit. He talked about a colleague of his who was incredibly smart, but who designed software in a way that required him to make his brain work really hard. And by doing things in such a way, he actually subverted his own ability to come up with more creative solutions to problems because his brain was maxed out on technical issues.

    I like this idea, but at the same time I sometimes feel that developers can become obsessed with not making their brains “work really hard” and end up being flat-out lazy, refusing to do relatively easy work because they prefer to reserve their brainpower for scenarios that are unlikely to occur (i.e., when you’re working on a fairly simple project, the need to utilize your brain to “think outside the box” may be slim to nil). So I was thinking about writing a blog post on this topic.

  2. I recently got into quite a debate on Stack Overflow over the following statement (which I disagree with):

    Value types aren’t subtypes of object until they’re boxed.

    If you’re interested in seeing some of the debate, take a look! I was thinking this could actually make a good blog post; but honestly, it might be too big a topic to cover. Every time I think about what I’d write, my mind fills up with about a hundred different points and related concepts.

  3. Some time ago I discovered a bug in the C# compiler. I actually came across the bug (and even wrote about it already, but not in much depth) while writing some NUnit tests for a library I’ve worked on. I was thinking it could be interesting to write a post outlining the process of discovering this bug, starting with the unit test, some head-scratching, some research including an in-depth look at the C# spec, and finally getting confirmation from Eric Lippert (principal developer on the C# compiler team) that it’s a bug. Any takers?
  4. Here’s a totally different alternative. I could resume work on either VariantCollections or Tao.NET (both of which I believe in but haven’t had much of a chance to work on lately) and write about my progress on either of those.

So, what’ll it be? If you have any interest in any of the topics I just listed, let me know by voting in that sweet poll up top!

*But then again, perhaps not.

Stuck on Subversion? Fear not!

One of the more annoying aspects of the project I’m currently on is that our source code is committed to an SVN repository. Now, don’t get me wrong; it isn’t that I think Subversion is crap. In fact, my first introduction to version control itself was SVN; you could even say I have a soft spot for the software.

But here’s the thing: when you’ve got a reasonably large team, you’re employing continuous integration, and commit freezes occasionally happen (e.g., because the build went red), it starts to get really frustrating that you can’t commit locally. In SVN, every commit goes straight to the central repository, unless you create an entirely new branch for what could well be a small change.

The “right” solution to this problem (in my opinion) is to switch to a distributed version control system like Git or Mercurial; but sometimes, due to organizational constraints, that isn’t an option. So, too bad, right?

Think again! I was quite pleased earlier this week to discover that I was able to install hgsubversion, which lets you clone an SVN repository on your local machine, enjoy all the benefits of Hg locally (most notably local commits), and then use Mercurial as a Subversion client, so that “pushing” using Hg becomes equivalent to “committing” using Subversion.

My understanding is that the same thing exists to bridge Subversion to Git; it’s called git-svn.

Basically, I am way too tired to write a long and thoughtful post tonight; so this is just a useful piece of general advice. If you find yourself frustrated with SVN, for any reason, consider trying either hg with hgsubversion or git with git-svn. They can make your life so much easier.
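To make that concrete, here’s roughly what the Mercurial side of the workflow looks like. This is a sketch of my own, not from any official docs: the repository URL is a placeholder, and it assumes the hgsubversion extension is already installed and enabled in your ~/.hgrc.

```shell
# Clone the SVN repository as a local Mercurial repository.
# (svn.example.com is a placeholder, of course.)
hg clone svn+https://svn.example.com/repo/trunk myproject
cd myproject

# Commit as often as you like -- these stay on your machine,
# even during a commit freeze.
hg commit -m "Work in progress"

# When the freeze lifts, replay your local commits as SVN commits.
hg push

# Pull in new SVN revisions from your teammates.
hg pull
hg update
```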

The curious triangle

Here’s a little something that’s puzzled me for a long time now.

In math there is a concept called an asymptote. It is a line that a function’s graph approaches ever more closely but never quite reaches.

Limits are a very important concept in mathematics. They allow us to reason about infinity in ways we couldn’t otherwise.

Consider the common example of calculating the area under a curve. Without using calculus, the best method of doing this involves approximating the area with several rectangles whose tops touch the curve and summing the areas of these rectangles:

The greater the number of rectangles used, the more accurate the approximation.

An integral provides a way of determining exactly what this sum approaches as the number of rectangles goes to infinity.

Pretty simple to understand, right?

So here’s what’s so odd to me. Take the following jagged shape and figure out its area:

10, right? Pretty easy. Now, see how there are 4 “steps” along the jagged side? Suppose we increase that number to 8:

The area within this shape is now 9. If we further increase the number of steps to 16, the area becomes still smaller (8.5). Without thinking too hard about it, we can see that as the number of steps goes to infinity, the area of this shape will approach that of the triangle below: (4 x 4) / 2 = 8.

But let’s go back a second. When we had 4 steps, what was the length of that jagged side?

Each “step” has two sides of length 1 each. So, 4 * 2 * 1 = 8.

And what about when we had 8 steps?

By the same reasoning, 8 * 0.5 * 2 = 8 again. Huh, weird. And for 16 steps, it would be 16 * 0.25 * 2 = 8 yet again.

Again without thinking too hard about it, it seems clear that the length of that jagged part of the shape, and thus the perimeter of the triangle to which it converges, remains constant: the jagged part stays at 8, making the perimeter 16—same as that of a square!

So basically, the two shapes below could have the same area, while one has a perimeter of 8 + 4 * sqrt(2) but the other has a perimeter of 16.

Isn’t that weird?

Our encapsulated world, Part 2

Last week I wrote a post discussing the concept of encapsulation. It was a short little post, primarily because I ended up being too tired to write as much as I had wanted to.

Well, at the moment I am sitting in the Phoenix airport for a 2-hour layover with nothing else to do (and therefore no excuse). So let’s explore this topic a bit more.

I already described encapsulation as the process of conceptualizing a system of many interrelated parts as a single unit. In Gödel, Escher, Bach (or GEB), Douglas Hofstadter utilizes a different term to describe the same notion: chunking. The image he evokes is that of taking many small things and assembling them into a distinct “chunk” that can be studied and acted upon from outside—i.e., from a higher level—without requiring much (or any) knowledge of the aforementioned “many small things” within it. Hofstadter provides an excellent series of examples of this process in the realm of science, starting with the quark, one of the smallest such “interrelated parts” we know of, which composes larger-scale systems (i.e., protons and neutrons).

[O]ne does not have to know all about quarks to understand many things about the particles which they may compose. Thus, a nuclear physicist can proceed with theories of nuclei that are based on protons and neutrons, and ignore quark theories and their rivals. The nuclear physicist has a chunked picture of protons and neutrons–a description derived from lower-level theories. Likewise, an atomic physicist has a chunked picture of an atomic nucleus derived from nuclear theory. Then a chemist has a chunked picture of the electrons and their orbits, and builds theories of small molecules, theories which can be taken over in a chunked way by the molecular biologist, who has an intuition for how small molecules hang together, but whose technical expertise is in the field of extremely large molecules and how they interact. Then the cell biologist has a chunked picture of the units which the molecular biologist pores over, and tries to use them to account for the ways that cells interact.

These examples are easily mapped to completely different domains where similar “chunking” takes place. For instance, consider the structure of a human society. At perhaps the lowest level, we have individual people with very specific desires and concerns. Then individuals form into households, which in turn compose neighborhoods, which in turn make up towns. You don’t need me to explicitly continue the progression; in your mind you already see that this sequence can be carried further with cities, counties, states/provinces, etc. (surely there are concepts like this in just about any governed area, though the specific terms denoting them may be different).

And just as Hofstadter’s examples included objects (fields of study) as well as subjects (scientists practicing within these fields), so can we discuss groups as well as leaders when it comes to human society. Neighborhoods may or may not have councils that oversee their community facilities; for towns and cities there are mayors, for states there are governors, and so forth. And like a chemist whose primary focus is at the molecular level, and who doesn’t necessarily need to have a deep understanding of the interactions among protons and neutrons, a state governor maintains a “big picture” view of the affairs of his or her state, typically without spending a considerable amount of time investigating the specific low-level issues facing each individual town and city (maybe some, but likely only the major ones) within it.

Another way of putting this is that we could say a state governor has a “chunked” view of his or her state’s towns and cities. He or she might view each as a single unit—an encapsulation of the neighborhoods, which in turn are encapsulations of their many households and individuals—and be interested in high-level facts, such as which towns have high crime rates, which have budget issues, etc.

Perhaps not-so-coincidentally, GEB contains yet another example of encapsulation that deals with societies, but not human societies: ant societies (i.e., ant farms). Hofstadter gives a fascinating account of how ant farms can exhibit high-level meaning even as the individual ants within them have no concept of the “bigger picture” to which they belong. For example, if certain members of the ant farm locate food, they will emit signals which are detected by other nearby ants from the farm, who will follow these signals and converge on the site of the food. The ants will all then form into a sort of “team” whose purpose is to deliver the food to some location that needs it. Hofstadter refers to such a “team” of ants as a signal, for reasons which I probably couldn’t explain as well as this excerpt:

Let’s say a signal is moving along. As it goes, the ants which compose it interact, either by direct contact or by exchange of scents, with ants of the local neighborhoods which it passes through. The contacts and scents provide information about local matters of urgency, such as nest-building, or nursing, or whatever. The signal will remain glued together as long as the local needs are different from what it can supply; but if it CAN contribute, it disintegrates, spilling a fresh team of usable ants onto the scene.

What Hofstadter has done is propose a way of looking at ant farms that encapsulates the movement of groups of ants within the colony as signals which travel from one part of the colony to another, much like signals in the brain which travel via neurons. Considered this way, it is possible to visualize the same activities of an ant farm from a “high level” without even imagining ants at all! The ants in this view are something like a medium through which messages can be carried, and their role could just as easily be played by neurons or any number of other things.

But anyway, back to humans. An unfortunate but well-known weakness of the hierarchical system of human societies is that each additional layer of government adds a notorious layer of corruption and inefficiency. I don’t know enough to say whether the “corruption” part is universal, but the “inefficiency” part seems almost unavoidable: when lower levels are reliant upon higher levels for assistance in solving a problem, there is overhead involved in going through the proper channels to reach the higher levels, gaining the necessary approvals, and waiting for the applicable resources. I won’t attempt to exhaustively list the sources of inefficiency here; there are undoubtedly more.

This is a bit ironic, because often the people most qualified to solve problems at the low level are those who are intimately familiar with those problems; but they are not the ones with the authority or the resources to put solutions in place.

This warrants another one of my nerdy software parallels, if only because I think it’s illuminating to consider how similar human society is to a computer system in this respect.

The similarity is this: computer systems, like human societies, can be broken down into smaller and smaller components at lower and lower levels of detail. At arguably the lowest level are the actual machine code instructions understood by a particular processor; above this are very primitive constructs offered in low-level languages such as assembly which aggregate groups of instructions into logical units; then above this are slightly more elevated constructs in slightly higher-level languages such as C. As you move higher and higher in this progression, you proceed through more and more levels of encapsulation, with each level (often with its own corresponding higher-level set of languages) viewing each component in the level below as a single “chunk” without requiring in-depth knowledge of the inner workings of that chunk.

This is actually an extremely powerful strategy that we humans have discovered as a way of extending our abilities as a species far beyond the abilities of our individual brains.

Consider what I’ll call a “high-level software application” such as Microsoft Excel. This is a fairly advanced tool that requires a rather complicated system (namely, the Windows operating system) in order to be useful. This complicated system in turn utilizes drivers and software libraries in order to run properly; these drivers rely on hardware with specialized instruction sets; each hardware device in turn relies on the correct functioning of its constituent parts. This progression could almost continue forever. What’s worth noting is that, once again, we are describing a series of systems wherein an expert of one level does not have to understand the inner workings of the level below. That is, the development team responsible for Microsoft Excel did not necessarily need to understand the inner workings of Windows in order to get their work done; they merely needed to understand certain components of Windows, and how they could interact with those components (e.g., how to interact with the file system in order to save a spreadsheet to disk)—those chunks as it were.

A well-known trade-off required for all this encapsulation is that the further software gets from the actual hardware it is running on—that is, the more layers of encapsulation separating the software instructions from actual hardware instructions—the greater the cost to the software’s performance. Modern high-level languages such as Java or C# require entire virtual machines to be running for the sake of portability. Dynamic languages like Python and Ruby require interpreters that impose even greater overhead. These layers upon layers introduce a certain kind of inefficiency, just as the multiple tiers of government in human society do.

I would keep going, but at this point I am quite exhausted; so it will have to wait for a future installment. As it stands, I think attempting to draw parallels among the fields of science, the levels of human society, ant colonies, and various programming languages was quite enough for one blog post—regardless of how rambling and incoherent that attempt may have been!

Getting caught up in completely the wrong thing

Today I got involved in a discussion at work about a feature the client wanted to implement for a particular scheduled process.

Basically, the client wanted to be notified via an e-mail when the scheduled process was finished. Perfectly reasonable.

The tech manager, however, had an additional concern: what if the data acted upon by the scheduled process were not present, in which case the scheduled process might never complete? This was a serious problem, he said, and he wondered whether this hypothetical issue ought to be mentioned to the client.

Now, to be fair, yes, the right thing to do was to ask the client (which is what the team did). But what I found funny about this discussion was this: if the data relied upon by the scheduled process were not there, it would cost the company literally millions of dollars. And the tech manager’s main concern was that if the scheduled process did not complete, then it would be automatically killed by another process, which would raise a ticket for some other team in the organization to deal with.

To me, this is like planning a party, deciding it would be a good idea to have the party catered by a Chinese restaurant and then worrying, “Oh no, but what if aliens invade Earth and decide to enslave humanity, and they don’t like Chinese food?”

Whether our hostile alien overlords were fans of our food selection would be among the least of our worries in this scenario. We’d be enslaved by aliens.

Our encapsulated world: Part 1

This begins a series of posts about a concept I find fascinating. I plan to write on this topic in the form of a series because, frankly, I can’t imagine covering all the ground I want to cover in one post. I have too little free time these days to write epic blog posts very often anymore, sadly. Maybe that’s a good thing, though, because I have so much on my mind about this subject that it could probably get pretty overwhelming if I crammed it all into one post. Hopefully I don’t put anyone to sleep.

What I want to write about is, in a word, encapsulation.

“Encapsulation” is a term software engineers use all the time. We use it to describe cases where a complex (or at least non-trivial) set of software components is “encapsulated” within a single component, allowing developers to use that component in a flexible, reusable way without always having to juggle around its constituent parts individually. Outside the world of software, it can have more or less the same meaning. Whenever you hear of “encapsulation,” the idea being described is that of taking a system of parts and conceptualizing it as a discrete unit, which makes it easier to interact with in a comprehensible way.


Some great examples exist in the realm of data structures. Generally, low-level programming languages (such as C) do not offer out-of-the-box components for representing the concept of collections of objects. At best, they provide a single, highly primitive construct: an array, or possibly even just a pointer (a location in memory), which can be used along with a length to denote a contiguous block of memory. This block can then in turn be used to store several pieces of like data (e.g., numbers or dates).

Using such constructs is tedious at best, hazardous at worst. Programmers have to worry about off-by-one errors, buffer overflows, dynamic allocation/deallocation of memory, and all kinds of other nightmarish things. But this isn’t even the important point. The important point is that you can do very little useful work with this very basic concept without writing thousands of lines of laborious code. What if you want to store data in something resembling a table, allowing lookups by unique key? What if you want to maintain a queue of items, to be processed in the order they are received? Or what about some sort of branching hierarchical structure? Good luck without encapsulation.

On the other hand, with encapsulation, it quickly becomes amazingly simple to do things that would otherwise seem impossible. If I give you an array and I say, “Now I want to insert an item in the middle of this array,” you have a frustratingly difficult task ahead of you.

Disclaimer: the code below is horrible. I refuse to claim otherwise!

template<typename T>
int insert_element(T** array, int length, int capacity, int index, T element)
{
	// Determine the location of the last element in the array.
	int lastPos = length - 1;

	// Check if the array contains as many elements as it can hold.
	if (lastPos == capacity - 1)
	{
		// Store a pointer to the old array for deallocation.
		T* temp = *array;

		// Allocate extra space for the incoming element.
		capacity *= 2;
		*array = new T[capacity];

		// Copy all the elements over from the old array.
		for (int i = 0; i < length; ++i)
		{
			(*array)[i] = temp[i];
		}

		// Deallocate the old array.
		delete [] temp;
	}

	// Shift every element over by one down to the insertion point.
	int insertionPos = index;
	for (int i = lastPos; i >= insertionPos; --i)
	{
		// Ugh... so much work!
		(*array)[i + 1] = (*array)[i];
	}

	// Insert the element.
	(*array)[index] = element;

	// Return the new capacity.
	return capacity;
}

Pretty gross, right? Now let’s perform the exact same task, this time with a very useful encapsulation of the above logic, one that does pretty much the same work while requiring far less code from the developer: the std::vector<T> class.

template<typename T>
void insert_element(std::vector<T>* array, int index, T element)
{
	array->insert(array->begin() + index, element);
}

Hmm, 40 lines became 5 lines. How is that possible? Quite simply, that entire mess of code up top, comprising an array, a length, a capacity, etc., is all essentially wrapped or encapsulated—this concept goes by many names—inside the insert operation, which is just one of many operations supported by the std::vector<T> class.

So that’s one way of interpreting the word “encapsulation”: as a way of representing a detailed interdependent system of many parts as if it were a single entity.

There are many more layers of meaning to this word, which I intend to explore in future posts. At the moment I have run out of steam; but in the meantime, how many examples of encapsulation can you think of in everyday life? I honestly believe that the more you think about it, the more astounded you will become by how ubiquitous it really is.

Anyway, I’ve got plenty more to share over the coming days. Don’t you worry!