Monthly Archives: January 2011

A LINQ riddle

OK, this is inexcusably nerdy, but…

Are these two LINQ extension calls opposites?

bool allNumbersAreEven = numbers.All(x => x % 2 == 0);
bool noNumberIsEven = !numbers.Any(x => x % 2 == 0);

What do you think? It seems like a yes, doesn’t it?

The above two lines of code are logically similar to:

All fribbles are wibbles.
No fribble is a wibble.

Can both statements above possibly be true? In my personal opinion, no. But it does admittedly depend on your definition of “All.” Suppose there is no such thing as a fribble? Then certainly the second statement, “No fribble is a wibble,” must be true—no fribble is a wibble because no fribble is anything. But what about the first? “All fribbles are wibbles”? Seems kind of odd, doesn’t it?

I would argue that to speak of “All” of a nonexistent entity is pretty meaningless. And so I would expect the following to output False:

var numbers = new int[0]; // Empty set
Console.WriteLine(numbers.All(x => x % 2 == 0));

But: it outputs True. Very strange, if you ask me!

So the designers of the LINQ extension methods All and Any apparently felt that a statement about something that doesn’t exist is by default true. So I guess all fribbles really are wibbles… and all pluborgs are klugorgs… and all morbablobs are snorpaklods.
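Incidentally, this behavior falls right out of the obvious implementation. Here's a rough sketch of how an extension method like All presumably works (just an illustration, not the actual BCL source):

```csharp
using System;
using System.Collections.Generic;

static class MyEnumerable
{
    // a sketch of All: bail out with false at the first counterexample
    public static bool MyAll<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (!predicate(item))
                return false;
        }

        // an empty sequence never enters the loop, so we fall through to true:
        // the logicians' notion of "vacuous truth"
        return true;
    }
}
```

Any is the mirror image: it returns true at the first match and falls through to false for an empty sequence, which is why, on an empty array, allNumbersAreEven and noNumberIsEven both come out true.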


The daily what now?

I have a confession to make. I really enjoy reading The Daily WTF.

For those of you who aren’t familiar with it, it’s a website where software developers and other people in the IT industry post horror stories about the incompetence they’ve had to deal with in the workplace. Often these “stories” come in the form of code snippets written by coworkers—typically, these snippets are messy, inefficient, pointless, hopelessly confusing, or any combination of these (and often all of them at once).

It’s an amusing website, for sure. So why did I say this is a “confession”? Because I also think it’s kind of horrible. The Daily WTF (TDWTF) is basically where IT people go to look down upon others and laugh at their mistakes. It isn’t constructive; it isn’t educational. It’s just amusing, nothing more.

OK, so, big deal, right? Everybody likes to have a chuckle every now and then—what’s the harm?

Personally, I believe that the state of the software industry is… pretty sad. To be sure, there’s amazing stuff out there being developed, even at this very moment; but for every well-written, smartly designed, properly documented software system out there, there are a dozen crappy, brittle, unmaintainable heaps of garbage.

Why? Are a lot of software developers just dumb? I really don’t think so. I don’t think a person gets to be a software developer by being dumb. We’ve all got to have a certain base level of intelligence, or there’s no way we can make it in this field.

I just think there’s way too much of an attitude of superiority. And of thinking that “my code” is elegant and beautiful, while “your code” is ugly and stupid.

It’s such a shame, because if, instead of getting pleasure from looking down on others for their mistakes and posting their ugly code on TDWTF for all to scoff at, we were to examine our own work more critically and ask ourselves how we can improve, I think we software engineers could collectively increase the quality of our work considerably. And if we were to actually try and help our coworkers when we recognize areas where they can improve, instead of laughing at them for their ignorance, we would all undoubtedly benefit from one another’s shared knowledge.

Here’s another confession: I know I’ve written my fair share of code that could easily end up on TDWTF, if only a colleague stumbled upon it and felt cruel enough to submit it. But I also think that recognizing this, and assuming the humility that realization inevitably instills, is likely to help me improve myself as a developer far more effectively than I ever could by reading Daily WTF articles.

Not that I’m going to stop reading TDWTF, mind you 😉 As I said, it is an amusing read. I only need to remember, with every amusingly terrible code snippet I read, that it could’ve been me who wrote that.

Really, if we can’t look back on our own work from 1–2 years ago and say to ourselves, “That could easily be submitted to TDWTF,” maybe we should worry that we’ve stopped growing as engineers. When you look at it from that perspective, it seems that maybe TDWTF has a constructive purpose after all—as a powerful motivator to keep getting better. You don’t want to end up on The Daily WTF, do you?

How disappointing…

I know, I know. Where was my post on Saturday, right? Well, this is Saturday night, and I’m going to treat it like it’s my Saturday post, even though it is definitely past midnight (however, I will be cheating the system and manually setting the post time to January 29th, so take that, WordPress!).

The truth is that I’ve been a) busy, and b) kind of drawing a blank. I did update my previous post to cover more of the stuff I had wanted to cover when I started writing it; the downside of that, for those of you who are brave enough to go back and read it, is that it has now become quite long. And possibly boring.

As I said, I don’t really have any great ideas of what to write about right now. One thing I do want to share—and it’s kind of sad, but probably no one will care; so maybe it’s not that sad—is that the data structure I referenced at the very end of my last post, while brilliant, cannot be realized to its full potential in C# (and possibly not in Java either, though Java implementations are out there), because array allocations are themselves O(N) in C#: the runtime requires every element to be initialized upon instantiation. (Part of the promise of the paper was a data structure with guaranteed O(1) insertions; that might be possible in a language like C++, but not in C#. I measured it.)
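In case the C# claim sounds surprising: the runtime guarantees that a freshly allocated array comes back with every element zeroed, so allocating an array of N elements really does entail O(N) work. Trivial to observe:

```csharp
using System.Linq;

var arr = new int[1000];             // the CLR zero-initializes all 1000 elements
bool allZero = arr.All(x => x == 0); // always true, by specification
```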

I know, really sad, right? It’s OK; don’t cry. It’ll be all right.

This was a weak post. The next one will be sweet, I swear!

Do you want 100 dollars today, or 1 cent today, 2 cents tomorrow, 4 in two days… OR: The difference between arrays and linked lists

I realize that title’s a little ridiculous; let me explain.

First of all, for those of you who perhaps aren’t computer programmers (or even who are, but who couldn’t care less about data structures), the below is a diagram depicting an array.

A diagram depicting an array

Nothing to see here. Just your average everyday array.

An array consists of a bunch of elements all right next to each other in a computer’s memory. It’s very simple, and very “easy” for the computer to deal with (it only needs to know where that first element is, and how many elements there are total—sort of like, if I know where you live, then I also know where your neighbor lives: right next to you!). But there’s one serious limitation with arrays: they’re fixed in size, meaning, once you’ve created one (allocated it), you can’t change how many elements it holds.

In contrast, the following is a diagram depicting a linked list.

A depiction of a linked list

Look at me, I'm a zany linked list!

A linked list offers a significant benefit over an array: elements can be added, removed, and inserted anywhere in the list, all in constant time. This is because a linked list “node” consists not only of its value (the element itself); it also includes a pointer to the next node in the list (and possibly also a pointer to the previous node, in which case we have a doubly-linked list).
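In code, a node for a singly-linked list is about as simple as data structures get; a minimal sketch:

```csharp
// a minimal singly-linked list node
class Node<T>
{
    public T Value;      // the element itself
    public Node<T> Next; // reference to the next node (null at the end of the list)

    public Node(T value) { Value = value; }
}
```

Inserting after a given node is just a matter of rewiring a couple of Next references, which is why it takes constant time.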

Now here’s a curious puzzle. Suppose you’re a programmer and you have need of a collection of, say, integers. You know you’re going to be adding many values to this collection, but you can’t predict how many (this might sound contrived, but it’s a realistic scenario: consider real-time data analysis, where data points are provided by some external source and it’s the software’s responsibility to analyze the data as it’s received). Do you use an array? Keep in mind that since arrays are fixed in size, this means that if you need more elements than the array can hold you’ll need to create a new array, copy all your elements over, and repeat this process whenever you reach the capacity of the current array. On the other hand, what about a linked list? Adding to the end of a linked list is O(1), which makes this the better choice, right?
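For what it's worth, the create-a-bigger-array-and-copy strategy is essentially what .NET's List&lt;T&gt; does under the hood. A bare-bones sketch of the idea (illustrative only, not the real implementation):

```csharp
using System;

// a bare-bones growable array: double the capacity whenever we run out of room
class GrowableArray<T>
{
    private T[] _items = new T[4];
    public int Count { get; private set; }

    public void Add(T item)
    {
        if (Count == _items.Length)
        {
            var bigger = new T[_items.Length * 2]; // allocate double the space
            Array.Copy(_items, bigger, Count);     // O(n) copy, but increasingly rare
            _items = bigger;
        }
        _items[Count++] = item;
    }

    public T this[int index]
    {
        get { return _items[index]; }
    }
}
```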

This reminds me, believe it or not, of a common mathematical experiment that we’re all probably familiar with. I seem to remember my mom posing this question to me when I was a kid; or maybe it was my dad. In any case, the experiment goes like this:

You have two choices. The first choice is that I give you $100 right now. The second is that I give you a penny right now, two pennies tomorrow, four pennies the next day, and I keep doubling the number of pennies every day for a month. Which do you pick?

I’m guessing that any adult reading this knows the right answer; in fact, I’d be surprised if any adult could see that question and not feel that the answer is obvious. But to a child, it is not obvious at all. In fact when you actually learn how much money you’d have after a month in the second case, as a child (and even as an adult, honestly), it’s pretty shocking! (For the record, on a 30-day month that would be $10,737,418.23.)
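If you doubt that figure, it takes a few lines to verify: the total is 1 + 2 + 4 + … + 2^29 cents, which is 2^30 - 1 cents.

```csharp
long totalCents = 0, todaysPennies = 1;
for (int day = 1; day <= 30; day++)
{
    totalCents += todaysPennies; // day 1: 1 cent, day 2: 2 cents, day 3: 4 cents...
    todaysPennies *= 2;          // double the payout for tomorrow
}
// totalCents == 1073741823, i.e. $10,737,418.23
```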

A similar illustration of this phenomenon (basically, exponential growth) that I’ve heard used before is that of folding a piece of paper in half enough times for the resulting folded-up page to reach the moon. (Guess how many folds it takes? 42—that’s it!)
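(That one checks out too, assuming a sheet roughly 0.1 mm thick and an average Earth-moon distance of 384,400 km:)

```csharp
double thicknessMm = 0.1;         // a typical sheet of paper: roughly 0.1 mm thick
double moonDistanceMm = 3.844e11; // average Earth-moon distance of 384,400 km, in mm
int folds = 0;
while (thicknessMm < moonDistanceMm)
{
    thicknessMm *= 2; // each fold doubles the stack's thickness
    folds++;
}
// folds == 42
```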

What does this have to do with linked lists? Actually, you know, it’s funny: the real parallel is actually a backwards one. So, to make it less confusing, let me pose a variation on the above experiment:

You have two choices. The first choice is that you owe me 10 cents per day, every day, for the rest of your life. The second is that in one week you will owe me 10 cents, then in a month you’ll owe me 20, then in six months we’ll make it 40, then 80 after a year, and from there we’ll just keep doubling the amount of time that goes by before you owe me any more money.

The above is admittedly imprecise (I didn’t feel like doing the conversions from days to months & years in my head); but it captures the basic gist of what I’m trying to say. In the first case, let’s say you live 80 years; that’s nearly $3,000! On the other hand, let’s look at the second case. Supposing again that you lived 80 years, and let’s just say the first year you owed $2 (to make it easy). Your last payment would be when you’re 64 years old (even if you made it past 80, I doubt you’d make it to 128!) and it would be for a whopping $128.

In other words, taking the guaranteed constant cost (allocating linked list nodes) over the performance hit that occurs less and less frequently (creating a new array) is really not the best choice.
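To put a number on “less and less frequently”: with a doubling array, the total number of element copies over n appends is bounded by 2n, so all that copying amortizes to O(1) per append. A quick way to convince yourself:

```csharp
// simulate n appends to a doubling array, counting how many element copies occur
int n = 1000000, capacity = 4, copiesPerformed = 0;
for (int count = 0; count < n; count++)
{
    if (count == capacity)
    {
        copiesPerformed += count; // a resize copies every existing element...
        capacity *= 2;            // ...but each resize is twice as far away as the last
    }
}
// copiesPerformed stays below 2 * n, no matter how large n gets
```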

This is all related to a peculiar psychological phenomenon that Ed Rosenthal discusses in his excellent book The Era of Choice. In one chapter (I don’t recall which) he describes the following interesting experiment. Members of a study group are given two options: accept a $100 guaranteed reward, or take a 50/50 chance of receiving either $500 or nothing.

Which would you choose? As it turns out, the vast majority of participants in the study decided they’d choose the guaranteed $100.

OK, interesting. Before analyzing whether or not that’s smart, though, I should mention that Rosenthal then goes on to share that there was a second part to the study. It was essentially the exact same question, only backwards: instead of a guaranteed versus probabilistic gain, participants were asked to choose between a guaranteed and probabilistic loss. So, the options were: pay a guaranteed $100 fine, or take a 50/50 chance of either having to pay $500 or nothing.

Which would you choose this time? Same answer?

Here’s the statistical perspective. In the first case, the correct choice is to go with the 50/50 chance of receiving $500. Why? Because your expected outcome is $250 ((50% * $0) + (50% * $500) = $250). In the case of a guaranteed $100, your expected outcome is, obviously, $100. On the other hand, in the second part of the experiment, the correct choice is to go with the guaranteed $100 loss, for the exact same reason: your expected loss is $100, while in the 50/50 case your expected loss would be $250. (Yet in the experiment, the vast majority of participants opted for the 50/50 choice in the second part. So they made the wrong choice both times!)

This is a little difficult for some people to understand at first: How can your “expected value” be $250 when that’s not even a possibility? This is the same sort of objection people make to the seemingly nonsensical statistic sometimes spouted that “The average American family has 2.5 children” (I have no idea whether or not that’s accurate; I just know I’ve heard it before). The error in reasoning here is conflating an individual case with the average of all cases. Here’s what I mean.

Let’s say, rather than participating in the abovementioned experiment once, you were given the opportunity to participate in it a thousand times. And let’s say, as with most of the participants in the original study, your gut told you to go with the guaranteed $100 reward each time. Then your expected reward would be $100 per trial, for a total of $100,000. Sounds pretty nice, right? But what if you went the other way? If you opted for the 50/50 choice each time, then half the time (roughly) you would get $500, and half the time (roughly) you would get nothing. Averaged out over 1,000 trials, you could expect to be up $250,000—that’s $150,000 better than the other option!
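You can simulate the thousand-trial version in a few lines; the gamble's total wobbles around its expected value, which is rather the point (the seed here is arbitrary):

```csharp
using System;

var rng = new Random(2011);
double guaranteedTotal = 0, gambleTotal = 0;
for (int trial = 0; trial < 1000; trial++)
{
    guaranteedTotal += 100;                          // the sure thing
    gambleTotal += rng.NextDouble() < 0.5 ? 500 : 0; // the 50/50 shot at $500
}
// guaranteedTotal is exactly 100,000; gambleTotal lands in the neighborhood of 250,000
```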

Get the idea? We make many choices in our life, and often it is our nature to want to secure the guaranteed something, whatever it may be. But if we apply this strategy everywhere in our lives, we may ultimately be experiencing a suboptimal outcome, because we are continuously making poor statistical choices due to our fear of risk.

This seems all the more crystal clear when we imagine the 1,000 trials scenario in the context of the second part of the experiment (choosing between losses). Over 1,000 trials, opting for the 50/50 chance of loss each time, you could expect to be in debt $250,000 by the end of the experiment. Taking the guaranteed $100 route would put you in the hole only $100,000 (and if you participated in both 1,000-trial runs, this means the correct choice in both cases would eventually put you ahead $150,000, while making the wrong choice in both would put you behind $150,000).

Interestingly, I think software developers (being humans themselves) are susceptible to making similarly poor but understandably instinctive choices in their software design: “I don’t want the amortized statistical advantage of a resizable array; I’ll take the guaranteed performance win of a linked list for its O(1) appends!” Even though, in the long run, that choice sets you back, offering a small expected benefit compared to the greater expected benefit of the array.

Of course, I’m not saying that either approach is really the best choice.

But I’m also not really just talking about arrays and linked lists, here. Linked lists are the scapegoat of this post; in fact I’m talking about looking at the bigger picture and making well-informed choices based on overall expected benefit, rather than stressing out over edge cases. And that applies to you too, non-developers!

That is one crazy Func! I think I’ll call it CrazyFunc.

Man, I am really having a creative dry spell. I have seriously started like three different blog posts at different points throughout the day today and gotten stuck each time! But in the interest of not totally failing in my goal of a blog post every day in 2011, I guess I’ll reveal the solution to my latest pop quiz since I did, after all, present it a full ten days ago.

The challenge, in case you don’t remember, was this:

Can you define a function in C# [I’m not sure why I saw the need to specify the language. -Dan] that takes a different function as input, and returns an instance of a delegate that will execute the input function and return an instance of itself?

The funny thing about this challenge is… technically, I guess my honest answer would have to be no; but that’s only because I didn’t phrase the question particularly well. What I can do, however, is write a function that will define such a function for me. I know—crazy, right? The trick, first of all, is to realize that this is simply not possible using one of the built-in delegate types from the BCL. A Func<T>, for example, cannot return a Func<T>; it has to return a T (you could try going with Func<Func<T>>, but it should be obvious after only a second or two that that’s not any better).

How do we get around this? We define our own delegate type that does return an instance of itself. It’s kinda like a Func<T>… only crazier. Let’s call it a CrazyFunc<T>.

delegate CrazyFunc<T> CrazyFunc<T>(T input);

That’s right, I did just go there.

OK, so how can we actually use this? Recall that the syntax proposed as part of the challenge would look something like this:

crazyFunc("What")("the")("heck?");

How do we actually write out an expression that will act in this way? Can you believe that the answer is using recursion?

Mwahahaha, yes. Using recursion.

static CrazyFunc<T> CreateCrazyFunc<T>(Action<T> sideEffects)
{
    return input =>
    {
        sideEffects(input);
        return CreateCrazyFunc(sideEffects);
    };
}

And voilà! Now we can write code like this:

CrazyFunc<string> crazyFunc = CreateCrazyFunc<string>(Console.WriteLine);
crazyFunc("Aaa")("aa")("a")("aAAaa")("aag")("ggGGGgh")("h")("hhh!!!")("!");

Output:

Aaa
aa
a
aAAaa
aag
ggGGGgh
h
hhh!!!
!

(Full disclosure: I originally encountered this problem because the user Juliet posted it—apparently for her own amusement—on Stack Overflow. I just liked it a lot, so I figured I’d share it with the world. And yes, I did post an answer of my own even though it wasn’t F#.)

I’m going to be a ThoughtWorker!

Well, it’s official. I’ve accepted an offer from ThoughtWorks and will be joining the company in mid-February.

I’ve notified my employer, which makes it extra official.

ThoughtWorks is a strong force for agile development in the software industry and is highly regarded both for its forward-thinking philosophy as a company and its top-level talent (or so I’m told).

What does this mean for this blog? Hopefully, nothing bad! It does seem quite likely that my focus will begin to drift further and further from .NET specifically and towards more general software-related topics. (Then again, that’s what I said when I started at CMU, and look at all my “Programming”-tagged posts over the last several months. Nothing about Ruby! It’s all .NET! But I think actually working professionally in lots of new technologies will be a lot more immersive than just studying them academically. So this time I’m more confident in my prediction.)

On an unrelated note, my wife and I saw The King’s Speech last night; I highly recommend it.

The power of positive thinking

Read this Snopes article and tell me you don’t want to be George Dantzig.

In case you don’t feel like reading the whole article, the gist of it is this: a graduate math student wandered into class late one day, noted a couple of math problems written on the board, mistook them for homework, solved one of them and turned it in a few days “late” with apologies to the professor. Six weeks later the professor came knocking on his door to inform him that he had, in fact, solved an unsolved math problem (the problems on the board had never been homework) and that he wanted to publish the student’s work as a research paper.

Seriously, this story suggests to me that all teachers should regularly pose unsolved problems to their students, without revealing that the problems are unsolved. I wonder how often something like this could happen?

Exceptions for controlling program flow: take it easy on your poor CPU

A lot of developers don’t see what’s so bad about using exceptions to control program flow.

I have weighed in on this question, and my general opinion is that the main problem—from a practical standpoint—is one of performance. Yes, it’s an implementation detail and varies by programming language/framework; but I would wager that 9 times out of 10, writing code that is liable to throw an exception (what Eric Lippert calls a “boneheaded exception”) and catching that exception is less efficient than dealing with the problem beforehand—nipping it in the bud, if you will—sans exception.

But this blog is titled “The Philosopher Developer,” after all; so now I’m going to offer my philosophical explanation. And, as has become customary for my blog posts, I will do so in the form of an analogy.

The characters:

  • You: the developer
  • Servant: the computer, running your software
  • Your Instructions: your software (obviously)
  • The Store: the operating environment

You have a servant whom you use to accomplish various small tasks for you—running errands, performing simple work around the house, etc. (Sounds nice, doesn’t it?)

One day you decide you want something cold to drink, but there’s nothing in your fridge. So you give your servant some cash and send him to the grocery store with these instructions: “Buy me a Coke.”

Now, let’s say your servant is extremely literal and incapable of making his own decisions. He can only follow instructions. You realize this, and so in your mind you think: If the store doesn’t have any Coke, maybe I’d be fine with a Pepsi instead. As a last resort, I could even go for some orange juice. (This is of course supposing you know for a fact that the store will always have orange juice.) Your plan is, if your servant can’t find a Coke, he will have a panic attack and come back to you very distressed. Then you will let him know your alternate choice.

This is what it’s like to throw and catch exceptions: you’re accounting for disastrous scenarios without letting your servant “in” on the details of your backup plan. Consider the energy your servant has wasted going to the store, trying to execute your instructions, failing, and then coming back to you to report on his failure, when all along you knew exactly what you wanted to do if he were unsuccessful. You have anticipated the disaster and deliberately allowed it to happen as a means of controlling your servant’s workflow.

The alternative, obviously, would have been to let your servant know of your 2nd and 3rd choices in advance. This would have allowed him to make only one trip to the grocery store, where he could have looked for Coke, Pepsi, and orange juice—in that order—without any panic attacks or distress, and most importantly without needing to return to you empty-handed. The disaster (the exception) thus could have been prevented altogether.
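In code, the two shopping trips might look something like this (Store, Has, and Buy are hypothetical stand-ins, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

// a hypothetical store with a fixed inventory
class Store
{
    private readonly HashSet<string> _stock;
    public Store(params string[] stock) { _stock = new HashSet<string>(stock); }
    public bool Has(string item) { return _stock.Contains(item); }
    public string Buy(string item)
    {
        if (!_stock.Contains(item))
            throw new InvalidOperationException("Out of stock: " + item);
        return item;
    }
}

static class Shopping
{
    // exception-driven: send the servant out, wait for the panic attack, repeat
    public static string BuyWithExceptions(Store store)
    {
        try { return store.Buy("Coke"); }
        catch (InvalidOperationException)
        {
            try { return store.Buy("Pepsi"); }
            catch (InvalidOperationException) { return store.Buy("Orange juice"); }
        }
    }

    // check-first: hand over the whole shopping list up front; one trip, no drama
    public static string BuyWithChecks(Store store)
    {
        foreach (var choice in new[] { "Coke", "Pepsi", "Orange juice" })
        {
            if (store.Has(choice)) return store.Buy(choice);
        }
        throw new InvalidOperationException("The store is out of everything");
    }
}
```

Both methods come home with the same drink; the first just gets there by way of exceptions we knew, in advance, exactly how to handle.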

When you think of it this way, it really is just the sensible thing to do all-around to try and avoid exceptions when it’s possible to do so.

Overwhelmed by choice

Take a look at the following beautiful user interface:

A terrible UI filled with way too many options

Be sure you make an optimal decision!

(Don’t even try to figure out what this form is collecting input for.)

What’s wrong with that interface? I ask as if it isn’t obvious: there are way too many choices! Seriously, how can you expect a user to make a sensible decision when there are this many options to choose from?

This is just a basic principle of good user interface design: simplify decision-making for the user by limiting choice. It sounds kind of dark and 1984-esque, but in fact it is quite a humane approach. (Also, to clarify: I’m not suggesting removing choices, but rather making them “opt-in”; i.e., if the user wants to change/enable something, he/she can do so by seeking out that choice rather than being bombarded with countless options up front.)

Here’s the classic example of what I’m talking about:

The Google UI

Seriously, how easy is that? You are only asked to make one choice: what do you want to search for? Notice there’s an “Advanced search” link beside the main input field, allowing you to seek more choices if you want them.

Have you ever found yourself confronted with a set of choices, and it stresses you out? I know I have. What’s really strange is that often, in these situations, you would be objectively worse off without the choice; yet you would be happier if your options were fewer.

Take college, for example. Let’s say you graduated from high school at the top of your class. You applied to Harvard and Princeton and got into both. Sweet, right? Except no, it’s not so sweet; now you have to make the agonizing choice of which school you want to attend, meanwhile recognizing that whatever choice you make, there’s always going to be this nagging question in your mind: What if I had picked the other school? Did I make the right choice?

On the other hand, if only one school accepted you, then your choice is made for you. No agonizing decision-making for you to worry about! Problem solved.

Here’s what I think happens in our brains when we’re making choices: we do our best to consider all possible options, but if the number of options to consider gets out of hand, we get “overloaded” and end up making a choice we know may not be optimal. This is a matter of practicality as we don’t live forever and thus don’t have the time or mental energy to consider every option available to us.

You C.S. folks will appreciate this: what’s the algorithmic complexity of this decision-making process?

static void ConsiderDecision(Option decision, OptionSet options)
{
    if (options.IsSingleOption)
    {
        ConsiderSimpleChoice(decision, options.Single());
        return;
    }

    foreach (Option option in options)
    {
        ConsiderDecision(option, option.RemainingOptions);
    }
}

Yeah, that’s right: it’s large (O(n!)—the same as a brute-force solution to the Travelling Salesman Problem!). With each additional choice we’re given, the mental strain we endure in striving to choose among the options available to us increases factorially, not linearly. At a certain point our brains recognize the impossibility of considering every individual option (much like a chess master recognizes that he or she can’t possibly consider every individual sequence of moves leading to a checkmate), and this causes us to actually panic a little bit. We make a choice we know we can’t be 100% confident in and just hope for the best.

Not to get too melancholy, but this all reminds me of an excellent song I sometimes enjoy listening to by The Swoon, a very obscure band (seriously, I don’t even know if you can find them on Google). The track is called “Speak Soft”; it’s a deeply sad but beautiful (in my opinion) examination of the state of being lonely and not knowing what to do with your life. In particular the lyrics I have in mind are from the song’s last verse:

Houdini closed himself inside of a box.
He didn’t have a trick to spring the lock.
Off the stage the people watched the clock.
Prison could be a nice place to live,
the bars on the window like bars on a crib.
Freedom is the least desired gift to give.

Moral of the story for application developers: next time you’re designing a user interface, don’t get carried away with the options. It isn’t nice to your poor users’ brains.

Moral of the story for everyone: recognize your own decision-making limitations and don’t seek out options that don’t truly matter to you. Otherwise, you’re only doing yourself a disservice.

A brief message from the school of .NET apologetics

Someone far more experienced and knowledgeable than myself recently made the following remark in a personal correspondence:

Too bad you’re working in .NET. There are TONS of tools out there for the Open Source world. And they’re all free and well-documented, with huge communities of helpful users to answer any question you might have.

To be clear: I don’t have the perspective on this issue that years in the industry naturally bring. My view of the software industry is quite limited. So yes, I am about to offer a quick defense of .NET, and I will be the first to concede that it is biased and perhaps naïve. But I also see this as a two-sided coin: approaching an issue with a boatload of experience to draw on sometimes colors one’s perspective in a way that might not actually be entirely fair to that specific issue.

Let me give an analogy to hopefully explain what I mean, and then I’ll move ahead to my main point.

Imagine you’re a kid, and there’s a family—we’ll call them the Smiths—that lives in your neighborhood. They are a very large family; let’s say a dozen children, all mostly grown. Your mom and dad really dislike the Smiths, based on a substantial history of bad experiences with them. The couple is manipulative and conniving, not to mention greedy and unprincipled. Most of the children are likewise disrespectful, poorly behaved, and just downright mean.

Now let’s say you actually don’t know any member of the Smiths. You’ve never had any occasion to interact with the parents, and most of the children are significantly older than you. But there is one kid in your grade—Billy Smith, the youngest sibling—whom you meet one day during class. He seems like a pretty decent guy, actually. You crack some jokes together, get along fairly well, and you eventually get the impression that Billy is really quite a nice person. You decide you’d like to be friends.

When you go home and tell your mom and dad about this, they immediately disapprove. “Billy Smith?” they ask. “No way. Those Smiths are all a bunch of hooligans, always up to no good. It’s a shame you want to be friends with Billy Smith.”

The point of this analogy should be obvious. It might be true in general that, for example, Microsoft as a corporation has many flaws (not that I have any reason to bash the company personally—but I must acknowledge that they have lots of detractors, and that generally happens for a reason); but this does not necessarily reflect on the quality of every existing Microsoft product. I propose that .NET is like Billy Smith from the analogy: while I can understand why someone who is jaded towards Microsoft, no doubt with plenty of good reasons, would have no interest in it, I also think that I have valid reasons to respect the framework, and that it is probably easier for me to recognize its merits since I lack the background experience that might make me similarly jaded.

With that said, what follows is my attempt to concisely respond to the main arguments typically made against .NET, both from the statement I quoted above—which I believe fairly represents the viewpoint of a very large portion of software developers who shun .NET—and elsewhere.


Objection: From the quote above:

There are TONS of tools out there for the Open Source world.

Response: The open source world and the .NET world are not mutually exclusive; there are plenty of resources (OSS and otherwise) for .NET as well. Note that the source code for the .NET base class library (yes, Microsoft’s version) is actually freely downloadable, though the code for Microsoft’s CLR (a proprietary implementation of the CLI, standardized in ECMA-335) is not. There is, however, an open-source implementation of the CLI called Mono; and this implementation enables running CLI programs (a.k.a., what we commonly refer to as “.NET programs”) on plenty of non-Windows platforms. Also, plenty of open source .NET applications and toolkits exist; for examples, see CodePlex or go to SourceForge and search for projects written in C# (there are currently over 13,000 of them).


Objection: Again, from the quote above:

And they’re all free and well-documented, with huge communities of helpful users to answer any question you might have.

Response: I know this is just one example, but there is a very huge community of helpful users that is heavily slanted towards .NET: Stack Overflow (though the site is actually language-agnostic in principle, and plenty of non-Microsoft technologies have lots of users on the site, there’s no denying that the site’s largest demographic comes from the .NET community), which also happens to be one of the top 500 sites on the internet.


Objection: .NET is owned by Microsoft, which means the company has complete control over the future of the framework.

Response: I am by no means an expert on this topic, so take my response here with a grain of salt. But I don’t think this is really accurate. Consider the fact that both the CLI and C# itself are standardized in ECMA-335 and ECMA-334, respectively (to be fair, the latest versions of C# are not standardized, unless I’m mistaken). And remember that Mono provides a cross-platform implementation of the CLI, which means that even if Microsoft decided to “pull the plug” on .NET, there would still be a totally feasible way to build and deploy CLI applications written in C# to all platforms, including Windows (unless Microsoft could somehow refuse to allow Mono to run on Windows, which I suppose is at least feasible—but that would seem disastrously harmful for them). Porting existing Windows-only .NET apps to Mono would be, in most cases, very possible, if not downright trivial (in many cases, no work would be needed at all).


Objection: .NET is just a Java rip-off.

Response: This one I’m less qualified to respond to. All I can do is point out that Java is getting old at this point, and with age comes a certain degree of stagnation. I’m not saying that Java is not a stable, robust, and very valuable technology within the software industry; but from my limited knowledge, I perceive C# to be in many cases a significant improvement over Java (see here for a detailed comparison; note C#’s unified type system, mechanism for defining custom value types, support for passing parameters by reference, superior generics implementation—e.g., better constraints, no boxing for generics with value type parameters—and superior support for functional programming constructs, just to name a few). And I don’t think I’m alone in feeling this way: take a look at the April 2010 Technology Radar document released by ThoughtWorks (a well-respected company in the industry, certainly not a Microsoft puppet), in which C# 4.0 is given a rating of Adopt while “Java Language end of life” is rated Assess. The company has this to say about Java as a language (note that as a platform, the JVM is still regarded as vital within the software industry):

With the increase in number of languages available on the JVM, we expect enterprises to begin to assess the suitability of reducing the amount of Java specific code developed in their enterprise applications in favor of these newer languages.


Anyway, I just think a lot of people don’t give .NET a fair shake. And while I understand why this is—just as I could understand why the parents in the above story would be skeptical of anyone in the Smith family—I still believe it deserves some consideration.

By the way, I am very interested in receiving feedback on this post! I am certainly not a “disciple” of .NET (I really do enjoy Ruby, though I am a novice at it; and I am also quite interested in learning other languages completely unrelated to .NET). If I have made any inaccurate or misleading claims above, please let me know and I will do my best to fix/clarify/resolve them.