Category Archives: Psychology

New post: Forests and treebuilders

New post: People are not resources

New post: Your brain is a liability

I also totally forgot to link here to my previous new post, which is about a gem I wrote to change the way DataMapper handles errors: Unbreaking DataMapper

It’s very strange to defend a guess (and a bonus: yes, static constructors get called when using reflection)

First of all: I fully intend to respond to the questions a couple of readers raised in response to my recent post about agile development. The truth is that when I wrote it, I was fully aware that I was giving a very high-level view of what agile development is without providing a lot of explanation about how it’s actually implemented (hence the common concern, “So is agile really just about not planning anything?”—doesn’t sound right, does it?). But I’m not quite ready to delve into an in-depth response just yet! That will require some time, and I’ve been quite busy at work.

The only thing I have to write about today is that I find it very odd when people vigorously defend positions that are clearly just guesses.

Here’s an example. This morning on Stack Overflow, a user asked if a type’s static constructor would be called upon field access using reflection. I thought it was a pretty good question, albeit fairly trivial to test (I don’t know why the user didn’t just check and see for himself).

The correct answer is yes, field access using reflection will call a type’s static constructor. This is verifiable, and I included example code in my answer.
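For the curious, here is a small sketch along the same lines (the Config class and Name field are made-up names for illustration): reading a static field through reflection forces the runtime to run the type's static constructor before handing back the value.

```csharp
using System;
using System.Reflection;

class Config
{
    public static readonly string Name;

    static Config()
    {
        // This runs before any static field of Config is read,
        // including reads performed via reflection.
        Console.WriteLine("Static constructor ran.");
        Name = "initialized";
    }
}

class Demo
{
    static void Main()
    {
        // Fetching the FieldInfo alone does not initialize the type;
        // actually reading the value does.
        FieldInfo field = typeof(Config).GetField("Name");
        object value = field.GetValue(null);
        Console.WriteLine(value);
    }
}
```

Running this prints "Static constructor ran." followed by "initialized", never null, which is exactly the behavior the question was asking about.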

Curiously, another user answered with this:

If the value is set in the static constructor, it’s ONLY set when first accessed which won’t include being accessed via reflection. It isn’t initialized at runtime automatically.

This is factually wrong. So apparently the user was just guessing at the correct answer.

OK, that happens. But then another user pointed out that the answer appeared to be incorrect, citing my answer, to which the user responded with another guess:

Actually I am correct. In Dan’s answer he uses typeof(TestClass) which in fact calls the Static Constructor.

Again, factually wrong! We can clearly see this with the following very short code:

using System;

class TestClass
{
    static TestClass()
    {
        Console.WriteLine("I am in TestClass's static constructor.");
    }
}

class Program
{
    public static void Main(string[] args)
    {
        Type testClassType = typeof(TestClass);
        Console.WriteLine("OK, we're past the typeof part.");
    }
}

The output is:

OK, we're past the typeof part.

So the typeof operator does not cause a type’s static constructor to be invoked. Why would this user (1) state a guess as if it were fact, and then (2) defend that guess with another guess, also wrong?

I would like to believe this is an isolated case, but actually I believe that people do this all the time. In fact I tend to think that a whole lot of the controversial issues that people discuss in our society are actually matters of fact which simply have not been definitively established yet (e.g., will government policy X or Y be more effective at reducing crime?).

The human inclination to vehemently defend guesses is one which I have to believe has served our species well in the past (I should hope so, otherwise I fail to see why we do it so readily). But I still just don’t quite get it.

Everything agile

I’ve mentioned probably about a hundred times that I have recently joined ThoughtWorks. What I haven’t done is discuss what ThoughtWorks, as a company, actually does.

So, the company is one of the driving forces for agile software development in the industry. We* provide consulting and training to organizations that want to adopt agile practices, and we also develop (“deliver”) software to organizations using an agile approach. (There is also ThoughtWorks Studios, a division of the company which develops products for agile development.)

What the heck is agile development?

I’m glad you asked!

The “traditional” way of developing software involves a lot of planning: establishing clearly defined requirements, drafting feature and technical specifications, designing test cases, prioritizing features, etc.—in this way, a traditional software development methodology such as RUP is highly focused on predicting the challenges and needs of a software project.

[Image: a man thinking, “What will happen?”]

The “agile” way of developing software is in many ways a response to the faults of traditional processes that leaders within the software industry have observed over the past several decades. Agile development represents a fundamentally different approach to handling change within a software project: whereas a traditional approach attempts to prevent costly changes by anticipating requirements, an agile approach strives to minimize the negative impacts of changes by adapting quickly.

The fact that agile software development typically involves less up-front planning than traditional development has resulted in a lot of misinformation out there about what it means to be agile. Some detractors think of the term “agile” as meaning “unstructured”, “disorganized”, or even “risky”:

[Image: men getting right to work]

A better way of understanding what it means to be agile is this: agile is all about adapting to change. As I said, traditional practices tend to try and plan an entire software project from beginning to end. In this scenario, change is very expensive because, with so much effort invested in planning, every little change requires additional effort towards adjusting schedules, budgets, requirements, etc. The agile philosophy is that change is inevitable, and so rather than fight against it, it makes more sense to actually expect it (and adapt to it).

So you might think of “traditional” as representing a more predictive approach to software, and “agile” as representing a more adaptive approach.

[Image: the traditional-agile continuum]

Now here’s the crucial part about what I just said: when you expect something to happen, it doesn’t make much sense to plan for it not to happen.

That might sound a little too… obvious (or even tautological). Let me put it differently, using an analogy.

Let’s say I go to the horse races. I’ve done a lot of research, and according to what I’ve learned, I strongly suspect that Seabiscuit is going to win. This actually isn’t much of a leap of faith on my part; Seabiscuit nearly always wins. So it just makes sense for me to assume, or anyway, to guess, that Seabiscuit will win again. And yet, when it comes time to place my money on a horse, suppose I put it on War Admiral instead.

This doesn’t make sense, right? I expect one thing to happen, and yet I act as if something else is going to happen.

[Image: a guy with an umbrella who thinks it will be sunny]

It doesn’t make sense, and yet this is what traditional development is. These days nobody honestly expects that changes won’t happen. They will happen, and they are generally anticipated with fear, because developers know just how much additional work they can cause. Agile development is about saying, “Hey, wait a minute, why are we building up these huge specifications and making these great big plans when we know it’s going to hurt like crazy when (not if, when) they change?”

To put it another way: agile development is about not spending $1,000,000 on a house where you know (or are relatively sure) an earthquake is going to hit and obliterate it sometime in the next few months.

I was just thinking about all this today during a team meeting with some of my classmates from school. We were working on an assignment consisting of roughly four parts, and repeatedly the conversation kept coming back to the “big picture”: how we were going to structure the assignment overall. There would be a lot of back-and-forth on areas that were subjective, unclear, or complex. There was much speculation and discussion of the unknowable (e.g., what would have happened if the plan had been X).

That is, there was a lot of mental gear-turning that was premature because the ideas being explored were not yet fully in focus.

[Image: gears turning toward an unknown end]

This is exactly what the agile approach is meant to avoid: wasting effort. After all, this is why change is so expensive in the first place: it undoes much of the work that has been done. If that work hadn’t been done in the first place, change wouldn’t seem so scary.

And so at this meeting, as we were going on and on in our conversation about this or that future aspect of our project, we eventually arrived at a simple realization: Why don’t we take an agile approach to this? Rather than plan the entire thing from start to finish, we can just get started on the parts we know about now, and as we move forward a clearer picture of those parts we don’t know as much about should start to form, making it easier for us to work out decisions in those areas when we come to them.

After this realization, I think the bigger realization came to me (well, came to me again—I have actually thought about this before, but it seems every now and then I’m reminded of it): an “agile methodology” is not just a software development methodology. It is really just a way of approaching any problem: by expecting change, and planning to adapt to it, rather than planning for one fixed set of circumstances and praying those circumstances don’t change.

*It feels a bit weird to say “we” to refer to a company I’ve just joined, but… gotta start some time, right?

The anthropomorphization of computers

You know what’s funny? A lot of my blog posts have to do with ideas of mine likening humans or human activities to computers or software phenomena in some way. But it is quite common to do the opposite: to view computers as being like people.

[Image: a computer with a face]

When’s the last time you heard somebody say (or you yourself said), “This computer doesn’t like me”? Or “It doesn’t want to do this”? Or “It’s thinking”?

Not that this is necessarily specific to computers, of course. We do this with cars, TVs, microwaves, basically every mechanical and/or electrical thing in our lives. But I think computers are in a league of their own, probably because we view them as machines that do work our brains would normally do (e.g., perform calculations). This makes it seem sometimes almost as if they have wills. And that’s when we start to get a little ridiculous.

I recently developed a small program for my wife to use at work; it simplifies some of the mundane everyday stuff she otherwise had to spend an hour or so doing from time to time. It’s nothing special, but it’s useful enough that she shared it with a coworker.

[Image: a sick computer]

He was apparently quite enthusiastic to start using the program… until it didn’t work on his computer. Hearing my wife tell the story, it was really quite sad: he would watch her use it on her computer, then go back to his own computer, follow exactly the same steps, and nothing would happen. It wouldn’t do the work it was supposed to do.

My wife’s coworker then made a joke about how I must’ve engineered it specifically to work only on my wife’s computer. Or anyway, not on his computer.

Obviously, he was joking. We’re always joking. But as they say, behind every joke there’s a small piece of truth.

I think that, in all honesty, we all find it a little bit unnerving how human-like our computer friends can sometimes be: seemingly intelligent and able to perform complex tasks, yet at the same time fragile, temperamental, and easy to confuse.

So I’m going to fix the problem that’s keeping the software from working on my wife’s coworker’s machine (I actually already know the cause; fortunately it’s about a 5-minute fix), if only to silence that uneasy feeling I know he has: Does my computer just not like me?

It’s not a person. It doesn’t “like” or “dislike” anything. I swear.

I think.

Seriously, it’s all about the puzzles

Sometimes I have to step back and take pause because I’m just too darned fascinated by the technical puzzle in front of me. This is a very common pitfall for a lot of software engineers, I think: losing sight of the larger problem (building a high-quality, functioning software application/library) in favor of some obscure smaller problem that seems more interesting.

I mean, it’s in my bones or something. Ever since being a kid I’ve always loved puzzles, logic problems, “mind twisters”—whatever you want to call them. I would always prefer those extra credit problems to the regular math homework; they were just more fun to think about, and more satisfying to solve.

This is just a personality trait, and I think it’s one that a lot of software engineers probably share. And it’s probably part of what has led so many of us to become engineers in the first place, as well as what makes us good at what we do (if we are, in fact, good at what we do!). But as I’ve said, it can also be a curse. It can seriously hinder your productivity when instead of fixing bugs or building new features you spend all your time thinking about the most efficient algorithm for, say, comparing two sequences to see if they contain the same elements.

And that is precisely what I’ve been spending too much time thinking about this morning.

Incidentally, can you think of a better way than this?

  1. Create a hash table from the first sequence, where the element is the key and its number of occurrences is the value.
  2. Scan the second sequence:
    1. If an element is not in the hash table, return False.
    2. If an element is in the hash table, decrement its value, and return False if the value falls below zero.
  3. Check whether all values in the hash table are now zero.
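The steps above can be sketched in a few lines of C# (HaveSameElements is a made-up name for illustration); the whole check runs in O(n) time with O(n) extra space:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // True if the two sequences contain the same elements with the same
    // multiplicities, regardless of order.
    static bool HaveSameElements<T>(IEnumerable<T> first, IEnumerable<T> second)
    {
        var counts = new Dictionary<T, int>();

        // Step 1: count occurrences of each element in the first sequence.
        foreach (T item in first)
        {
            counts.TryGetValue(item, out int n);
            counts[item] = n + 1;
        }

        // Step 2: decrement for each element of the second sequence;
        // bail out if an element is missing or over-represented.
        foreach (T item in second)
        {
            if (!counts.TryGetValue(item, out int n) || n == 0)
                return false;
            counts[item] = n - 1;
        }

        // Step 3: every count must be back to zero.
        foreach (int n in counts.Values)
            if (n != 0)
                return false;

        return true;
    }

    static void Main()
    {
        Console.WriteLine(HaveSameElements(new[] { 1, 2, 2, 3 }, new[] { 3, 2, 1, 2 }));
        Console.WriteLine(HaveSameElements(new[] { 1, 2, 2 }, new[] { 1, 2, 3 }));
    }
}
```

The first call prints True and the second prints False. Checking the count before decrementing (rather than decrementing and then testing for a negative) is equivalent to step 2.2 above.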

I definitely need to work on this.

[This ending left intentionally ambiguous]