Who’s Afraid of Functional Programming?

Joel Spolsky just published a great (and very brief) explanation of functional programming. There’s also a podcast of Berkeley’s CS 61A SICP course from Spring ’06 that I found — the first few lectures on functional programming are really worth your time. And finally, for a rambling, evening-discussion style explanation of functional programming, complete with historical anecdotes, there’s Functional Programming for the Rest of Us. [You might want to save that one, and come back when you have some time…but do come back to it.]

Joel’s article got me thinking. I’m not working on applications (at work or for fun) that really need massive concurrency, so that benefit of functional programming never swayed me much. Most of the uses I see for functional programming are simple things, like the selector filters I wrote about in Hacking the Browser’s DOM for Fun…given an array, use a function to specify which elements you’re interested in. Just like Ruby’s find and find_all methods, and Java’s FileFilter and FilenameFilter classes. It’s like using a Maserati just to commute. Well, The Little Schemer is on my wishlist…it’ll be fun to try the examples in both Scheme and Ruby. Maybe even JavaScript.
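To make the filter idea concrete, here’s roughly what it looks like in Java. The FilenameFilter interface is real; the directory path and the “.log” pattern are just made-up examples:

    import java.io.File;
    import java.io.FilenameFilter;

    public class LogFinder {
        public static void main(String[] args) {
            File dir = new File("/var/log/myapp"); // made-up path

            // The filter is a function in disguise: accept() decides,
            // element by element, which files we're interested in.
            File[] logs = dir.listFiles(new FilenameFilter() {
                public boolean accept(File d, String name) {
                    return name.endsWith(".log");
                }
            });

            if (logs != null) { // listFiles() returns null for a bad directory
                for (int i = 0; i < logs.length; i++) {
                    System.out.println(logs[i].getName());
                }
            }
        }
    }

Same idea as Ruby’s find_all: you hand over the behavior, not just the data.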

Now, I went to a Java School, so I only heard about functional programming, LISP, Scheme, Ruby, and all these strange beasts once I started teaching myself off the internet. No one at any of my jobs ever mentioned them. How is it that our field can have such a rich heritage, and almost no one knows about it? Ask the person in the next cube over whether they’ve ever heard of functional programming, whether they know what it is, or can explain it to you. I’d guess 80% of you will get blank looks. [This offer void at telcos and good universities.]

Quite a few people call that a competitive advantage. And they’re right — having scarce information puts you ahead of those without that information. But it seems short-sighted to me to gloat over that temporary advantage, when you’re missing the contributions that the people in the dark could be making.

UPDATE: It occurs to me that this might sound like I think functional programming should be used for everything — hardly the case, especially given my lack of experience with it, which I readily admit to. My point is, why don’t more of us know about it? Why isn’t it taught in more universities, or talked about at work?

Software and Belief

I spent a good while last week chasing down a bug in our J2EE app. It turned out I was using the wrong session attribute name. Actually, I was using a correct name: we store the same object in the session twice, under different names (let’s just not talk about that), and in some situations the attribute I used hadn’t been populated yet. When I wrote my code, I thought I knew what the attribute name was (and I was sort of right), but I didn’t double-check.
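If you’re curious, the trap looked roughly like this (a sketch; the attribute names are invented, not our real ones):

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class ProfileHelper {
        public Object getProfile(HttpServletRequest request) {
            HttpSession session = request.getSession();

            // The same object lives in the session under two names:
            // "userProfile" (set at login) and "currentProfile" (set later,
            // by another page). I had grabbed the one that, in some flows,
            // hasn't been populated yet.
            Object profile = session.getAttribute("userProfile");
            if (profile == null) {
                // Don't just believe an attribute is populated: check.
                throw new IllegalStateException("userProfile missing from session");
            }
            return profile;
        }
    }

The fix was to use the name that’s actually populated at that point, and to check anyway.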

I enjoy finding my mistakes. I don’t mean that as, “better than QA finding them for me,” although I guess that’s true, too. I think I like finding them because bugs often stem from incorrect beliefs, and fixing bugs is a chance to revise your beliefs. Software is a belief-intensive activity: you believe the web server is configured a certain way, you believe your cookie is being deleted at a certain point in the log-out process. We believe these things because we don’t want to verify them, and as long as things work as we expect them to, we’re happy.

The fun starts when our beliefs don’t match our experience. Suddenly you’re faced with hard evidence that what you believe is wrong. You can choose to ignore your experience (foolish and pointless), or take time to investigate, and replace your belief with knowledge. What’s nice about software is that you can usually do this just by looking at source code or configuration files. Imagine how much easier science would be if we could understand a phenomenon by simply looking at its source code.

Maybe we rely so much on belief in software because it’s not our job to understand everything, but to make things work a certain way. Understanding is great, but working software pays the bills. Of course we have to understand lots of things to do our job, but since that’s not our goal, we abstract away anything we can. I mean, isn’t abstraction one of the core ideas of computing and programming? Building up layers of abstraction is like asking the programmer to sustain belief in the lower layers.

This shows up in lots of situations: developers arguing about whose code is causing buggy behavior, developers arguing about exactly what a tool does under the hood, developers arguing about which redundant server they’re running on… You can argue about anything you believe in, but you can’t argue for long about facts. Argument indicates conflicting perspectives or opinions, which mostly boils down to belief.

I guess the lesson here is to remember this, and not to be too certain in your (software) beliefs. If you find yourself in an argument, try to understand the beliefs on each side, and at least acknowledge that you’re stating a belief. Remember Voltaire: “Doubt is not a pleasant condition, but certainty is absurd.”

PS: Terry Pratchett’s The Bromeliad Trilogy (Truckers, Diggers, and Wings) is a great story about when belief and experience collide.

Talking about Software and the Nac Mac Feegle

I was talking with a co-worker about how people imagine obstacles that aren’t really there. You ask them to do something perfectly reasonable, and they tell you it’s impossible, because of these imaginary obstacles. You have to first show them the obstacle isn’t there, and then things can proceed. “Oh,” I said, “it’s just like First Sight. Have you ever read The Wee Free Men?” This is a first for me — talking about software, communication, going from idea to implementation…and referencing a children’s book about small blue fighting Scotsmen known as the Nac Mac Feegle, or Wee Free Men. It’s one of Terry Pratchett’s Discworld novels.

The idea of First Sight is that you see what’s actually there, instead of seeing only what you want to see. It goes along with Third Thoughts — First Thoughts are regular thoughts, Second Thoughts are thoughts about the first thoughts, and Third Thoughts are thoughts about your thinking. Kind of meta-thoughts.

If you haven’t read The Wee Free Men, it’s a short read, and a lot of fun. So far it has one sequel, A Hat Full of Sky.

“Nac Mac Feegle! The Wee Free Men! Nae King! Nae quin! Nae Laird! Nae master! We willna’ be fooled agin!”

Separation of concerns

In some fun work-time conversation today, my friend Tom & I discussed Computer Numerical Control (CNC) systems, and how they’re being used to turn 3D models of an object into physical sculptures. You can 3D-scan an object, tweak the model if you want, ship the digital file to one of these companies, and they ship you back a physical copy. There are other companies that similarly create resin sculptures from digital models.

This made me think of cafepress, where they’ve separated the business of creating and ordering merchandise like T-shirts, mugs, and clocks, from the business of actually producing, selling, and shipping them. You find it used by many humor sites to earn some money off short-lived, sudden popularity (I’m thinking SaveToby.com and Lions vs. 40 midgets): make a quick .jpeg with your site’s name, ship it to cafepress, click “T-shirt, clock”, and link to your new online store right from your website.

This all ties back to (have you guessed yet?) separation of concerns. If you have something that does two things, break it into two things. You find this over and over in software. Components/objects/commands with clean interfaces can be easily re-used by other components/objects/commands, in ways that their creators didn’t envision. That’s one of the touted benefits of service-oriented architectures, as well. It’s what ESR calls Unix’s Rule of Modularity. It’s present in many of the building kits you can buy for children (no matter how old they are): Legos, K’nex, the old Construx.

CNC sculptors and modular software are just a grown-up’s building toys.
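If you want the cafepress split in software miniature, it’s just an interface between the two concerns. A toy sketch (every name here is invented):

    // Creating a design is one concern; producing the goods is another.
    // A clean interface between them lets either side be swapped out.
    interface Producer {
        void produce(String designFile, String product);
    }

    class TShirtPrinter implements Producer {
        public void produce(String designFile, String product) {
            System.out.println("Printing " + designFile + " on a " + product);
        }
    }

    class Storefront {
        private final Producer producer;

        Storefront(Producer producer) {
            this.producer = producer; // the store doesn't care who does the printing
        }

        void order(String designFile, String product) {
            producer.produce(designFile, product);
        }

        public static void main(String[] args) {
            // A humor site never touches a printing press:
            new Storefront(new TShirtPrinter()).order("toby.jpg", "T-shirt");
        }
    }

Swap in a MugMaker or a ResinCaster and the storefront doesn’t change; that’s re-use the creators never had to envision.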

Remove clutter, leave clarity

I just heard from a friend & co-worker about a book called Why Business People Speak Like Idiots, and a book Amazon offers with it, On Bullshit. These both sound pretty interesting. They remind me of, in no particular order, Orwell’s “Politics and the English Language”, Edward Tufte’s work, and Strunk’s Elements of Style: all efforts to remove clutter, and leave clarity. I think I’ll buy one and see for myself.

Hang on to that idea…

I was talking with a co-worker today, an excellent developer, about how ideas fade, while the changes they cause remain.

We were recently on a team whose task was to gather a list of software engineering best practices; teams in our department would rate themselves against it, and look for ways to improve performance. The biggest impact the list seems to have had is to add action items to project plans, and the miserable phrase “…in accordance with the software engineering best practices.”

When we considered each software practice, we clarified what it entailed. We discussed how it affects a team. We explored what a team is like without it. We weighed practice against practice. By the time we released the first version of the list to management, we had already spent significant time on it, with much thought, discussion, and healthy argument behind it.

The project began as a chance to change our work environment; it ended as more meaningless process. The thing that’s lost in that transition is the original idea, and its context.

The same pattern is repeated in different contexts: religions begin as inspiration, and end as commandments. Laws begin as an idea or loose social consensus, and end as rigid edicts, filled with loopholes. Software designs begin simple and elegant, and wind up crufty and confusing.

It’s even visible in this old joke (http://uufn.org/body_uufn_reflections.html#dec16):

A young girl is watching her mother make a roast. The mother cuts off the two ends of the roast, puts the rest in the pan and pops it in the oven. “Mom,” the girl asks, “how come when we make a roast, we cut off the ends before we cook it?” The mother replies, “I don’t know; that’s the way I’ve always done it. Let’s ask grandma.” Grandma is in the sitting room, so mother and daughter ask her why they cut off the ends before cooking the roast. Grandma’s reply: “My mother did it that way and so have I. You’ll have to ask Great Gramma.” The next day, they all get in the car, daughter, mother, and grandma, and go to see Great Gramma. “Great Gramma,” the young girl asks, “how come when we cook a roast, we cut off the ends before putting it in the oven?” “Well,” Great Gramma replies, “my roasting pan isn’t big enough for a whole roast.”

I wish I had some tip to offer for getting around this problem, but I don’t. All I can say is, try to notice when the original idea has been lost, and see if you can recover it.

Letting the ego get in the way

One of Robert Glass’ Fallacies of Software Engineering is that “Programming can and should be egoless.” People think that ego gets in the way of writing good software — you need to be cool, calm, and collected. No room for ego here, thanks, we’re detached professionals. Glass, though, argues that ego is part of what makes a programmer good — it provides incentive, personal attachment, motivation. Your software reflects on you, so make it good. This is all fine, and makes some sense to me. However, I want to talk about another aspect of ego that I think is less discussed, and more of a problem.

Maybe I should start with an illustration. In a recent meeting, I was trying to understand the new requirement the customer was asking for. The BA, having a great business background but very little IT background, already understood the problem — and solved it for me. It took some polite work for me to find out for myself what the new requirement was about, so I could design an appropriate solution. Part of the new requirement meant that I’d have to interface with an existing enterprise user profile system that stores user groups in a hierarchy. The BA couldn’t understand why it had to be in a hierarchy — she kept saying, “look, can’t we just get a list of the users, and use that? It just seems easier to me.”
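For what it’s worth, the hierarchy isn’t gratuitous complexity; the flat list she wanted falls right out of it with a short walk of the tree. A sketch (the Group class here is invented, not the real profile system’s API):

    import java.util.ArrayList;
    import java.util.List;

    class Group {
        String name;
        List<Group> subgroups = new ArrayList<Group>();
        List<String> members = new ArrayList<String>();

        // Walk the hierarchy, collecting every user at or below this group.
        List<String> allUsers() {
            List<String> users = new ArrayList<String>(members);
            for (Group sub : subgroups) {
                users.addAll(sub.allUsers());
            }
            return users;
        }
    }

The list is a view of the hierarchy, not a replacement for it; throw the tree away and you lose the structure the profile system exists to provide.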

I think this illustrates a common problem. Customers who have limited IT skills will insist that you use the solution they came up with. When you hear someone say, “I just see this in my head…why can’t we do that?”, you’re probably facing this.

If you’re building a system, you’re working with someone who understands the problem that system should solve. I’ll just call him the customer, although I think sometimes, this also applies to business analysts. Whoever it is, he also has an ego (he’s human, isn’t he?). He has an idea, at some level, how the system should solve the problem, and this is where the ego gets in the way.

Now, if you write software for a living, then you probably have more experience building systems than this person does. It’s what you do. This person does something else, by definition. You’re probably better able to imagine complex systems, deal with complex algorithms and data structures, and foresee the consequences of different design or solution decisions. Not because you’re smarter than the customer, but because, again, it’s what you do.

But when a customer has an idea for a system, it’s his idea. He’s thought some about this, and brought you his idea to be implemented. The more he’s thought about it, the firmer he’s probably become about it. If he identifies with it, if it’s “his idea,” you’ll have a hard time making him see any of its flaws. On the flip side, if you suggest a solution that he can’t readily understand, it can scare him away. You get that look that says “what kind of whack-o would want to deal with something that complex?”

I guess this is one of those situations where you can’t fix it, you can only deal with it. But understanding that customers and BAs may have this kind of attachment can provide a lot of calm in these situations, and calm is the path you want to take.


Here’s a review of Glass’ Facts and Fallacies of Software Engineering that I just found, but haven’t read yet.

Why do they think “doing it right” means taking longer?

Often when I suggest a better way to do things, some kind of process improvement, people say, “I think that’s an excellent idea. Unfortunately, we’re on a very aggressive schedule. In a perfect world, I think your idea would be a wonderful way to do things, but we just don’t have the time now.” This is said about all kinds of reviews (requirements, design, and code), prototyping, usability testing, building strategically, unit testing…

I think the crux of this misunderstanding is the belief that “doing it right” is harder than “doing it easy”, and that “doing it right” is something nitpickers made up. It’s almost like etiquette — “In a perfect world, I would set the table properly, but I just don’t have the time. Tonight, to save time, we’ll eat off paper plates with paper towels.”

The problem with this is that these practices are meant to save time, effort, and money. Skipping them hurts, not helps. Skipping them creates the very problem you were trying to solve by skipping them! I say these practices are as superfluous as a doctor sanitizing his hands before he operates on me.

“The interface IS the application”

“The interface is the application.” You hear that a lot when someone’s trying to remind you to consider the end-user’s perspective. It’s a good reminder, too — to the user, the inner workings of your application are probably about as interesting as the inner workings of a warehouse. N-tier? Components? Who cares? At the end of the day, if people don’t use your software, is it really any good?

It’s important, though, to remember how this statement is used: it’s a context-shifter, a facetious phrase intended to jar you into thinking — not a statement of fact. Accepting it as factual, and making decisions based on it, is like saying your skin is the entire organism, or that your car seat and dashboard are the whole car. I rarely think about the transmission (unless it’s not working), but that doesn’t mean that the radio moves the car.

Keeping to the vision

Maybe this echoes back to my days making music. I remember wondering whether a band needed to have a “leader,” a single member who wrote most of the words and music, and provided a psychological unity to the band. Gave it a personality.

I wonder whether software teams need the same thing. Typically, gathering software requirements is a bunch of people throwing details around. These people have different views of the system they’re designing, they understand the constraints on it differently, they have different goals and pressures on them. Seldom is there one person who makes all the decisions, and steers the overall project. Seldom is there any coherent vision of how things should work. Seldom is there any clarity. Is it any wonder that a lot of the software out there sucks?

Amazon.com is a good example of software with a cohesive usage paradigm. Users understand the entire process of buying things from Amazon. They understand shopping carts, check-out, addresses, coupons, and shipping. They understand how all these parts fit together. Granted, all of these concepts are carry-overs from real-world retail, so Amazon had it easy. In fact, anyone who builds software that models a real-world process has this part easy: email is so simple to use because it mirrors a real-world process.

However, this kind of clear understanding is lacking in many software systems. Helping users understand the “components” of a software system (the shopping cart, the coupons, etc.), and how they hang together to be useful, is, I think, the essence of usable software.

The more I think about this, the more it seems that a lot of software models a real-world process, because a lot of software is about automating some real-world system. In these cases, maybe the trick is simply to recognize that you’re automating a real-world system.