New Haven Ruby: First Thursday, Third Wednesday

The New Haven Ruby group is gonna start building some rhythm, meeting twice every month, on the first Thursday and the third Wednesday. Even months (June, August, October) are hack nights; odd months are social nights.

We had our first one last Thursday, at the SeeClickFix offices, and had a great turnout – about 15 people! Even Denis came out to join us. We were hacking on web apps for coordinating tasks, on Ruby for reformatting other Ruby, and some of us were just discovering programming for the first time.

Our next one is Wednesday, June 20th, and will again be at SeeClickFix, where free parking is just around the corner, and good pizza delivers. New Haven’s newest hackerspace, MakeHaven, is also around the corner, and there’s talk of doing a visit at some point. I’ll be there, probably hacking on an app for printing fliers for user groups, or an IRC bot for the group, or a regular expression parser, or some Project Euler problems.

Hope to see you there!

Out of Love with Active Record

(I’m a newcomer to Rails. When I first found Ruby, and Rails, I liked the Ruby better. And I never found many Rails jobs near home anyway. So for years, Ruby flavored my C#, and C# is where I learned, among other things, to persist my domain aggregates with NHibernate. Now I’m a card-carrying Rails jobber, which is great, because I play with Ruby all day. And the Rails community is discovering domain-driven design, and ORMs…)

Steve Klabnik just posted about resisting the urge to factor your models into behavior-in-a-mixin and dumb-persistence-with-active-record. He nails it when he says:

Whenever we refactor, we have to consider what we’re using to evaluate that our refactoring has been successful. For me, the default is complexity. That is, any refactoring I’m doing is trying to reduce complexity… One good way that I think about complexity on an individual object level [is its] ‘attack surface.’ We call this ‘encapsulation’ in object oriented software design.

If you learn only one thing from his post, let it be that “mixins do not really reduce the complexity of your objects.” Greg Brown threw me when he said that mixins are just another form of inheritance, and I think he was getting at the same thing.

Steve’s suggestion for separating persistence and behavior is to – duh, once you see it – separate them into different classes: a Post and a PostMapper, or a Post and a PostRepository. When I used C# and NHibernate, we loaded our Posts from the PostRepository, which used our PostMapper for data access. (Actually, our PostMapper was an XML mapping file.) You might call that overkill, but in a legacy app, it was nice to sheetrock our repositories over all the different data access technologies we’d acquired over the years, from the shiny new ORM to the crusty old Strongly-Typed DataSets.

When I was on that team, the thing that we worried about was, what grain should we build our repositories at? We didn’t have simple models, we had domain aggregates: we’d load a ThirdPartyAdministrator, which had many Clients, which each had a number of Accounts of different types, each of which had different options and sub-objects. So, what kind of repositories should we build, and what methods should they have? If we want to load the Client’s Accounts, should we load the ThirdPartyAdministrator, find the Client, and get its Accounts? load the Accounts directly? load the Client, and get its Accounts?

For a ridiculously simplified example, but to give you the flavor of it, say we load the ThirdPartyAdministrator, the aggregate root, and go from there:

class ThirdPartyAdministratorRepository
  def self.load_tpa(id)
    # load the whole aggregate: the TPA, its Clients, and their Accounts
  end
end

tpa = ThirdPartyAdministratorRepository.load_tpa(42)
client = tpa.clients[client_id]
accounts = client.accounts

That’s too coarse; do we really have to load the TPA before we can get the client we’re after?

class ClientRepository
  def self.load_client(id)
    # load just the one Client
  end
end

class AccountRepository
  def self.load_account(id)
    # load a single Account
  end
end

client = ClientRepository.load_client(client_id)
accounts = { |id| AccountRepository.load_account(id) }

That’s too fine a grain, too low-level; we don’t want to have to muck around with Account IDs.

client = ClientRepository.load_client(client_id)
accounts = client.accounts

That might be a good middle-approach.

It comes down to knowing your application’s data-access patterns, and your domain’s constraints. If you often need a chunk of data, all together, you should probably have a repository for it. If one piece of data depends on another, your repository probably shouldn’t make you get them separately.
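To make the grain question concrete, here’s a toy, in-memory sketch of that middle-grain repository. Everything here is illustrative — the ROWS hash just stands in for whatever data access lives underneath:

```ruby
Account = Struct.new(:id, :balance)
Client  = Struct.new(:id, :name, :accounts)

# Middle-grain repository: loading a Client brings its Accounts along,
# because callers almost always want the two together, and never want
# to juggle Account IDs by hand.
class ClientRepository
  ROWS = {
    7 => { name: "Acme Corp", accounts: [[1, 100], [2, 250]] }
  }

  def self.load_client(id)
    row = ROWS.fetch(id)
    accounts = row[:accounts].map { |aid, bal| Account.new(aid, bal) }
    Client.new(id, row[:name], accounts)
  end
end

client = ClientRepository.load_client(7)
client.accounts.map(&:balance)  # => [100, 250]
```

Swapping the hash for an ORM call or a web service wouldn’t change the repository’s face, which is the point: callers only ever see load_client.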

With Rails’ ActiveRecord, all this is sorted out for you – you define your associations, it provides all those querying methods, and you use the correct ones for what you need. With repositories, you have decisions to make – you have to design it, and design is choice. But choosing is work! And you can choose inconsistently! Sometimes it even makes sense to! I’m curious to see how the Rails community, with its culture of convention, tackles this. And for myself, I plan to check out DataMapper at some point.

Cover your Moleskine in Brown Paper

(He’s kidding, right? He didn’t really cover his moleskine in ugly brown papeAUGGUGHHUH)

(..UGUGHAOMIGOD, he actually did. So gross.)

Ok. Did you hear the story about the reporter who interviewed Steve Jobs about the iPod, and Steve Jobs was outraged that the reporter’s iPod was in a protective neoprene case, which made it a) look ugly, and b) not gradually pick up that “scratched stainless steel” patina? Maybe this is like that. Maybe a brown paper bag is uglier than sleek faux-leather. Maybe a moleskine should look like it doesn’t often drink beer, but when it does…

Or maybe raw brown paper is DIY-chic. Maybe you can’t tell your moleskine from everybody else’s. Maybe your notebook already takes enough abuse. Maybe a brown paper cover is a good idea.

Whatever. I got the idea for this about a year ago, and did it just to see whether I could. (The moleskine elastic, as you’ll see, makes this a little trickier than your typical book covering.) I’ve done it several times, because I kind of like it. I finally googled today to see whether anyone else had instructions up for this, and was surprised I couldn’t find any. So here we go!


What you’ll need:

  • Your uncovered moleskine notebook. I’m using a large, but I’ve also done this with small ones.
  • A brown paper bag. For the large notebook, I’m using a bag that’s 7 1/16″ x 4 1/2″ x 13 3/4″; for small notebooks, you can use something as small as a lunch bag.
  • scissors
  • a pen
  • packing tape (optional – just for reinforcing some weak joints)

The Easy Part – a pretty ordinary book covering

This part is just like the book coverings you maybe made in school.

Cut down the seam of the paper bag, and cut off the bottom, so you have a large sheet of brown paper.

Fold the top and bottom edges of the paper down, so the book is the same height as the paper.

The paper folds create a sleeve, and you want to be able to slide the front cover into it. In fact, slide the notebook’s front cover into it now, and fold it back, around the book. If it looks like this:

…then cut the extra paper, so it looks like this:

Where it gets different

That elastic is getting in the way, right? Unwrap the book a bit, we’re gonna use the scissors – but read through this part all the way before you start cutting.

If you measure, you’ll see that the elastic is 1/4″ wide, and 3/4″ from the edge of the book cover (sorry for the blue-and-purple):

The trick is to cut some of the paper off of the cover-flap, so the elastics can get out. In this picture, I marked the parts to cut out with a black marker (the green arrows):

Make sure you cut on the back flap, not on the back cover. For some reason, I always screw this up – I want to cut the back cover. Don’t do that. Cut the back flap.

Leave at least 1/4″ between the cut and the fold. I cut a trapezoid shape, which makes it a bit easier to put it together, but it’s not that important. Here’s how it should look when you’re done:

You don’t have to reinforce this section with packing tape, but I’d recommend it – with the larger notebook, the elastic jerks this part of the cover around a lot, and the packing tape will make it last a lot longer. Don’t forget to do the bottom half, too.

Take the whole cover off the book – it’s easiest to put the back cover on first. Slide it in, so the elastic pops out of the cuts you just made:

Ok, this is the tricky bit – getting the cover on, and the elastic arranged right. Close the front of the book under, so you’re still looking at the back side of it:

Fold the elastic around the spine (to the right, in the above picture), so it’s holding the book shut. Here’s a close-up of the top of the book:

The hard part’s done! Wrap the cover around the back of the book. Before you wrap it around the front, take the elastic off again – you’ll need to open the notebook to get the front of the cover on. Open the front cover of the notebook, and slide on the cover flap. Pop the elastic back on, and…

All done!

Side Benefits

Moleskines famously sport that back accordion pocket. Covering one in brown paper like this means you can add two more:


Redder Pastures

What the hell happened? I mean, I don’t care for “I haven’t been blogging because…” posts either, but it’s been quiet here lately, hasn’t it?

The explanation comes in two parts:

After I announced I was releasing WordCram, I worked like mad on it. In my last post, the one announcing WordCram, I said “There’s still work to do, but that’s the fun part,” but I had no idea. And it’s not even a big library, and it doesn’t do anything that complicated! And there is certainly still work to do. I have a new, visceral appreciation for how much open source software developers give us. That’s the first part.

But all that stopped last April, when my employer began going through some – I guess “changes” is a safe enough word. Never mind what they were. It got me thinking it was time to find a job I liked better. The job search is the second part of the explanation. I didn’t want another ordinary-business kind of job, but I didn’t know which direction to head in. After sinking myself into some dataviz, science, and Ruby, talking to a bunch of excellent people, and finding some luck, I got a spot on the SeeClickFix team, doing Ruby on Rails, and helping citizens improve their community.

Get a great job, working in a great language, making the world a little bit better:

I start in September, right before I start classes at Ruby Mendicant University. It’s been a busy spring and summer, and it’ll be a busy fall, too.

And at some point, I have some WordCram things to finish…

WordCram: Open-Source Word Clouds for Processing

I just released a project I’ve been working on for a while, called WordCram.  As the title says, it’s a Processing library for generating word clouds.

I found Wordle a few years ago and really liked it, and after seeing the code for Algirdas Rascius’ Scattered Letters sketch, I tried making some of my own.  It was fun, but I thought it ran too slowly to bother bundling it into a Processing library.

After reading the Wordle chapter from Beautiful Visualization, I learned a few new tricks, and it’s a bit faster now, so here it is.  There’s still work to do, but that’s the fun part.

After OsCon 2010

OsCon 2010 is done, and I’m pooped. I met some great people, the talks were good, and I saw some promising ideas and technologies. Portland is a great city, with free public transportation, good beer, veggie-friendly restaurants, and Mt. Hood close by. What more could you want?

Here are my highlights and impressions.


Rolf Skyberg explained where corporate innovation initiatives come from, and Simon Wardley talked about innovation. Those links are to the talk descriptions, but you can watch Simon’s talk on YouTube, since it was a keynote.

As a company ages, Rolf says it gets more risk-averse, and that stifles innovation. He names each life-stage of a company for its most prominent employees: innovators, rock-stars, proceduralists, optimizers, and vultures. Once the company becomes so risk-averse that new ideas are stifled, and it starts losing money, the CEO assumes the problem is a lack of new ideas, rather than a culture that can’t absorb them.

As a technology matures, Simon says it gets more stable and ubiquitous, becoming a commodity. This “creative destruction” frees us up to do more interesting things.

I’ll be going back over their presentations, thinking about the commonalities between their talks.

Google’s Go

Rob Pike’s talk Public Static Void gave some context around Google’s new(-ish) language, Go, which I’d pretty much ignored. A few choice bits:

  • “there’s a false dichotomy between nice & dynamic & interpreted, and ugly & static & compiled”
  • Scala is “beautiful and rigorous”
  • (my favorite) “a language should be light on the page”


Processing

I got to show Processing to a bunch of people, which made me happy — Processing is a great tool, and a lot of fun. Kathryn Aaker was there, and she even made a sketch on the flight home.

I also talked with a guy whose name I can’t remember, and whose card I didn’t get, about how his friend used Processing to teach math concepts to his kids. That’s a pretty amazing thing. Take that, Mathematician’s Lament!

Scala, Mirah?

I really enjoy Processing, but…Java. Can we have something fast, but with closures and easy syntax, please? Either Scala or Mirah might meet that need.

Mirah is a Java compiler that reads Ruby-like syntax: it looks like Ruby, but it’s still Java. That seems promising, but I don’t think you can use libraries that aren’t part of Java’s core.

Scala is a functional/OO hybrid language that brings closures and higher-order programming to Java, with a helping of type inference. It seems promising, but it also seems like a lot of features mixed in together; compared to Io or Scheme, there’s tons to learn. But maybe that’s the wrong way to look at it — maybe it’s close enough to Java that it’ll be fairly quick to learn.

Powell’s Technical Books

Powell’s Books is humbling, and amazing. There are whole sections I’m not even smart enough to understand. I still walked out with three books, though.

I'm a book fiend

The first one is The Philosophical Programmer, which I’d never heard of, but for $6, I had to grab it.  (Yes, it’s an old library book.)  I got A Little Java, A Few Patterns because I loved The Little Schemer.  Grammatical Picture Generation is about writing tiny languages that generate fractal-type images, something I’ve been playing with recently.  And I actually bought Beautiful Visualization at the conference itself, not at Powell’s.  It’s fantastic, though; I read it the whole flight home.

OK!  Enough fawning over books, I’m embarrassing myself.

Asynchronous JavaScript

My team’s been bogged down lately by some ASPX pages with very complex javascript behavior. Somewhere between Stratified.js and Reactive Extensions for JS, there might be a way to tame them.

Stratified.js introduces new language constructs into javascript to implement concurrency semantics. I’m not 100% clear on the semantics themselves — they bear looking into further, but they don’t seem terribly complicated. The part I thought was neat was how they’re implemented in all browsers, even geriatric IE6:

<script src="stratified.js" type="text/javascript"></script>
<script type="text/sjs">
  /* your code here, including new syntax */
</script>

Notice the type attribute of the second script?  Once the page is loaded, Stratified.js loads all scripts of type “text/sjs”, and does some source transformation, turning the new constructs into (I’m guessing) gnarly, but standard, javascript.

Reactive Extensions for JS comes from open source’s best friend, Microsoft. The gist is this: asynchronous coding with call-backs is hard, but if you treat events (from the user, from ajax HTTP, or whatever) as a collection that you can subscribe to, and you can map and filter those collections with anonymous functions, it’s easier. We’ll have to see. The speaker, Erik Meijer, gave a pretty similar talk at MIX.

Badges, with Ribbons

my badge

They took some flak for the ribbon color-text, especially for the desperate perl hackers, but they were pretty good about it. They even asked what ribbons we’d like to see next year, so we don’t have to customize quite so much.

Inspiration and Awesomeness

The world is full of inventive, stubborn people doing really cool things to make the world better. One project helps microfinance banks run smoothly. Arduino and Plumbing are making hardware hacking accessible to whole new audiences.  OpenSETI wants to involve programmers more in finding out whether we’re alone in the universe.  Code for America can help our government be more efficient and transparent.  If you ever wanted to start contributing to open source, joining any of these projects would be a great start.

Pretend You Were There!

Or re-live the experience, if you were!  Here are the keynotes on YouTube, and photos on Flickr.

Disable Your Links, or Gate Your Functions?

It’s pretty common to disable links and buttons that cause updates, so those updates don’t happen twice, and re-enable them when the update has finished.

At work, our app’s links are usually wired to javascript functions that use jQuery to scrape the form data and post it to web services via ajax. We normally disable links and buttons something like this:

var updateLink = $('#updateLink');  // Find the link. {       // When it's clicked...
   updateLink.disable();            // disable it...
   $.ajax({
      data: getFormData(),          // ... & send the form data
      url: 'http://someWebService', // to some web service.
      success: function(results) {  // When the service
         if (results.hasErrors) {   // finishes,
            showErrors(results);    // show any errors,
            updateLink.enable();    // and enable the link
         }                          // so they can try again.
      }
   });
});
We added those enable() and disable() functions to jQuery — they just add or remove the disabled attribute from whatever they’re called on. But it seems Firefox doesn’t support disabled on anchor tags, like IE8 does, so we couldn’t stop the repeat-calls that way.

We got to thinking, what if the link always called its javascript function, but the function could turn itself off after the first call, and back on after a successful ajax post? That led to this:

function makeGated(fn) {
   var open = true;
   var gate = {
      open: function() { open = true; },
      shut: function() { open = false; }
   };
   return function() {
      if (open) {
         fn(gate);
      }
   };
}
makeGated takes your function, and wraps it in another function, a gate function (it “makes your function a gated function”). When you call the function it creates, it will only call your function if the gate is open — which it is, at first. But then, your function can decide whether to close the gate (that’s why the gate is passed to your function). You could use it like this:

var updateLink = $('#updateLink');  // Find the link. (                 // When it's clicked, call
   makeGated(function(gate) {       // the gated function, which
      gate.shut();                  // shuts the gate...
      $.ajax({
         data: getFormData(),       // ...same as before...
         url: 'http://someWebService',
         success: function(results) {
            if (results.hasErrors) {
               showErrors(results);
      ;       // Open the gate
            }                       // so they can try again.
         }
      });
   })
);
We dropped this in, and it worked pretty much as expected: you can click all you want, and the update will only fire once; when the update completes, it’ll turn back on.

The downside? Since it doesn’t disable the link, the user has no idea what’s going on. In fact, since the closed-gate function finishes so quickly, it seems like the button’s not doing anything at all, which might even make it look broken.

So we chucked it, and hid the links instead. It’s not as nifty, and it’s not reusable, but it’s clear enough for both end-users and programmers who don’t grok higher-order functions. Even when you have a nice, flexible language, and can make a sweet little hack, it doesn’t mean the dumb approach won’t sometimes win out.

Where the Abstraction Leaks: JavaScript’s Fake Arrays

 { 'yo' }
# gives:
["yo", "yo", "yo", "yo", "yo"]

# Closures, too!
i = 0 { i = i + 1 }
# gives:
[1, 2, 3, 4]

I tried to recreate this in JavaScript:

new Array(5, function() { return "drip"; });
// gives:
[5, function() {
    return "drip";
}]
Oops! I guess the Array constructor works differently in JavaScript. No worries, we can just call map on the new array.

new Array(5).map(function() { return "drip"; });
// gives:
[, , , , ]

…um, what? Shouldn’t that be ["drip", "drip", "drip", "drip", "drip"]? If I call new Array(3), I should get a brand new array, with 3 slots, all set to undefined; and I should be able to map over it, and fill up the array.

Let’s see what its elements are:

var array = new Array(5);
array[0]; // undefined, as expected
array[1]; // also undefined

So far, so good. What arguments are passed to the function?

function printAndDrip(arg) {
    console.log(arg);
    return "drip";
}
new Array(5).map(printAndDrip); // prints nothing, and returns [, , , , ]

It looks like the printAndDrip function is never being called, almost like the array has no contents.

Let’s try setting a value manually, then mapping:

array[2] = "hey there"; // [, , "hey there", , ], as expected;  // prints "hey there", and returns [, , "drip", , ]

So, it only calls the function for values we’ve manually put there. Maybe map doesn’t call the function if the value of a slot is undefined? I know, I’m reaching here…

array = [1, undefined, 2];;

/* prints:
1
undefined
2
then outputs:
["drip", "drip", "drip"]
*/

So it does call the function for undefined values! Then why didn’t it in our newly-created array?

This is when it hit me, and it’s a funny JavaScript fact that I always forget: JavaScript has fake arrays.

They’re actually closer to hash tables, whose keys are numbers. ["zero", "one"] is just syntax sugar: it creates an object with two properties, named 0 and 1; 0 points to “zero”, and 1 points to “one”.

// pretty much the same:
var arrayLiteral = ["zero", "one"];
var objectLiteral = { 0: "zero", 1: "one" };

Apparently, if you use the new Array(10) constructor, it creates an array with length 10, but with no named properties.

We can see the properties an object has with the hasOwnProperty method, so we can use that to test our hypothesis.

var emptyArray = new Array(10);
emptyArray.hasOwnProperty(0); // false
emptyArray.hasOwnProperty(1); // false

var fullArray = [1,2,3];
fullArray.hasOwnProperty(0); // true
fullArray.hasOwnProperty(1); // true
fullArray.hasOwnProperty(99); // false: gone past the end

So where does that leave us? Nowhere, really. At least I’m a little clearer about JavaScript’s fake arrays. Imitating Ruby’s Array constructor is pretty much out; it’s easy enough, though a bit unsatisfying, to hand-roll our own:

Array.filled = function(n, fn) {
    var array = [];
    while(n-- > 0) {
        array.push(fn());
    }
    return array;
};

Array.filled(5, function() { return "drip"; });
// gives:
["drip", "drip", "drip", "drip", "drip"]

Perhaps the folks working on the new JavaScript standards can put in a line-item about initializing Arrays with all the right numbered slots, and that’ll be unnecessary.

While writing this post, I used the JavaScript Shell 1.4 in Firefox 3.6.3 on Windows 7. I also redefined Array.prototype.toString to display JavaScript arrays the way you type them.


Display JavaScript Arrays the Way You Type Them

This is one of my favorite javascript tricks, because of its effort-to-payoff ratio.

Problem: the default Array.prototype.toString hides any nested structure.

[1, 2, 3, 4, 5].toString(); //-> "1,2,3,4,5"
[1, 2, [3, 4], 5].toString(); //-> "1,2,3,4,5"

Solution: override Array.prototype.toString.

Array.prototype.toString = function() {
    return '[' + this.join(', ') + ']';
};
[1, 2, 3, 4, 5].toString(); //-> "[1, 2, 3, 4, 5]"
[1, 2, [3, 4], 5].toString(); //-> "[1, 2, [3, 4], 5]"