Or: “The Information: A History, a Theory, a Flood. A Review.”

Oh, I was nervous about this one.
It looked so good! Such an inviting cover, such a broad and eternally relevant topic. The Information. Rational, dispassionate. Ordered. And, “A Flood”: I half-hoped it might talk about information overload. (It does, a bit.)
But so many pages. Could I justify another pop-sci book on my To Read stack? Could I justify the time? Would it be fluff? or a difficult slog? If I’m reading for fun, I don’t want it to be harder work than what I already do for, you know, work.
Wasted worry.
The Information is a layman’s introduction to Claude Shannon’s information theory. It covers a lot of ground, and while it can be a bit slow in parts, it’s enjoyable. As a programmer, I was aware of information theory, a little, but not very clear on what it was all about, or what it was for. I was pretty sure it was lurking around behind compression, and probably behind positional numbering systems, especially the way they can look like dimensionality if you squint the right way, like with chmod permission bits. The Information filled in a lot of gaps for me, and showed me bridges into other fields I hadn’t expected.
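(Here’s the chmod thing made concrete, since it’s easy to show. This sketch is mine, not the book’s: each permission bit is its own yes/no dimension, and the octal digits are just a positional encoding of three of them at a time.)

# My sketch, not the book's: unpack a chmod-style mode into its bits.
# Each octal digit is a position (owner, group, other); each bit inside
# a digit is an independent yes/no axis (read, write, execute).
mode = 0o754                                 # i.e. rwxr-xr--
for who, shift in [("owner", 6), ("group", 3), ("other", 0)]:
    digit = (mode >> shift) & 0b111          # one positional digit
    flags = "".join(
        letter if digit & bit else "-"
        for letter, bit in [("r", 0b100), ("w", 0b010), ("x", 0b001)]
    )
    print(who, flags)                        # prints: owner rwx, group r-x, other r--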
Some teasers:
Gleick describes African talking drums as a way to illustrate information redundancy: two drums, high- and low-toned, mimic the tonal spoken language; drummers use long, flowery phrases to clarify ambiguity. He talks about how written language abstracts thought, and the invention of the dictionary.
He explains how information is like uncertainty or surprise. Given a string of symbols (letters, music notes, numbers, bits), how easily can you guess the next one? If a torn piece of paper says “Kermit the Fr,” you can infer what was torn off. If I say “I got a Big Mac and fries,” you can guess where I went for lunch; my adding “…at McDonald’s” doesn’t help you much – it doesn’t add much new information. (To explore this point, Claude Shannon had his wife repeatedly guess the next letter in sentences from a detective novel.)
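If you want to play Shannon’s game yourself, a crude version is easy to sketch. This is mine, not Shannon’s actual procedure: count how often each letter follows each other letter in a scrap of text, then measure how surprising the next letter is on average. The more guessable, the more redundant.

from collections import Counter
from math import log2

# A toy version of the guessing game (mine, not Shannon's): how many bits
# of surprise does the next letter carry, once you've seen the letter before it?
text = "kermit the frog says i got a big mac and fries at mcdonalds"

pairs = Counter(zip(text, text[1:]))         # (previous letter, next letter)
prev = Counter(text[:-1])                    # how often each letter appears as "previous"

surprise = 0.0
for (before, after), n in pairs.items():
    p = n / prev[before]                     # P(next letter | previous letter)
    surprise += n * -log2(p)                 # bits of surprise, weighted by frequency

print(surprise / (len(text) - 1), "bits per letter, given the letter before")
# With a real book instead of one sentence, this lands well below the ~4.7 bits
# you'd need if every letter were equally likely. That gap is the redundancy.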
Gleick talks about information theory’s relationship to entropy. A closed system has fixed energy, but the energy dissipates: it spreads evenly throughout the system, and we can’t use it to do work. If we could re-order the energy, collect it, sort it, we could reverse entropy. Information is work.
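The bridge is that one formula measures both kinds of disorder. This isn’t from the book, just the textbook definition, H = -Σ p·log₂(p): spread everything out evenly and entropy is at its maximum; collect it all into one place and it drops to zero.

from math import log2

def entropy(probs):
    # Shannon's formula: H = -sum(p * log2(p)) over a probability distribution
    return -sum(p * log2(p) for p in probs if p > 0)

spread_evenly = [1 / 8] * 8                  # dissipated evenly over 8 states
collected = [1.0] + [0.0] * 7                # gathered into a single state

print(entropy(spread_evenly))                # 3.0 bits: maximum disorder
print(entropy(collected))                    # 0.0 bits: fully ordered, fully known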
Information is also related to computability. Sometimes, the best way to store a message is to store an algorithm for computing it.
This, in particular, is something I’d noticed. Which of these is a better way to send a smiley face? This one?
[The image: a 250-by-250-pixel black-on-white smiley face.]
Or this one? (It’s a Processing sketch.)
size(250, 250);
background(255);
noFill();
smooth();
stroke(0);
strokeWeight(5);
ellipseMode(CENTER);
ellipse(125, 125, 200, 200);
ellipse(100, 90, 10, 10);
ellipse(165, 90, 10, 10);
arc(125, 125, 100, 120, 0.2, PI - 0.2);
The first is a 2D grid of pixels. The second is the code to render it: an explanation of the steps to reproduce the image.
Which is better? Which is better for making an exact copy of that image? A checkerboard, 250 squares on a side, 250² = 62,500 squares in total, and a listing of which ones should be white (about 93% of them), which should be black? Or 11 lines of text – just 222 characters? Say you had to write the message down on paper and mail it: would you rather write a list of 62,500 numbers, or 11 lines of code? What would the message’s recipient have to know to reproduce the image, exactly? Pixel-for-pixel?
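Out of curiosity, you can put rough numbers on that. This sketch isn’t from the book, and it stands in a plain ring for the actual smiley, but it compares a bit-packed grid, the same grid run through a general-purpose compressor, and the character count of the code:

import zlib

# Rough numbers, not from the book. A simple ring stands in for the smiley;
# the exact picture doesn't change the point.
SIZE = 250
grid = bytearray(SIZE * SIZE)                # one byte per pixel, 0 = white
for y in range(SIZE):
    for x in range(SIZE):
        d2 = (x - 125) ** 2 + (y - 125) ** 2
        if 95 ** 2 <= d2 <= 100 ** 2:        # a ring roughly like the face outline
            grid[y * SIZE + x] = 1           # 1 = black

packed = SIZE * SIZE // 8                    # one bit per pixel, packed tight
squeezed = len(zlib.compress(bytes(grid)))   # general-purpose compression
program = 222                                # the 11-line sketch above, in characters

print(packed, "bytes bit-packed,", squeezed, "bytes zlib-compressed,", program, "bytes of code")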
The Information also eventually gets into DNA, genetics, and memetics. (I never knew Richard Dawkins coined the word meme!)

So. Despite being slow in parts, the book is much better, much more enjoyable, than this review. It’ll be a good bunch of hours, and it’ll give you new ways to think about things.
(Postscript: I read this book in mid-2012, and wrote this review in October 2012, but somehow forgot to publish it. Maybe it was information overload?)