After the Last Page

So much of reading a book takes place after I finish the last page. For someone still relatively new to reading books for pleasure, I find books really grow on me after I’ve finished them.

Part of it is definitely letting my brain gel on the topic I’ve just read. After I finish a book, it usually goes on my mental back burner, but I find myself making connections to what I’ve read long after I’ve put it down.

Ideally, I’d like to formalize this process a little better, by taking more time to reflect on the books I’m reading (among other things). I’ve never been a super thorough note-taker, but it seems like a good way to reflect on what I’m reading. (It also kinda feels like work to me, which is perhaps why I don’t take reading notes!)

But there’s value in this extra churning. Even if a book is kind of a slog to read, I’ll usually try my best to finish it, because I’ll often get more value out of these books after they’re done than while I’m reading them. It’s these extra connections, made with other books I’ve read or experiences I’ve had, which draw out the value in a book. I suspect the more books I read, the stronger this gets.


The Modern Prometheus

“What’s the number one killer, worldwide?” asks Jason Brennan, CEO and founder of Frankenstein, Inc, a stealth-mode startup that Speed of Light is bringing you exclusive coverage of. We’re sitting in the Geneva Lab of their Palo Alto campus, where he’s talking about his company for the first time.

“More than cancer and heart disease and malaria, the number one killer worldwide is of course death itself,” Brennan answers. “We could cure all the other diseases, but eventually humans will still die of natural causes, so why even bother curing malaria or whatever? What we’re doing is much bigger than that.” Frankenstein’s plan is kind of ingenious: users take a daily anti-death supplement to help slow, but not stop, ageing. A user death will still eventually occur, but Frankenstein has a revival device which they say is extremely successful at user revival. Web services typically measure their uptime by how many “nines” of uptime they have (e.g. 99.99% is four nines). Brennan says their revival units are good for five nines of revival odds, which is to say one failed revival in a hundred thousand.

“My mother always told me about money, ‘you know you can’t take it with you when you go.’ Her solution was to enjoy your money and be charitable while you can,” Brennan says with a smile, “but I’d rather just not die in the first place.” Brennan says he’s doing this by following his mom’s advice, funding Frankenstein with the vast majority of his personal wealth. “But I’m still charitable; I’ve donated lots to teach kids Javascript, there are just so many jobs out there still, so what better way to help the kids.”

Brennan seems either unaware of or unconcerned about the irony when asked about his startup’s namesake: “I mean everyone’s seen a Frankenstein movie, but I like to think our approach is a little more civilized.” When asked how it compares to the book, he said he “[hasn’t] read the book yet, but it’s on my list. I heard it’s written by a woman too which is good because I’m trying to read a few books by women, you know?”

Frankenstein is still in private testing for now, but plans to launch a public beta this winter in Europe. Despite their challenges, Brennan is excited. “We think the launch is going to be out of control. We think it’s going to be a runaway hit.”

Don’t Terraform Mars

Yesterday, Elon Musk unveiled SpaceX’s spectacular vision of interplanetary space travel and the colonization of Mars. Their video, while dazzling, is scant on details (which, as visions go, is fine), but it’s the detail at the very end of the video which leaves me unsettled: the terraforming of Mars.

I think terraforming Mars (that is, altering a planet’s climate to be similar to Earth’s, with breathable air and bodies of open water) would be a huge mistake. Yet if you look around the tech world, hardly anyone is even questioning it.

SpaceX’s vision suggests, without displaying even a cursory amount of thought, that we should dramatically and irreversibly alter the fundamental climate dynamics of an entire other planet. Mars has plenty of water locked in ice; we just need to warm the planet up and, bingo bango, we’ll have lots of liquid water to splash around in.

This is bad for two reasons:

First, we don’t yet have a very good track record of building an advanced technical civilization that doesn’t totally ruin the environment of a planet (e.g., Earth). I’m thrilled Elon Musk works on electric cars and solar cell technology. Both technologies are necessary for an environmentally friendly technological civilization, but neither is sufficient for one. We need much more: a deeply instilled respect for environmental preservation, new systems of government, and (crucially) education to help populations thrive in new frontiers. There’s probably a lot more I can’t even think of, which brings me to…

Second: hubris. It’s incomprehensibly hubristic to think terraforming another world is a mere technological detail to be glossed over and figured out later. We can build space-faring rockets; what’s so hard about radically overhauling a climate? The hard part isn’t so much the physical alteration of a planet (we’ve managed to do that quite well on Earth, and we didn’t even have to think about it!), but how to think about altering a planet. We’re not enlightened enough to deal with that yet.

I am in full support of exploring our solar system. I think it’s crucial to our learning as a species, as representatives of Earth. We stand to gain so much by exploring new worlds: learning where we came from, learning whether we have siblings among the stars. And eventually, yes, I hope we’re ready to one day thrive on new worlds, but we have so many questions to answer first.

While we do have some international law governing what nation states can do in space:

outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means

we don’t have much precedent for companies attempting to claim ownership of celestial bodies.

What makes us entitled to the rest of the solar system? Is it ours to do with as we please? Is it our manifest destiny? To let our capitalism, which has thus far ravaged our home planet, extend endlessly into the vastness of space, pillaging ever more worlds?

As usual, Carl Sagan implores us:

What shall we do with Mars?

There are so many examples of human misuse of the Earth that even phrasing this question chills me. If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes. The existence of an independent biology on a nearby planet is a treasure beyond assessing, and the preservation of that life must, I think, supersede any other possible use of Mars.

I don’t have answers to these questions, but we desperately need to explore them before we start fucking up other planets. They are not a technical detail to be figured out later; they are among the most important questions our species will ever ask.



Dear Old Friend,

Have you ever done a thing and then winced at the very thought of it, basically as soon as you’ve done it and then forever after? That’s what I do, all the time. It’s fun, you should try it.

I sent you a message a few minutes ago, and in my head I was like, “Oh hey, I’ll just make it really short and peppy and that’ll be good,” thinking to myself how it’d been a long time and so I didn’t want to send you a long diatribe masking anything. I’d just be all aloof and that’d be an easy way to start a conversation.

But oooh, there’s that embarrassment creeping up on me.

The internet is so tremendously weird. It’s lovely and it’s terrifying all wrapped up into one big mess.

I wish catching up with people on the internet was more like the Dandy Warhols song (“A long time ago, we used to be friends”… I know the song is more about moving on, but it’s catchy and fun, whatever) and less like “I’m lonely and it’s Friday night and we used to be friends, so let’s ‘Connect’ on Facebook.” Bleh.

Is there a nice middle ground that doesn’t involve one person sending the other a longish message out-of-the-blue? (oops) Or that doesn’t feel like bad nostalgia? Probably not.

Anyway, I was thinking to myself lately about how I’ve really connected with exactly five people, ever, in my life, people with whom I’ve had regular, honest conversation, and that’s one of my favourite things (you’re one of those people, of course).

I’m guessing there’s like a 90% chance this message is just going into a void somewhere. Or maybe one of your distant descendants will discover it one day, some kind of Indiana Jones-like character spelunking around the internet, trying to discover relics of the ancient past. Sorry, if that’s the case.

More sincerely,

Jason

You Don’t Have to Buy an iPhone Every Year

When I was a broke university student, I used to look toward the future, when I’d be a well-paid software developer. I thought to myself, that’ll be great, because I’ll be able to afford a new iPhone every single year! That’s what All True iOS Developers do, right? If you read the Apple blog/Twitter world, that’s certainly what you’ll hear. We buy a new iPhone every year; that’s what we do.

I’ve been hearing a lot of grumbling about the impending iPhone 7 and its supposed lack of a headphone jack. John Gruber jacked off about it last week, and lots of people are talking about it. Ugh, that’s really going to suck if they get rid of it, right? What am I going to do if I can’t use my headphones?

Here’s a suggestion I can’t believe I have to make: maybe don’t buy the new iPhone? I mean, if you’re an iOS developer, presumably you’ve got a fairly recent model already… there’s no real need to buy another one, especially one you seem a little sad about.

I never ended up buying a new iPhone every year, either. So far I’ve been getting one every two years. By this logic, my iPhone 6 would be up for replacement with this year’s iPhone 7, but now we’re at the point where this two-year-old model is so good, even today, that I feel no need to replace it. It’s still mighty fast, and it has a great camera and great battery. It’s a perfectly good device; replacing it would be a waste.

And that’s the other thing, too. It’s a waste of money to get a new phone every year, but it’s also a waste of resources (do you really need 5 iPhones sitting in their boxes, collecting dust?). It’s hard on the environment, and, I dunno, rampant consumerism just doesn’t seem like a great thing, either. I’d love to get 5+ years out of a phone, wouldn’t you?

So, if the idea of losing a headphone jack on your phone seems unappealing to you, remember that you don’t have to buy it.

Amusing Ourselves to Death

Over the weekend I re-read Neil Postman’s fantastic Amusing Ourselves to Death, which I can’t say enough good things about. Seriously, this book is about as Jasony a book as they come, and no doubt a large influence on what makes me Jasony in the first place (previous post about the book).

If you haven’t read the book (shame on you), it’s essentially about how media shape the kinds of public discourse we have (specifically politics, current affairs, and education), and how America’s shift to a predominantly television-centric country diminished its ability to have serious conversations about these issues.

Postman argues public discourse in America was founded at a time of pervasive (book) literacy. The medium of print entails memory: arguments can be complex and built up over pages, chapters, and volumes; the reader must take time to think, process, and remember what they’ve read; books allow us to learn the great ideas of history and of our current society. There were (and still are) plenty of junk books, but books and print supported well-argued, serious discourse as well.

Conversely, in television we find a medium of entertainment. As with print, there is much junk content on TV, which is just fine. The problem, Postman argues, is when television tries to be serious, because it fails in spectacular ways. Television is an image-centric medium, and as such it’s impossible to have complex, rational arguments for or against anything. Think about how dreadfully boring a “talking head” is on TV news, and those usually only last for a few minutes at a time!

Where print requires you to remember, television requires you to forget. Instead of long, coherent discussion, you have a series of images strewn together which are almost meaningless. In his chapter “Now…this,” Postman looks at TV news as an example of this. Most news segments last about 60 seconds and are placed in an incomprehensible order. A devastating mass murder, now a political gaffe, now a car recall, now unrest in the Middle East, now an advertisement for retirement savings. Not to mention immediately following the news is Jeopardy.

Amusing Ourselves Today

“But Jason!” I see appearing in a thought bubble over your head, “the book was published in 1985, when television was the medium in America, but these days it’s been displaced by app phones and the Web. Is this book still relevant in 2016?” Absolutely, unequivocally, yes.

The good news is, some software allows for interactivity and personal agency. Through email, blogs, and forums (i.e., written word), we can have complex, well-reasoned discourse (I said can). We can even improve some of the shortcomings of the printed word, by pulling in various sources via links, by including images and interactive, responsive diagrams and graphics, and by collaborating with many people around the world.

Software does not require us to sit quietly, mouth agape, awaiting amusement. But today’s software does ask us to do so, relentlessly.

Much of what we do with app phones is largely incoherent. I’ll read an email from a friend, now I’ll check Twitter, now I’ll check Instagram, now I’ll write some code. And too often, even just within one of these apps it’s all incoherent. First, remember that for the overwhelming majority of software users, today’s social software is “what you do” with a computer or phone; Facebook is the computing experience for many people. And within an app like Facebook or Twitter or Instagram, you have a series of things strewn together in a “feed.” An article about Donald Trump, now your cousin’s baby’s 2nd birthday, now (lol) a video of this goat that faints when it’s scared, now hey cool an ad for Chipotle.

Or take Instagram, for example. True, you’re consistently getting images, but that’s about it. There’s no room for discourse on Instagram. Image dominates, and the strongest message you can really send is a “like.” There is little space for discussion, and what discussion there is is largely irrelevant anyway. Instagram shows; it does not discuss.

Books and Beyond

My interpretation of Amusing Ourselves to Death is that its thesis goes beyond books and television, and focuses more on how media relate to discourse. It’s not to say that the printed word is some kind of ultimate medium for discourse, just that it’s presently much, much better at it than television is (and, I think, better than most of our software, too). There’s nothing wrong with media that entertain us; the problem is when a medium only entertains us and is incapable of hosting cogent conversations about anything else.

That problem is just as important today as it was 30 years ago.

The Lost Art of Instant Messaging

All throughout middle school, high school, and much of university, MSN Messenger was the place for me and my friends to socialize online (if you’re my age but grew up in America, chances are you can replace MSN with AIM). MSN was an instant messaging system. You had a contact list, online/away/busy/etc. statuses (with custom status messages), and usually had one-to-one chats (although you could chat with multiple people, too).

You knew your friends were available to chat because they had their status indicated. An “online” status meant there was a good bet that if you messaged them, you’d get a response rather quickly. “Away” meant they were logged in, but probably not at their computer. “Busy” meant they were present, but didn’t really want to be disturbed. These weren’t hard and fast rules (someone could appear as any status but still be present anyway, and vice versa), but you generally felt a sense of presence with your contacts. You at least knew what to expect, generally, when you messaged somebody.

These days, it seems like Instant Messaging, as a concept, has largely vanished. In its place we have things like iMessage and texting (I’ll admit, I don’t have a Facebook Messenger account. Do a lot of people use this?), but we lose a lot with them. Sure, iMessage means you can send a message whenever, but you also lose the feeling of presence you got with IM.

Because there’s no concept of “online” or “away” (etc.), you have no idea if the other person is available to chat at the moment. Where IM chats often felt engaging while both people were online, iMessage “conversations” feel sporadic, like a slow trickle of words back and forth. Sure, sometimes you do have bouts of back-and-forth messaging with iMessage, but more often than not a message is a shot in the dark (consider how gauche it is to text somebody “brb” or “gtg”). The expectation is the conversation never really ends, but in fact, it never really starts, either.

And who knows, maybe this is just me. Maybe everybody uses Facebook Messenger, or maybe everyone else just has more engaging friends they text or iMessage. These days I use Google Chat and IM with literally two people, ever. But I really miss having nice long conversations with my friends.

What about you? Do you have engaging conversations over iMessage / texting? Does everyone just use Facebook Messenger (or another IM service)? Or is it really a lost art?

Sorry Not Sorry

“You’re Canadian? You don’t have much of an accent,” people tell me when they find out where I’m from. It’s true, I’m from New Brunswick, Canada, but I’ve never had much of an East Coast accent, and much of what I did have has faded since I moved away from home a few years ago. I never really minded in the early years because I was a little embarrassed by it (my home region is generally considered a little backwards by the rest of Canada), but lately I feel like I’m losing a little bit of my identity because of it.

There are many telltale signs of a New Brunswick / East Coast accent. The big tell is our hard Rs (“are are harrd Rs”), though that’s common to most of the region (I correctly identified Kirby Ferguson of Everything is a Remix as an East Coaster on his hard Rs alone). More specific to New Brunswick is our unmistakable lexicon, like “right” (pronounced “rate”) to mean “very” (“it’s right cold outside”), “some” to mean “quite” (“it’s some busy at the mall”), and “ugly” to mean “mad” (“she was some ugly when she heard the news, let me tell ya”). We drop suffixes (“really badly” becomes “real bad”), too. And I’m pretty sure we invented the “as fuck” intensifier (“it’s cold as fuck right now,” “I’m tired as fuck”) long before the internet caught on to it.

I took a linguistics class in university (which I highly recommend, by the way), and we learned about language extinction: many languages are disappearing, and we’re left with fewer and fewer as time goes on. I asked my teacher why this was a bad thing, but I kind of got a funny look (I meant the question genuinely, not in a rhetorical or smarmy way; at the time I didn’t really understand why a lack of diversity in language was so bad). I think I understand the general sentiment a little better now.

Since moving away from home, I’ve definitely lost much of what I had of an accent. When you’re not surrounded by speakers of your dialect, it feels weird using words or sounds you know will stand out to the people you talk to. My Rs have softened, my “eh”s have disappeared, and even the most quintessential Canadian word has changed: my “sorry” has gone from the Canadian “soar-y” to the American “sar-y.”

It’s a weird kind of identity crisis: either you sound normal to yourself but weird to those around you, or you sound weird to yourself but normal to those around you. But I’m trying to reverse course by calling it out (and by watching copious Trailer Park Boys). Though the sound of the word might change, I’ll at least always say “sorry” when I bump into somebody—that Canadian part of me will never fade.

Mass Consumption and our Sense of Meaning

How odd is the juxtaposition between our mass consumption culture and the meaning of our lives? On the one hand, mass consumption gives us a perspective of the unlimited: there’s always more to consume, it’ll always be there, it’ll always replenish. On the other hand, our lives are inherently finite: you only get one childhood, you always figure out life too late, youth is wasted on the young, you’re going to die someday.

It’s kind of distressing to think about. Mass consumerism asks us to buy in (literally and figuratively) to the idea of limitlessness. It asks us to ignore, to not even think about, the fact that our lives are not at all limitless. There will be a new iPhone every year, the grocery store shelves will always be restocked, but I’m 27 years old and my childhood is long over and I’m never going to get another one.

Maybe it’s more comforting to think in the consumption mindset, that there will always be another book, another tv show to watch on Netflix, another hamburger to eat at McDonalds, a longer infinite list to scroll through. But it’s also really dissatisfying how little that lines up with my life, how much, in fact, it denies what my life is like. Consumerism doesn’t give me a frame of reference to make sense of my life, to understand what it means to age or to have a finite set of choices (and I bet looking at life as “a finite set of choices” only makes sense as a perspective because of consumption culture; we probably wouldn’t look at life as being limited without mass consumption as our default way of looking at the world).

I’m sure this is well covered in philosophy, and I’m certainly not suggesting I’m the first person to think of it. It’s just that, jeez, this sort of thing has been hitting me hard lately and I don’t know how to make sense of it.

Aliens

I wanted to expand a little bit on a tweet I made the other day about aliens in science fiction movies. There’s an opportunity in these movies to explore western society’s fears about immigration amongst Earth’s peoples (immigrants are, after all, literally referred to as aliens), but most movies don’t seem to take it.

Most movies about aliens cast them as invaders and earthlings as the heroes, defending the homeland. My friend Brian pointed out to me that these movies (and fears) aren’t about immigration but colonialism. The aliens aren’t looking to join us, they’re looking to conquer us. It’s a great point, and I think it matches up with fears many people hold about immigration, but I think it’s weak of screenwriters to pander to these fears instead of exploring them.

Science fiction is a lens we use to see ourselves and our current world; it’s a way to extrapolate and play “what if?” and see more sides to our lives than we currently see today. In stories like Brave New World and Nineteen Eighty-Four, fears of oppression through technology were explored, not celebrated.

But in many of today’s alien-related movies, the fears of being taken over by aliens are reinforced, not examined. We’ve got our guns and we’re the heroes, nobody’s gonna take our land from us, we say. Why don’t we have more movies where oh, I don’t know, the aliens aren’t invaders but are refugees? Or where the hero says “Wait, hold on, are we sure they’re actually invading? Shouldn’t we learn from them before we start blowing them up?” Whether or not people really do think immigrants are invaders looking to oppress us, it’s cowardly for alien films to not examine this.

There are a few good examples, though. District 9 is particularly on the nose about aliens with a refugee status; there are humans who see those aliens as invaders, but those humans are portrayed as villains. E.T. has aliens not as invaders or as refugees, but as explorers who wish to learn. True, E.T. is a visitor, but he’s also explicitly not an invader. Despite the titular alien being named a “xenomorph,” the movie Alien is a lot more about sexual predation than it is about invasion (the face-huggers and chest-bursters are not-so-subtle allusions to rape and its unwanted consequences). I’ve heard Alien Nation handles immigration well, but I can’t personally vouch for it. And I’m sure the theme is explored better in science fiction literature, too.

Immigration is a vital topic to pretty much everyone on this planet, yet fears of it are pandered to and reinforced in science fiction movies all the time.


PS: Yeah, maybe actual contact with actual extraterrestrials wouldn’t go so hot. They’d almost certainly be of vastly different intelligence, technical prowess, hell, even body chemistry (microbial exchanges alone could easily destroy us). They may not be violent invaders (that’s probably more a reflection of our own evolution and history than of theirs), but they’d definitely have arisen from some form of natural selection, originally. But movies with “alien invasions” are hardly about presenting scientific reality, and that’s OK. An alien movie where they come here and we all get alienpox and die probably isn’t telling a very good story.

PPS: Yeah, it’s also problematic to have actual aliens represent humans from different countries. Showing them as wholly different, often monstrously so, reinforces the view that “aliens are other,” which doesn’t help anybody.

Reclaiming #NotAllMen

Today the phrase “Not All Men” (often #NotAllMen) represents something pretty terrible. When feminists speak on the internet about the patriarchy, inevitably dudes will butt in with the phrase “Not all men!” to say, “Not all men are rapists!” “Not all men wish for inequality!” etc. I won’t go into all the details of why this is problematic because many better essays have already been written, like this one or that one.

But I’d like to reclaim this expression. I want “Not all men” to mean “I don’t want this thing to only have men.” For example, the programming team I work on currently has no female developers, so I want this team to be Not All Men, and to include women (and people of any gender, too).

I want casts of movies and TV shows to be Not All Men. I want people I see at conferences to be Not All Men. I want the CEOs and people in the news to be Not All Men.

To be clear, I know there are many women (and people of all genders) currently working very hard to achieve these goals, and I support that in every way. By reclaiming this phrase, I hope we can reinforce and help what’s currently being done. I hope the phrase can act as a reminder to us all that until we see teams of Not All Men out in the world, there’s still work for all of us to do.

Social Media Cheesecake

I’ve been thinking more about the phenomena of social media, popularity, and expectations and I’ve thought of a new metaphor:

I’ve made a cheesecake. I’m not a professional chef, but I’ve worked really hard on this one and I’d really love to share it with everyone, because everyone loves cheesecake. But nobody wants it, because they’re stuffed from all the other cheesecake (and pies and puddings) they eat all day, everywhere.

So of course this makes me sad. I worked hard on my dessert and I think it turned out great. But social media is a potluck with way too much food. And even though you’ll only really connect with the people sitting directly beside and across from you, it’s a potluck you simply must attend, because there’s so much good chow.

More Thoughts on Blogs and Conversations

The following is a mishmash of thoughts following up from yesterday’s post about blogs and conversations. The real theme of today’s post is “I don’t really know what a blog is” and “that’s OK” and “blogging will probably die” and “is it just me or are these posts getting less coherent as time goes on?”

There isn’t really a strict definition for what a blog is, but it’s safe to say a blog is usually a collection of posts about something, sorted by recency, and usually with some kind of way to subscribe (RSS or Atom, or these days Twitter / Facebook feeds). The form of blogs is always kind of undulating, evolving, following the people (see The Awl’s The Next Internet is TV about this).

So blogs end up less like books and more like news or other periodicals. Yes, the blogs I’m talking about are personal blogs, not tech “news” or what you’d typically think of as a periodical, but they are based on time. You either come to a blog because you saw a link to it (where else but on some sort of time-based stream like Facebook or Twitter), or you come to a blog to see what’s new (maybe from a time-based RSS reader).

The medium of the blog is all about time. Thus its content is shaped around time. That’s why so many blog posts are about current events, that’s why it feels like blogs should foster better conversations, and that’s why it’s so frustrating they really don’t.

I don’t really know what my website is all about. Maybe it’s my web diary, maybe it’s a place for public pontifications. But definitely, at some level, I’m putting ideas out into the world because I care what people think about them. At some level, I want to spark something in you, the reader. I hope what I write tickles some part of your brain so you think and, ideally, respond (maybe this is fundamentally manipulative, though? There’s another post idea for the future).



Blogs and Conversations

Recently I’ve been going through Patrick Dubroy’s excellent blog archives and I stumbled upon a post titled “Blogging is the hardest ‘conversation’ I’ve ever had” which really resonated with me. Pat said:

Yesterday, after writing my post in reply to Atul, Aza, and co., I was thinking about how much work it is to put together a post like that. You often hear people refer to blogs as a “conversation”, but if that’s true, it’s more work than any type of conversation I’ve ever had.

Compare it to other kinds of group conversation we can have on the internet:

  • IM, IRC, etc.
  • Twitter and FriendFeed
  • wikis (not all wikis are really conversation-friendly, but the original wiki certainly is)
  • email, discussion forums, blog comments

Writing a blog entry in response to someone else’s is far more difficult than any of those. Partly, it’s because blogging is often slightly more structured and polished than the other methods; but there’s also a lot of overhead in the actual act of writing a post.

This has definitely been my experience too. Trying to stitch together quotes and links to other blogs is incredibly tedious and error-prone. And if you use a format like Markdown, making sure you’ve got the quotes, lists, and links properly copied over is just that much harder. Everything’s so fiddly. Is it any wonder almost nobody does it?

When I started my website in 2010, I was really excited to jump into writing on the web. There were blog conversations all over the place: somebody would post something, then other blogs would react to it, adding their own thoughts, then the original poster would link to those reactions and respond likewise, etc. It became a whole conversation and I couldn’t wait to participate.

But I’ve never really had much of a conversation on my website. I’ve reacted to others’ posts, but I’ve never felt it reciprocated. I never felt like I was talking with anyone or anyone’s website, but more like I was spewing words out into the void. Some people definitely enjoy what I write, some agree and some even disagree with it, but the feedback has always been private, there’s never been much public conversation.

And I get it. Like Pat said, the interface to blogging doesn’t really encourage conversation, which makes blogging feel anti-social and lonely. My guess is blog comments were a way to make things feel more social, less isolated, but unless a lot of thought is put into them, comments become a total shitshow almost immediately (see Civil Comments, a promising attempt at fixing this). RSS lets readers subscribe to your posts, but you have no relationship with these people; ideally you want your readers to be peers so you can read their blogs, too.

There’s a lot of talk about the death of blogs, and it’s easy to understand why. Blogs are a lot of work to set up, they’re often fiddly to get right, people feel an urge to put out their best selves, and they have a terrible interface for being social. Not to mention how terrible writing on a touch screen is.

Luckily, there are still a few of us nuts around writing on the web, who don’t really care if “blogs are dead” or not. But we sure could use some company.

Software Development Stage of Dread

Sometimes I’m working on a particularly difficult task: maybe it’s a bug I can’t quite squash, or a feature I’m a little stuck on. But sometimes, when I get to that hard part, instead of hunkering down on it, my brain says “oh well, time to go see what’s on the internet!” This is the Dread stage of software development.

Between you and me, the logical part of my brain knows, yes, this is a bad path. When I encounter a hard problem, skipping off to the internet is the last thing that’s going to help me. But obviously there’s a compulsion in there that makes me do it.

This is pretty much procrastination 101, where I don’t want to do the hard thing, so I go do the easy thing instead. But I think it’s also compounded by working from home all the time: I don’t really head to Twitter to see cool links, but instead to hear from people. That’s unfortunately one of the messed-up parts of Twitter: humans are mixed in with brands, and everyone seems to be linking off to something they find interesting; there never seems to be a lot of human conversation (other than impossible-to-follow shouting matches).

I’m not trying to excuse heading off to the internet, but I am trying to understand why I do it, because I’m hoping that will help me stop doing it.

This Dread stage only gets worse as time goes on: the less I focus on the hard problem, the harder it becomes. So the “obvious” solution is to keep a longer focus on the problem (easier said than done). But the underlying solution, I think, is to feel more engaged with the problems I’m working on. While I find working for Khan Academy to be immensely fulfilling, every app has its share of mundane bugs and features. I need to remind myself that, yes, maybe this random UI bug feels pointless, but it’s in service of a greater goal (helping millions of learners have access to a free education). It’s really hard to see that when I’m a developer staring at code that could be in any app: this isn’t just a random bug, it has positive impact far beyond itself.

It’s so easy to get lost in the minutiae of everyday hard problems, and it’s so hard to remember, sometimes, why I bother. But I think it’s worth it in the end.

Programming is Performance Art

I heard this idea years ago (and naturally, can’t remember where), but it’s been in my mind ever since: programming is performance art. I’m not talking about the act of programming per se (although that could also be considered a performance), but that the result of programming is performance art.

Chances are, the things you and I program today won’t exist as programs in even just a few years. OS APIs, platforms, dev tools, even hardware, all continuously change, so much so that today’s apps will soon enough start to rot. It’s hard to use a piece of software unchanged for more than 5 years; more than 10 is almost impossible.

Software is not a medium that preserves itself. Old software is best preserved in writing, pictures, and movies (media whose own digital formats are still subject to rot, but it seems at least less so), but rarely can you directly execute the software itself. You can watch a video of Doug Engelbart’s oNLine System but you can’t play with the software itself (thankfully you can play with a Xerox PARC Smalltalk system, though).

There are some workarounds, but they’re rare. Writing for the web browser seems to be a good way of achieving some degree of longevity (Javascript in browsers seems to be quite stable, but maybe the dev tools aren’t). Writing and maintaining one or more layers of virtual machines seems another route, although I worry that’s just shifting the problem down a level of abstraction. I’m sure there are other solutions (ship the platform with the app?), but these are exceptions: the way software exists today is temporary.

The main way to prevent software from rotting, it seems, is to maintain it: update it so that it continues to work as the platforms supporting it change underneath. In this sense, though, it’s not the same software you started with, as it’s continuously changing. You can’t step in the same river twice, they say.

It seems this is the way software is meant to be: a thing that exists, for a time. Software is not a book or a painting, software is a Broadway matinée or a parade. It may happen more than once, it may go by the same name, but every time it’s different.

A Refreshed Speed of Light

I’m still in meta post land, and today I wanted to briefly touch on the slight redesign of my website (if you’re reading this in a feed reader, take a sec and poke around the real site). Here’s what’s new:

  1. Boosted the type size way, way up. I’d been meaning to do this forever, but a recent essay about accessibility tipped me over the edge. Everyone can read big type, but not everyone can read small type, simple as that.

  2. At the same time, I lightened the look of the page a bit: gone is the heavy black border around the page; instead I’ve got a lighter border, which feels representative of the old look, too, without weighing the page down.

  3. Similarly, I moved the giant masthead below the first post. When you come to a post, you probably don’t give a crap about the name of the site, and instead just want to start reading. If you really want to “click to go home” at the top of the page, you can still do that anyway; there’s a big invisible space at the top that’s a link to the homepage.

  4. I got rid of the responsive jazz. When I last redesigned the site, “responsive” sites were all the rage, and I used a column-based CSS framework. It was nifty, but ultimately way overdoing what is essentially a one-column website. Now that column is centered. Finally. (There’s a rough sketch of the new approach after this list.)

  5. The site should still look great on mobile (where the design has become even lighter, and finally, a Futura Condensed Extra Bold masthead on iOS!).

  6. I fixed the El Capitan bug where all the type looked bold (wtf, Safari?). I would have fixed this sooner, but I have yet to upgrade my machine, and I was honestly hoping Apple would have fixed the bug by now. Ah well, fixed now.
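
For the curious, here’s roughly what those changes boil down to in CSS. To be clear, this is just a minimal sketch of the approach described above, not the site’s actual stylesheet; the selector names and exact sizes are made up for illustration:

    /* Sketch only: selectors and sizes invented for illustration. */

    body {
        font-size: 22px;          /* item 1: big, readable type */
        line-height: 1.6;
    }

    .page {
        max-width: 36em;          /* item 4: a single column, no framework... */
        margin: 0 auto;           /* ...centered with a plain auto margin */
        border: 1px solid #ccc;   /* item 2: a lighter border, not heavy black */
    }

    .home-link {
        display: block;           /* item 3: the big invisible link back home, */
        height: 100px;            /* sitting above the first post */
    }

The point being: a single centered column needs about three lines of CSS, which is a lot less to fight with than a whole responsive grid framework.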

That’s essentially it. Most of the changes are relatively small (except for the type, which is relatively big), but I think it makes for a much more readable experience.

If you find any problems or have any feedback, please do let me know!

How I Write Every Day

Yesterday I talked about my guidelines for writing every day, and today I want to talk about how I write every day. As I mentioned yesterday, regularity without rigid rules has been pretty key for me, but how to actually go about doing this wasn’t really clear to me until I gave it some thought.

In terms of physically doing the writing, I usually do it every morning before work and then publish more or less immediately after (let Twitter be your copy editor!). Writing first thing in the morning has worked really well for me because my head is mostly clear when I first wake up. I try to stay off Twitter / social networks before I get started, because they often pollute my head (sadly this is true any time of day) and make it harder to focus on what I’m trying to say.

Each post takes me around half an hour to write, depending on how big the topic is and how much of a groove I’m in (as mentioned yesterday, this has gotten easier over time, but I still struggle from time to time).

This groove is something I strive for, and it’s made easier by obsessively thinking about what I’m going to write before I start typing it out. This is your standard “literally walk around outside with the idea in your head / shower thoughts” sort of thing, but I find it helps me explore points I want to make in the post. As I’ve mentioned before, there’s no real “true form” of the idea (what’s in my head and what gets written are different), but thinking about the idea before writing it definitely helps. And because I write one post per day, that means I get about one day to pick an idea and let it bounce around my head before I write about it.

The ideas, which I keep in a todo list, tend to come from three primary sources:

  1. My idle thoughts while going for a walk, riding the subway, doing the dishes, or writing other posts. I tend not to listen to music or podcasts while doing these activities and instead let my time be my time (i.e., don’t kill time).

  2. Conversations with people. Jeez, this is a great way to get ideas: take them from your friends! But more seriously, riffing with someone is a great way to explore ideas. (I wonder, what would a writing medium look like if it was based on riffing with people?)

  3. Reactions to things I read elsewhere, be they books or posts, or industry trends (in my head, many of these posts start with “I got a lot of problems with you people!” in George Costanza’s voice). Sometimes I rant, but often seeing or reading something inspires a little nugget of an idea, which eventually grows into a post.

When I have an idea for a post, I try to write it down as soon as possible (I embarrassingly forget them sometimes) and leave any notes I can think of on the subject, so I’ll have something to start with when I revisit it.

That’s about all I can think of for my writing process. It’s not perfect, but it’s been working well for me. Though I’m writing mainly to get the ideas out of my head, I try my best to write accurately, to not assert anything I’m unsure of, and to note when I just plain don’t know what I’m talking about. I don’t want anyone to treat my writing with authority, but I’m so glad when people like what I write. It’s the best mental exercise I’ve ever done.

If any of this sounds like fun to you I highly recommend giving it a shot, and please let me know when you do, I’d love to read it.

Writing Every Day

I’ve been writing (and publishing) every weekday on my website for almost two months now and it feels incredible. It’s also been a lot easier than I expected. Here are the guidelines I run with:

  1. Post one thing almost every weekday.
  2. Write it when you get up in the morning, before you start work (I work from home, so that helps).
  3. Publish it when people are awake.
  4. It doesn’t matter how long or well-researched it is, really (but try not to write junk).
  5. If I’m sick or on vacation or just really can’t post, don’t sweat it.
  6. Do this until I don’t want to do it anymore.

That’s basically it. I’ve been unusually consistent (for me) at this, in part because I treat those as guidelines, not hard and fast rules. Normally when I set a goal for myself it’s way too ambitious, I feel overwhelmed, and I bail on it. The usual me would have said at the start, “I’m going to publicly commit to publishing one post per day, every day, for the next year,” and then I would have failed after 2 weeks.

But with this project, I’m trying to be as lax as possible. I wanted to write every day because I had a backlog of ideas to write about and because it was a good motivator to get out of bed a little earlier every day. I have no real goal in mind of writing for a year or anything like that; I just want to do it until I don’t want to do it anymore. That feels so much easier and less of a burden than if I’d set some big lofty goal for myself.

I wouldn’t consider any of my writing truly amazing, but that isn’t really the point. The point is for me to think out loud, get the thoughts out of my head, and have fun in the process. I was worried I’d quickly run out of post ideas, but my idea list is twice as long today as it was when I started (and that’s not counting everything I’ve written about in the meantime), so there’s no real end in sight (at least until I get to a point where I don’t want to write any of the ideas in my list).

Writing every day has made it a lot easier for me to “just write” and I think it’s made me a better writer, but I absolutely still struggle from time to time, too. Sometimes I can just crack my knuckles (ew) and crank something out and it’s awesome. But other times I’ve struggled, deleted attempt after attempt, and eventually switched topics for the day.

It’d be easy for me to say, “So, I’d failed at my projects goal and decided to do this writing-every-day goal instead, aren’t I smart?” but in reality it only looks like that in hindsight. The two were mostly unrelated. It just so happens that writing every day has helped me get into a better habit of practice and improvement, but it wasn’t done as an alternative to my failed goal.


(Huge credit also to my friend Soroush Khanlou, who wrote a post per week in 2015; he is a major inspiration. Mine are mostly furiously written and then published, but his are thoughtful, well-researched, and edited.)

On the Setting and Failing of Goals

As I said in yesterday’s post, I think it’s better to be internally, rather than externally, motivated while trying to make great work. It’s better, I think, not to worry about what others are doing and instead focus on what I’m doing as a motivator for my own stuff.

And yet, I can’t help but keep coming back to this Bret Victor Showreel of his work from 2011 to 2012. In just two short years, Bret created (or at least published) a prolific amount of groundbreaking work, month after month, sometimes week after week.

I also keep thinking about this (probably apocryphal) story about making pots:

The first half of the class was to be graded based on the number of pots they could create throughout the semester. The more pots they made, the higher their final grades would be. […]

In contrast, the second half of the class was told that their grades depended on the quality of a single pot; it needed to be their best possible work. […]

At the end of the semester, [outside] artists were […] commissioned to critique the quality of the students’ work and overwhelmingly declared that the craftsmanship of the pots from the first half of the class was far superior to those of the second half.

The lesson I took from all of this was that if I want to make really great stuff, I have to be prolific: I have to make a lot of stuff, iterate on it, learn from it, improve it, and finish it.

So I set a goal for myself near the end of 2015: I was going to make and publish one project per month. These projects were to be mostly research prototypes of neat interfaces I’d been thinking up; I’d research them, prototype them, iterate, then write and publish a little essay at the end of each month.

It’s nearly April and you may have noticed: I have not at all succeeded at this goal. It turns out, this goal was pretty hard for me for a few reasons:

  1. Research, prototyping, iterating, and writing take a lot of time.
  2. I have a fulltime job.
  3. I enjoy spending my free time with my wife, friends, and family.
  4. I can’t seem to stay focused on things, or at the very least, I’m easily distracted.
  5. Finishing and shipping things, even prototype demos, is a challenge for me.

I’ve released one well-researched essay project pondering Xcode for iPad, but other than that I haven’t been too successful at my goal of making a ton of projects. I have, however, been writing a lot. But more on that tomorrow.

Motivations of Popularity

Yesterday I wrote a bit about popularity and how I deal with (the lack of) it. Today I want to dive a little deeper into why I even care about it. Despite my writing about it this week, I don’t normally spend a whole lot of time consciously thinking about popularity or being liked or well known or respected. But it obviously matters to my brain at some level.

At the core, I think it’s part of being a human: we’re innately social beings and generally speaking, that’s a good thing. It feels good to our brains to be liked, to be a part of the group, to communicate with our friends, and, I suspect, our enemies, too.

Today’s online “social networks” definitely exploit this, though. We’ve had this innate social ability for hundreds of thousands of years, and suddenly things like Facebook show up and amplify our social tendencies to an extreme degree, and that makes us behave strangely.

What used to be a joke told to a physically present group of friends is now shared with hundreds of people on Twitter. Where I might expect a few in-person chuckles over the span of several seconds before, on Twitter I feverishly refresh to see if anyone has “hearted” or retweeted my quip. Did anyone like it? Does anyone think I’m funny?

Maybe I’m more socially obsessed than I’d realized. But I feel like today’s online social networks severely subvert what it means for humans to be social, in ways we haven’t adapted to yet.

(See also danah boyd:

i started wondering if social media is dangerous. Here’s what i’m thinking.

If gossip is too delicious to turn your back on and Flickr, Bloglines, Xanga, Facebook, etc. provide you with an infinite stream of gossip, you’ll tune in. Yet, the reason that gossip is in your genes is because it’s the human equivalent to grooming. By sharing and receiving gossip, you build a social bond between another human. Yet, what happens when the computer is providing you that gossip asynchronously? I doubt i’m building a meaningful relationship with you when i read your MySpace CuteKitten78. You don’t even know that i’m watching your life. Are you really going to be there when i need you?

)

Kottke recently linked to this video about creativity and popularity that I really enjoyed:

Adam Westbrook talks about Vincent van Gogh and the benefit of doing creative work without the audience in mind.

It’s a wonderful video discussing van Gogh’s prolific output, even when nobody was buying his work. Westbrook argues van Gogh wasn’t motivated by onlookers or social success, but was instead motivated by autotelic goals:

Mihaly Csikszentmihalyi describes people who are internally driven, and as such may exhibit a sense of purpose and curiosity, as autotelic. This determination is an exclusive difference from being externally driven, where things such as comfort, money, power, or fame are the motivating force.

The video doesn’t really address today’s social landscape. Yes, van Gogh theoretically could have had a physically close social group (or a distant social group, as with his brother), but he couldn’t have had a social group with thousands of people like we have today. He wouldn’t have seen likes and favs and retweets whirl by him every day, and he wouldn’t have felt the same social pressures we have today, either.

I think internal motivation is ideal, and it’s something I strive for myself (make awesome shit that I’m proud of, and don’t care so much what others think), but I think it’s unfair to feel bad about caring what others think, too. I also think it’s important we examine why we feel so socially overwhelmed online these days (or at least, why I feel that way; I don’t wanna drag anyone else in with me), and that we demand better from social networks like Facebook and Twitter (see, for example, the work of Joe Edelman).

Like So Popular

Popularity has a much bigger influence on my work than I’d like to admit. I try really hard to not let it bother me, but the truth is if something of mine becomes popular, it feels great, and if it doesn’t, then it feels crappy. The worst part is, it often feels random to me what’s going to become popular: I’ll spend weeks perfecting something I really care about and it goes nowhere; other times I’ll crank something out in 20 minutes and it becomes really popular.

This is frustrating. And it’s a big problem, not in the sense of how much it impacts me, but in the sense that there are a large number of interrelated parts to it. Like: why does popularity matter to me? What’s the point of my work? What are my real goals? What can I do about it?

I’d like to explore those in future posts, but for now I want to look at how I’ve learned to deal with (not) being popular.

(I’ll say right away, I’m proud of my work and I think I’m a good developer / writer, but I don’t think I’m amazing. I don’t think my work is always fantastic, or “deserves” to set the world on fire, but that it’s usually solid enough.)

The simplest way to make popular stuff seems to be to make it easy to catch on.

Give your stuff a click-baity title.

Make it catchy, give it a hook, etc.

Write about something very controversial.

People love lists and they love one-sentence paragraphs, too.

That seems to be one way to make popular stuff (though I’m not entirely sure because I’ve never tried it myself), but that’s always felt kind of scammy to me. And it may very well be a way to make popular stuff, but I’m not sure it’s a way to make good stuff.

The other way to make popular stuff, we’re told, is to make really great stuff and then it’ll “just catch on.” That’s how the myth goes, at least, but I don’t really buy it. I’ve known lots of people who write or make truly awesome stuff but rarely does it catch on. Meanwhile, I’ve seen others make very popular but otherwise mundane stuff, too. Yes, it’s good to make great stuff, but it makes a huge difference if you have friends in high places, too.

But for me there’s a third option, and that’s to try my best to let go of the popularity urge. It used to really bother me, I’d obsess over it, I’d get jealous of other people whose work was popular but I felt they hadn’t earned it, and it was all really immature and never made me feel great. But letting go helped.

When I started obsessing a little less over the popularity of my work, I realized something really great: the few people who did really like my stuff were often people I super respected. Some were family members, some were close friends, some were even popular developers and writers whose work I tremendously admired. This core group of people who really like my work makes a world of difference, and makes “being popular” a lot less important.

I realize that’s kind of a cop-out (like when a company declines to hire you and you say, “Oh that’s good, I’ve decided I actually didn’t want to work there anyway”), but that’s what works for me. Popularity affects me, of course it does, but I try not to let it get to me, and instead I focus on the people I respect who happen to really like what I do.

Stating the Obvious

Here’s a thing that’s happened to me a few times in life: someone will say something novel in a talk, a book, or everyday conversation, and I’ll say to myself, “Well, of course!” The thing being said feels obvious now that I’ve heard it, but in reality, I probably wouldn’t have made the right connections on my own.

The first example that popped into my head is from a Bret Victor talk, where he says:

It’s kind of obviously debilitating to sit at a desk all day, right?

I heard this and thought, well yes, of course. It made complete sense once I’d heard it, but I don’t think I’d ever explicitly thought about it previously.

That’s the sort of stuff I often write about, too. I’m not writing groundbreaking stuff, but I am trying to make some connections I (and you) might not have otherwise made. It might sound obvious when you read it, but my hope is by writing it down, by giving it a name, whatever obvious thing I write about becomes just a little bit more tangible.

I haven’t read enough on the topic, but my guess is by giving something a name and making it more tangible, it’s easier to do things with that something. It’s easier to incorporate that named idea into what you know, what you think about. It’s easier to talk about that idea. It’s easier to apply, compare, and contrast that idea with other ideas. And not to mention, on the web, you can literally link ideas together (until all the links rot and you’re cursing the underlying architecture of the web again…).

So maybe you’ll read this and think “well duh” and I’m fine with that. But it wasn’t obvious to me.

Complaining About Recruiters

If you’re a professional software developer, it’s a good bet you’re pretty regularly emailed by recruiters trying to get you to join other software development companies. Developers are in such high demand that there are whole teams of people whose job it is to try and hire us. As developers, we’re incredibly lucky.

And yet, the most common reaction I hear among developers on the public complaint service Twitter is that of dread. “Ugh. Another recruiter email, god.” “These recruiters are so lame, trying to get me to join this dumb startup.” “Way to send me an obviously generic letter.”

I’ve got to say, straight up, fuck that attitude. Our jobs are in such high demand that we’re regularly sought after by people hired to seek us out, and the general reaction is “ew stop”? I’m not sure developers realize how rare our situation is, how many non-developers search for months and months trying to find a job, when nobody’s hiring, and yet all we have to do is check our inbox once a week. Compared to nearly everyone else, we sound like spoiled brats.

Now I’m not saying there are never good reasons to complain about recruiters. Sometimes they’ll get your info wrong in careless ways (like addressing you by the wrong name, even though your email is your name), and that can be genuinely offensive. Sometimes recruiters are too aggressive. Perhaps you have legitimate concerns about how the company the recruiter represents treats women, and you want to write a thoughtful, public article about why that is. But I think these sorts of complaints are vastly different from “ugh, why do I get so many recruiter emails??”

We’re incredibly privileged as software developers, and we’re lucky to be so sought after. But when we complain about too many recruiters, we sound like snots to pretty much everyone else. Maybe we should reflect more on our lucky position, because it won’t last forever.

Your Feelings are Wrong

Has this little exchange ever happened to you?

You: I don’t like [this software]

Them: Why not?

You: I just don’t. I find it hard to use.

Them: No, it’s very easy to use, you just need to learn how. Here are the docs. How can you still not like it?

The basic gist is “I don’t like this thing” and the response is “Justify why you don’t like this thing,” and it happens all the time. I run into it in all aspects of my life, but I find it’s especially egregious amongst programmers.

There’s a mismatch here: I’m trying to share my feelings about something, and programmers are trying to get me to prove my feelings. I’ll admit, I’m often not so good at rationalizing my feelings but that’s kind of the point: I’m not trying to rationalize anything, I’m just saying how I feel about something.

But feelings often seem anathema to programmers, where rationality reigns supreme. If you can’t justify or prove a feeling, then it’s often treated as invalid.

For me, this is a morale destroyer. I’ve found trying to have a conversation with programmers so frequently devolves into an argument (in the “having a debate” sense of the term; a non-heated, civil discussion, but an argument nonetheless), and while arguments are useful for finding logical conclusions, they’re terrible for friendly conversation or chitchat.

I’m reminded of this quote from an excellent interview with Matt Webb:

Do you remember those kids in school who were really good in debate class? When they’d start treating every single conversation or interaction that way you wanted to remind them that not everything is debate. Sometimes you’re just bantering in the park.

And I get it. I’m a programmer too and I’ve done it too. And I’ve done it outside of my programming life, and it sucks. A non-programmer friend of mine bought a new camera and said “Wow it’s so great, it’s got a 10x zoom lens, a 4 megabyte storage—” I interrupted my friend and said “I think you mean 4 gigabyte storage, but go on.” The conversation abruptly ended; my friend’s demeanour went from joy to disdain almost immediately. And I felt like an asshole because I was an asshole.

Programming forces us to be so technically correct all the time we often end up forgetting how to be humans. But not everything has to be justified, not everything has to be correct all the time. There’s more to life than just being right.


See also Denial and Error

Universal Norms

Recently I was reading this post by the Facebook design team discussing the evolution of their new “Reactions” feature. It’s a neat article about their design process and the motivations behind the feature, but one thing in particular stuck out to me:

The whole point of expanding reactions is to have a universally understood vocabulary with which anyone can better and more richly express themselves. […]

Reactions should be universally understood. Reactions should be understood globally, so more people can connect and communicate together.

At the core this is an OK goal: “universally” understood ideally means any other person in the world should share the same meaning of a Reaction with you. But there are a few problems with this:

As Patrick Dubroy pointed out on Twitter, “7 icons is not about ‘a universally understood [vocabulary with] which anyone can better & more richly express themselves.’” They may be fine icons, but they’re a far too limited palette to express very much.

More troubling to me is that, at best, aiming for a universal norm homogenizes cultures into a lowest-common-denominator situation. We have so many cultures around the world, such a rich diversity of ideas, beliefs, and ways of living. I’m personally quite unfamiliar with most of the world’s cultures, but the solution to that isn’t to genericize them or translate them into my culture; the solution is to help me understand them as they are.

If all I ever see represented from other cultures are the things they have in common with mine, I’m never going to learn about what makes those other cultures different or special. I’m never going to empathize with them or the people who participate in them. All I’m going to do is reinforce what I already know, and the cultures I don’t know better will remain “other” to me.

Facebook’s mission is “to give people the power to share and make the world more open and connected” but distilling the diversity of all people and cultures down to seven genericized icons is no way to do this. If you want to make the world more open and connected, then you have to give people the means for empathy and understanding.

Moving Movies in Virtual Reality

About a year ago I saw this video about the beginnings of new kinds of filmmaking in virtual reality. This is something that had never occurred to me before but seemed like a really cool perspective: instead of watching a movie on a screen, you are immersed in the movie.

There are all kinds of exciting prospects brought by movies in VR: What does it mean to be a “viewer” of a VR movie? Is the viewer essentially the camera? What role does a director have in shaping that viewer-as-camera experience? Does the viewer have control of their point of view during the movie?

But there’s one thing I never hear discussed and that’s the role of the body in VR movies.

Watching a movie today requires sitting for two or more hours, especially if multiple people are watching. As I wrote about previously, moving is a big deal, and being stationary for hours on end is not healthy. But the medium of film basically demands it. The movie plays start to finish with no letting up; unlike most theatrical plays, there’s no intermission for you to get up and stretch your legs (unless you’re watching 2001: A Space Odyssey). And the director-controlled camera means that as a viewer, you have only a stationary perspective on the film, one you can’t move around.

The easy thing, the default thing even, to do in virtual reality moviemaking is probably to keep things largely how they are today. Viewers are seated for the duration of the film, but maybe they can control it with handheld remotes. A bunch of people plopped down on couches with screens glued to their faces for hours at a time.

But there’s no reason why this has to be the case. VR is a new medium which sheds many of the constraints of the 2D-on-screen, body-destroying movies we’re used to. VR movies can incorporate the body, can require the body, and have the viewer be an active, moving participant in the movie, instead of a passive onlooker, seated for hours.

(PS: look at where you watch movies in your home. I bet that space would be a little dangerous if you suddenly strapped goggles on and ran around in it for two hours, right? I think “VR” should be a transitional technology, kind of like how we started with black and white photography: it’s a great start, but we should strive for better. Likewise, VR is a blinder, and not just for your eyes: your other senses, like touch and smell, don’t really get to play at all; you end up waving your arms in the air with no sense of physicality. We should treat VR like the obviously transitional technology that it is, and demand dynamic environments that incorporate more of the body and its senses.)

Sitting Standing Moving

There’s a great series of posts by Rishabh R Dassani about Sitting, Moving, and Standing. Without giving too much away, the gist is:

  • Sitting for long periods of time is detrimental to your health
  • You need to involve movement into your day
  • Standing desks aren’t really a great alternative; they have their own serious health issues

I’m someone who works from home at a sitting desk, so these posts have really hit home for me. In my default state, I sit for most of my day, for hours at a time. Working from home means I can even cook my meals at home, so I don’t really need to go far at all during the day (I try to get out of the apartment for lunch most days though, unless the weather is bad).

Though we should all be wary of folk medicine, what Rishabh describes makes sense: sitting or standing all day are both deleterious to your health; the key is to move around throughout the day.

I’ve been following his suggestions for a few weeks now, and anecdotally I feel better (I’m probably not actually healthier, but it feels good to be up and moving again). Every half hour I get up and walk around my apartment for about 5 minutes. Once or twice a day I try to go outside for a longer walk (I’m not super diligent at this yet).

This is good for now, but it’s not a long-term solution. My line of work demands that I sit down, staring at a rectangle. When I heard Bret Victor’s “Humane Representation of Thought,” this part really stuck out to me (54:00):

It’s kind of obviously debilitating to sit at a desk all day, right? And we’ve invented this very peculiar concept of artificial exercise to keep our bodies from atrophying, and the solution to that is not Fitbits, it’s inventing a new form of knowledge work which actually incorporates the body.

I can’t change my line of work overnight. I can’t start making iOS apps by walking around in some kind of computational environment for knowledge work (though I imagine, given such a technology, people would still try to make apps for old media!), but I can think about it, and I can do my best given what I have. What about you? How would you do your job if you weren’t stuck sitting at a desk all day?

Human Nature

There’s a part in Debbie Sterling’s fantastic talk “Think Audacious” about girls, boys, and engineering. She recounts:

So I started becoming obsessed with why there are so few girls and women in engineering and what I could do to change that. I started talking about it with everyone I could, and asking them, how did you get into engineering, and why are there so few women and girls? And I started to hear the same response over and over again: you can’t fight nature.

Seriously, smart, educated people would tell me, you know, there are just biological differences between men and women.

And they told me, you know, men just are naturally inclined toward building and engineering, they’re just good at it, you know? They’ve got spatial skills, they’re born to be engineers. Well, this really pissed me off. It did, I mean I got into engineering, does that make me a freak of nature or something?

Debbie did her research and discovered, of course, this is total bullshit. This is not human nature, but instead human culture. (Debbie, by the way, has since created and launched a successful engineering toy company for girls called GoldieBlox.)

I hear the term “human nature” thrown around a lot as a defence, usually for something unjust. Racism, sexism, homophobia, they’re all “human nature” according to some people. And sometimes, it’s actually true that humans have innate, inborn tendencies right in our DNA which are unsavoury at best.

But none of those should really matter. Even if it’s in our DNA to be racist, or to fear others unlike us, or to think girls can’t be engineers, even if all of those things were true, none of that should matter, because culture helps us break through the limits of our DNA. We do have a human nature, but more often than not that term is used out of fear of change.

It’s culture, not our DNA, which acts as the driving force for what makes us truly “human” today. Anatomically modern humans have been around for 200 000 years, yet we’re vastly different from those ancestors because of our culture. We’ve transformed from hunter-gatherers into city builders, into flying, space-exploring, technological, relatively peaceful beings.

There’s both good and bad in human nature, but relying on either in the face of change is a losing strategy. It’s culture, not just our DNA, that’s going to make us better people and give us a better world.

Advice to a High School Student about a Career in Programming

A few weeks ago I was emailed by a student in grade 9 about a career in programming. She had some great questions and I thought I’d share my answers, in case you know any students considering a career in programming or software development.


What subjects / courses were the most beneficial to you in preparing to become a computer programmer?

Math is kind of the most important subject for preparing to be a programmer, but maybe not for reasons you might expect.

Most programmers don’t use “math” very often in their everyday programming lives, at least not directly. So, you don’t really do calculus, or trigonometry, or really anything like that in most day-to-day programming.

But you do learn a few key things from math (which, by the way, were never clear to me in school! It was only much later in life, looking back, that I realized “Oh hey! That’s what’s so great about math!”)

  1. Math is abstract

    Numbers don’t “really” exist in the world: they’re something humans invented, as a model, for understanding quantities of things. So we take a concrete thing, like a bunch of bananas on the ground, and we create an abstract concept of “6” bananas so we can think about them without necessarily needing to have that many bananas handy. The same goes for trig and calculus: planets don’t actually do Newtonian calculus as they orbit the sun, but we use calculus as a way to represent concrete things in an abstract form.

    This is pretty much what programming is! Programming is all about abstractions.

    In one sense this means one piece of code can output different things depending on what you input to it (like, Google will show you different search results for different searches, but it’s the same code that does it).

    In another sense, this means you can think about things at different “levels” of abstraction, depending on what’s useful. For example, if you make an app that saves a jpeg to your computer, your code doesn’t have to think about how that “works.” Your code for saving the file is the same whether you’re saving to a normal harddrive, a USB thumbdrive, or Dropbox or whatever. That stuff is abstracted away from you as a programmer, which is very useful! (There’s a little code sketch of this after the list.)

  2. Math uses its own notation

    Math kind of has its own “language” (x, y, square root symbols, matrices, integration symbols, etc). Programming kind of has its own language too: “code.” Code uses its own symbols (most of which are different from math), but just like it’s helpful to “think” in the language of math, it’s helpful to “think” in the language of programming (also like math, this takes practice!).

  3. Math is probably the closest thing school gives you to practice at breaking problems down (but it’s not super great at this)

    Kind of like my point about abstraction above, math is useful as a way for breaking problems down into smaller parts, which is something you do all the time in programming. In math, you might do this when you’re asked to “show your work” but in programming, you use it more as a “divide and conquer” strategy.

  4. Math teaches some logic (but not a lot)

    Logic can be pretty important in programming (although sometimes overrated, some programmers are “too logical” and forget how to be humans). But logic helps you reason about how programs should work. Logic helps you say “I know it’s impossible for the program to produce incorrect results” (but, haha, you will often find human logic is faulty. A lot)
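
To make the abstraction idea from point 1 a little more concrete, here’s a tiny sketch in Swift (one of the languages I mentioned using at work). The Storage protocol and everything in it are made up just for this example; the point is only that the saving code doesn’t care where the bytes actually end up:

    import Foundation

    // A made-up abstraction: anything that can store bytes under a name.
    protocol Storage {
        func write(_ data: Data, named name: String)
    }

    struct HardDrive: Storage {
        func write(_ data: Data, named name: String) {
            print("writing \(data.count) bytes to disk as \(name)")
        }
    }

    struct Dropbox: Storage {
        func write(_ data: Data, named name: String) {
            print("uploading \(data.count) bytes to Dropbox as \(name)")
        }
    }

    // This function never changes, no matter where the photo ends up:
    // the concrete details are abstracted away behind the protocol.
    func save(photo: Data, to storage: Storage) {
        storage.write(photo, named: "photo.jpg")
    }

    let photo = Data(repeating: 0, count: 1024)
    save(photo: photo, to: HardDrive()) // writing 1024 bytes to disk as photo.jpg
    save(photo: photo, to: Dropbox())   // uploading 1024 bytes to Dropbox as photo.jpg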

And depending on what sorts of things you program (the field is very broad!), you may actually use some forms of math.

If you do any kind of programming that involves putting something on a screen (like making apps, making webpages, etc), you’ll probably use some basic geometry from time to time (you can think of a computer screen as a grid: X pixels wide by Y pixels tall, so being able to think of a screen like a Cartesian plane is useful).
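
As a tiny made-up example of that kind of screen geometry, here’s how you might compute where a box goes so it’s centered on screen, in Swift:

    // Treat the screen as a grid of pixels, like a Cartesian plane
    // with the origin at the top-left corner.
    struct Size { var width: Double; var height: Double }
    struct Point { var x: Double; var y: Double }

    // Basic geometry: where does a box go so it's centered on the screen?
    func centered(box: Size, in screen: Size) -> Point {
        Point(x: (screen.width - box.width) / 2,
              y: (screen.height - box.height) / 2)
    }

    let origin = centered(box: Size(width: 200, height: 100),
                          in: Size(width: 1024, height: 768))
    // origin is (412.0, 334.0): the box's top-left corner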

If you do 3D graphics (video games, computer animation, etc) you’ll probably use a bit of calculus, and a lot of linear algebra (which I’m not sure they teach in high school; I only learned it in university, and it was like the only math class I actually liked!). But anyway, pretty much all the 3D graphics you ever see are the result of doing stuff with matrices and vectors.
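
To give a small taste of that, here’s a toy Swift sketch (real 3D work uses 4x4 matrices, but a 2x2 rotation shows the idea, and the Vec2 type is invented just for this example):

    import Foundation

    // The heart of computer graphics: multiplying vectors by matrices.
    struct Vec2 { var x: Double; var y: Double }

    func rotate(_ v: Vec2, by angle: Double) -> Vec2 {
        // This is a 2x2 rotation matrix applied to the vector:
        // | cos -sin |   | x |
        // | sin  cos | * | y |
        Vec2(x: cos(angle) * v.x - sin(angle) * v.y,
             y: sin(angle) * v.x + cos(angle) * v.y)
    }

    let spun = rotate(Vec2(x: 1, y: 0), by: .pi / 2)
    // spun is roughly (0, 1): the point swung a quarter turn counterclockwise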

If you want to do things like artificial intelligence or “machine learning” (both of which are big topics for search / social network companies (e.g., advertising companies) these days), algebra and statistics are pretty useful too.

BUT! Programming really isn’t all math. There’s a bit of it, but it’s mostly foundational (which is lucky, because I’ve always been pretty bad at math; also, it turns out computers are pretty good at math!).

Other subjects are quite useful too. A scientific mindset is useful because of its emphasis on investigation and scepticism. Graphic design / art is very useful if you want to make programs people use directly (apps / websites, primarily), because things shown on a flat 2D surface like a screen need graphic design just as much as posters and magazine layouts do. Oh, and English is great because, well, you’ll still need to communicate with lots of humans when you work with them! Being able to read and write well is always useful, especially if you’re good at arguing for things with essays.

Around how many languages do you have to learn? Which ones are the most important?

(I’m assuming you’re talking about programming languages here, and not human spoken languages. If you’re talking about human spoken languages, English is pretty much the lingua franca of programming, but I suspect knowing Chinese and/or Hindi might be good ideas for the future too!)

The neat thing about programming languages is there are many of them, and which one you use depends on what kinds of programs you need to make (or, how you want to make it).

For example, if you want to make iPhone apps, you pretty much need to know either “Objective-C” or “Swift” (these are the languages I use every day at my job). If you want to make websites, you pretty much need to know Javascript (you may also need to know others for web programming, but you definitely need to know Javascript at a minimum).

The good news is, once you learn one programming language, most others are pretty easy to learn, because at a fundamental level, all programming languages work pretty similarly. And while you’ll probably become a pro with a few languages, it’s never a bad idea to eventually become familiar with a bunch of them (not something you need to do when you’re starting out, of course, but good for down the road). Although all programming languages are essentially equivalent, many let you program in a different style, which often means you-the-programmer are thinking in a different mindset.

It’s kind of like with human spoken languages: each language has its own metaphors, its own way of expressing concepts. In some languages it’s really easy to express some ideas but hard to express others.
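
You can even feel that difference in mindset within a single language. Here’s the same little made-up task in Swift, written two ways:

    let numbers = [1, 2, 3, 4, 5]

    // Imperative style: tell the computer each step, one after another.
    var total = 0
    for n in numbers {
        total += n * n
    }

    // Functional style: describe the transformation as a pipeline.
    let total2 = numbers.map { $0 * $0 }.reduce(0, +)

    print(total, total2) // 55 55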

What degree is needed to be successful?

The neat thing about programming is you don’t need a degree to be a programmer. The vast majority of programming jobs don’t require degrees, most are interested in experience (but there are loads of jobs for beginners too). If you want to work at Google, maybe they’ll require a degree, but the vast majority of jobs don’t.

That doesn’t mean you shouldn’t get a degree, of course! There are lots of benefits to getting one (or many!) but I just wanted to point out it’s an option.

The most common degrees for programmers are Computer Science (this is what I have) and Software Engineering (if you do this degree in Canada, where I’m from, you’re legally considered an Engineer, you get a ring and everything). They’re kind of similar, but I’ll explain each a bit:

Computer Science: This varies depending on which school you go to, but at a fundamental level, Computer Science (CS for short) teaches you a lot of theoretical stuff about how computers work (mostly software, but some hardware). You will learn programming, but CS won’t necessarily teach you how to be a good programmer (that is, CS usually isn’t about on-the-job skills; you mostly learn those making programs on your own, or actually at a job!).

CS will teach you things like “how does logic work” (you’ll take classes like Discrete Mathematics where you’ll learn how to do logic and proofs), you might take courses on computer graphics (like how to program 3D things), you might take courses on computer networks (like, how does information get from one computer / device to another over the internet, across the whole planet?), algorithms and data structures (like, I have a lot of information, how do I put it in a format the computer can work with quickly? this is the sort of thing Google is made of).

CS will also leave you with a feeling of “oh my god how does any of this stuff actually work and not break all the time???” Which is both awesome and terrifying. Like, how do wifi signals work, through the air, invisibly, and so fast??

Software Engineering: I know less about this because I did a CS degree instead. But, SE is more focused on the “engineering” aspects of making software: things like reliability (how do I ensure my program won’t break?), doing specifications for clients (the government needs us to build them software, we need to figure out precisely what they need), that sort of thing.

Both CS and SE involve programming, and both do overlap a little bit.

I’d also like to suggest, though, thinking about different degrees (and yet, still being a programmer!). So as I mentioned, you don’t strictly need a degree in CS or SE to become a professional programmer. You will learn some programming in those degrees, but you won’t end up being a great programmer just because of those degrees. Like most things, being good at programming is usually a result of just lots of practice / experience.

But so anyway, I’d suggest at least exploring the possibility of other degrees too, because they’ll give you different perspectives on the world, which I think are desperately needed among programmers (myself included!). For example, most programming languages today are considered to be “object oriented” languages (which basically means every part of your code is a self-contained abstract object that represents a thing: you might have an object for a Person or a Dog or a Rocket or a Feeling, basically anything!).

Anyway, the concept of making “object oriented” programming languages came not from someone who studied computer science, but from someone who’d studied biology (Alan Kay). Alan thought about making programs act kind of like cells in our bodies: self-contained and independent, but communicating amongst themselves.
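
Here’s a toy Swift sketch of that “objects as cells” idea (the Dog and Person classes are invented just for this example): each object keeps to itself, and they communicate by sending messages, which in Swift means calling methods:

    // Each object is self-contained, like a cell: other code doesn't poke
    // at its insides, it just sends the object a message.
    class Dog {
        let name: String
        init(name: String) { self.name = name }

        func speak() {
            print("\(name) says: woof!")
        }
    }

    class Person {
        func greet(_ dog: Dog) {
            dog.speak() // one object sending a message to another
        }
    }

    Person().greet(Dog(name: "Rover")) // Rover says: woof!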

My point here isn’t “do biology instead of CS” but consider taking something else because it will give you different perspectives while programming.

So, biology is a neat example (as are most of the sciences). The humanities are another idea, because, well, humans are important! Also having a degree that forces you to read a lot of books means you’re going to be exposed to a lot of ideas and perspectives.

What is your favorite aspect of programming?

My favourite aspect of programming is its potential to have vast and monumental positive impact on the world.

From a certain perspective, programming has already done this! Computers (laptops / phones / etc) are everywhere, and they’re connected through pervasive networking. And that’s super, incredibly amazing! As a consequence of this, information is much, much more accessible to all those connected (Wikipedia, Google, even looking up things on Youtube! And, I’m a little biased because I work for them, but Khan Academy is pretty great too!). Communication is more or less effortless: within seconds you can be texting or calling or emailing or video calling (!) anyone else on the planet (!!). None of this would be possible without programming, and that’s really, really cool.

But from another perspective, I think there’s a lot more positive impact programming can have on the world. You can look at programming as a “way to make things” (like apps, websites, etc) and it definitely is a way to do that. But you can also look at programming as a way of thinking, which is something we don’t really do very often with it.

For example, when we learn to read and write human language, we do so for a few reasons.

One reason is so we can record ideas: I can put my idea down on paper (or an email or whatever) and then later, someone (maybe even future-Me) can read it and understand my idea (and of course, the invention of writing revolutionized humans when that happened! yay!). This idea-recording is good for all sorts of things, like remembering stuff (like, I can write something down and my ideas will live on after I die, that’s cool!), communicating (like this email!), or expressing ideas (like novels or essays which make arguments).

But another reason we learn to read and write (and I think one we don’t so often talk about) is because when we write, it changes the kinds of ideas that we have. Like, the very act of writing something down changes the way we think, the way we express ideas. Writing an essay isn’t just recording words you would have otherwise said aloud; writing an essay uses language in a more structured way than if you were just speaking the words. And writing an essay lets you make an argument in a bigger way too: probably nobody will listen to you speak aloud an argument for an hour (and it’d probably be very hard to make a coherent argument that way, anyway!), but they can sit and read through a written essay.

Likewise, programming can be (though these days, it often isn’t) used to record ideas, too. And similarly, these ideas are different kinds of ideas than if you were just thinking them in your head, or saying them out loud, or even writing them down.

While writing lets you put down an idea word after word, one thing leading to the next, in a very linear way, programming can express ideas that aren’t just one thing after another, but are complex. And programming lets you simulate ideas too, so you can play with them and explore them (think about reading about how cities and traffic work in a book, versus playing a game of Sim City, where the city is simulated by a program: it’s not just dead words, but active, moving things you can poke at and play with).
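
As a tiny example of what I mean by simulating an idea, here’s a toy flu-spread program in Swift. It’s completely made up and nothing like real epidemiology, but it’s an idea you can run and poke at:

    // A toy simulation: each "day," every sick person passes the flu
    // to one healthy person, until nobody healthy is left.
    var healthy = 1000
    var sick = 1

    for day in 1...10 {
        let newlySick = min(sick, healthy) // each sick person infects one
        healthy -= newlySick
        sick += newlySick
        print("day \(day): \(sick) sick, \(healthy) healthy")
    }
    // The sick count doubles every day until nearly everyone has it.
    // Tweaking the numbers and re-running teaches you something
    // a paragraph of prose never could.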

This is the kind of thing I love about programming!

I imagine a world where everyone knows how to program, not so they can become professional programmers, but so they can learn how to think about complex things (like, how does the ecosystem work? how do cities work? how do viruses like HIV or the flu spread?). Just like we teach everyone to write, but we don’t necessarily want everyone to become a novelist or journalist, I want everyone to learn programming too.

I have been looking into the topic of women and minorities in the field and discovered there haven’t been as many in recent years. What do you think has contributed to that?

I think there are a few reasons, but the biggest one is that there is unfortunately a lot of sexism (and other forms of exclusion, of people of colour and others) in the programming industry. Luckily there are a lot of amazing people working very hard to correct this in our industry. It absolutely will not happen overnight (and sadly, sexism and other forms of oppression aren’t just in programming), but there are good efforts being made by really good people.

Another reason, related of course, is that programming is often portrayed as “for boys / men” and not “for girls / women.” (Which I will come right out and say is a load of crap! There is absolutely nothing about a person’s gender / ethnicity / ableness / etc which would make them less good at programming.) Again, there are awesome people working on this. There’s the GoldieBlox line of toys, which are designed as engineering toys for girls (this isn’t strictly related to programming, but it doesn’t hurt!). There’s also Hopscotch (a company I used to work for), founded by two women who wanted to make programming for girls and boys.

So yeah, unfortunately the industry has a sexism problem that lots of people (myself included) are trying to help fix, and that’s probably why there aren’t as many women / POC / etc.


So there you have it, I hope my mountain of text is useful to anyone considering becoming a programmer. Software development has its challenges, both technical and social, but it’s still a wonderful field, and it’s improving every day.