Don’t Be Mean

A few weeks ago I saw something that made me sad: Craig Hockenberry, a Cocoa developer I once looked up to, tweeted this mean thing:

My new approach to dealing with uninvited contact:

Put yourself in Bennett’s shoes for a moment. How do you think he would feel getting an email like this? When I was starting as an iOS developer, I looked up to people like Craig. He was well known in the community, had lots of great experience under his belt, and seemed like someone you could learn a lot from. If I had sent him an unsolicited email asking about Cocoa dev, and he’d replied with something like this (and then tweeted it!), that would have absolutely devastated me.

I don’t know all the context behind this tweet. Maybe this Bennett character is a real asshole, but that’s not really revealed in Craig’s tweet. What’s revealed here is Craig proudly sharing his mean response.

If you get a lot of unsolicited email, I imagine that’s super annoying, but it’s mean to respond like this, and it’s meaner still to publicly shame the poor guy. All Craig needed to do here was not reply.

Worse than being mean, this is sharing the meanness with everyone who follows him. I was very sad to see Dave Verwer link to it at the bottom of iOS Dev Weekly, sharing it with even more people.

And finally…

If you see this meanness shared and celebrated on Twitter or Slack or elsewhere, please stand up against it. Put yourself in the shoes of other people and try to imagine how they might read it. If you were new to iOS dev (or any community where this happens), how would this make you feel? Would you want to be the person laughing at the meanness, or would you want to be the person stopping it?

Speed of Light

Just Because It’s on the Menu

For a long time growing up I had this weird belief that if something was on a menu at a restaurant, it must be good for you. “They” wouldn’t let something be on a menu if it was bad for you. There are rules and laws designed to keep us healthy and safe. Growing up, I’d never really given it a whole lot of thought, but it was a comforting belief and it seemed reasonable.

Of course, it doesn’t hold up to any scrutiny and it’s not true. There’s all kinds of unhealthy garbage on menus. There is nothing really inherent in restaurant menus that forces them to give you choices that won’t eventually kill you. There are definitely some rules about what can and can’t be served, and there are plenty of attempts at limiting unhealthy choices, but by and (often) large, there are no built-in protections for you.

At some level, I think I held this belief about more than just restaurant menus. “Of course I don’t need to wear a seatbelt in cabs, because taxi drivers are professionals.” “Of course this book is going to be accurate, because they let it be published.” “Of course the doctor will do a good job, because they have an advanced degree.” Nevermind that people make mistakes and errors all the time!

The underlying principle, maybe, was that because there’s a way these things could be made safe or healthy or somehow ideal, I assumed of course they must be. What kind of world wouldn’t protect itself by default? This all probably sounds stupid, but that’s the belief I held.

Taping Culture

Isn’t it interesting that we as a culture (at least in the west) used to tape things? In the 1980s and 90s, it was common to use a VCR to record things off TV (or other VHS tapes) or record songs off the radio (or from other cassette tapes). I’m sure not everyone did this, but to my then-child eyes, it seemed like it was pretty prevalent.

What was so interesting about it was we were sort of appropriating media for our own uses. Television dictated “you watch this show when we tell you, or not at all” and taping culture said, “No, I’ll watch it when I please” or “I want to keep this around for reference later.” Radio and the music industry said “You either listen to the music (and ads) all the time, buy our tapes and records, or don’t listen at all” and again our culture had these little tools of defiance where we made audio our own.

The mix tape was a great fallout of this. Not only were we making copies, we were recombining copies as we saw fit! Maybe the perfect playlist for you had jazz and hip hop, but good luck waiting for the music industry to put out a tape like that. Fuck it, make it yourself.

Everything was and is a remix, yes, but without taping culture these remixes were often made and experienced en masse, created and consumed largely via entertainment industries. But now we could remix on our own.

Things have changed today, as they always do. For starters, most video and audio is copy protected (something tells me the industries sorta didn’t like home taping?). And with things like Netflix and Spotify, the need to record something to time shift has diminished. No real need to record something when you can just play it at will from a service, anyway. There’s also Tivo, which seems to fill the same niche as VCRs, albeit with a little more computer involved.

But it seems like the whole cultural idea of “taping” has kind of evaporated. Yes, it’s often technically possible to make copies of things (you can make or download copies of movies, music, etc), but culturally it’s not something we do as often anymore.

The closest things I can think of are apps like Tumblr, which allow you to do a kind of constant drive-by remix of a never-ending flow of “content.” This is similar, I guess, but it feels much less like you’re appropriating the media you want, and instead like you’re just redirecting copies of bits into your own personal ephemeral stream. It’s not that one is necessarily better than the other, just that it’s different.

Also cameras. With cameras in our pockets wherever we go, we now have appropriation devices. We can make crude copies of what we see, visually accurate but otherwise lifeless renditions of the world. I can and do take pictures of pretty much anything that interests me, but I also take pictures of things I want to remember, things I need to do (like travel receipts I need to get reimbursed for). I make screenshots of text conversations I want to hold on to.

The camera + screenshots are a common way we appropriate digital data on our phones, but the OS makers don’t seem to take advantage of this. The camera + screenshot + appropriation culture is brimming with potential, but relatively stunted due to the software available.

Do you think we still live in a taping culture? Has it largely evaporated in favour of large industries telling us when and what we do? Or do we as a culture still make it our own?

“Adam’s Tongue” book review / notes

I recently re-read (and re-loved) Derek Bickerton’s book on language + human evolution, Adam’s Tongue. I previously read the book in 2010, and I remember enjoying it, but feeling like a lot of it was over my head, so I decided to re-read it with fresh eyes in 2016, and wanted to write a little review of it.

On its surface, the book is about how language evolved in humans, and how language was crucial to our evolution as a species, but what I love about this book is it’s about so much more.

One thing the book covers really well is how evolution works. It talks about Darwin and Richard Dawkins (natural selection and selfish genes, respectively), but it also talks about how those viewpoints are often limited. Bickerton really gushes about a relatively new view on evolution, that of “niche construction theory” which explains, essentially, how species are changed by their environment, but crucially, how species also change their environments, too.

Bickerton spends a lot of time not only talking about evolution, but also continuously emphasizing fallacies we hold about evolution. The big one is how we view evolution with homo-centrism: we see evolution only in terms of ourselves, and often put ourselves at the centre of it. When we look at evolution with this fallacy, we’re essentially looking at all animals / life forms in terms of how they compare to us, when in fact, evolution does not care at all about us. There’s really no centre to evolution, Bickerton says.

A specific example of that fallacy is how we often look at Animal Communication Systems (ACSs) as “failed attempts at language,” but really they’re just successful attempts for those animals to communicate. They’re not bad versions of language, they’re good versions of ACSs.

I’m really grateful he’s gone to such lengths to repeatedly point these sorts of things out, because I’ve found it eye-opening when considering what little I know about evolution. And, I think these viewpoints apply to non-evolution topics as well.

Another nice thing the book does is that it doesn’t hide the fact that there are other viewpoints on language evolution. Although he argues his disagreement with these other viewpoints, the author at least acknowledges and explains the other perspectives. He’s not running anybody’s name through the mud, but he does explain their arguments, and crucially why they don’t hold up to the scrutiny of his research + perspective.

In fact, an entire chapter is devoted to dismantling a theory put forward by Noam Chomsky et al about language’s supposed spontaneous evolution (I’m not sure if I’ve parsed the argument well enough to distill it here, but suffice it to say it was a thorough deconstruction). It’s refreshing to read opposing viewpoints, not so they may be shamed or humiliated, but so they can be contrasted and explored from different vantage points.

This book was an eye-opening read about language, evolution, and the history of the human species. It’s about what makes us us, and about how that very us-ness enables us to reflect on us. You should definitely read this book.

How to Read a lot of Books

Often when I suggest a book to a friend, they’ll say “Excellent, looks great! Added to my forever-growing ‘to read’ list of books 😞.” I definitely sympathize with this sentiment: there are just so many books and so little time to read them. As I’m currently working my way through lots of books, I thought I’d offer some unsolicited advice on how to read a lot of books.

The first and most important thing is consistency. Find a rhythm for reading that works for you and stick to it as best you can. Plan to read every day, even if it’s only for ten minutes. Ten minutes of reading every day is a lot more than zero minutes of reading every day.

If you have a commute involving public transit, that’s a great time to fit reading into your day. My commute is pretty short each day, but the time adds up. When I used to work from home I’d set aside cool-down time after work ended but before I started my evening, giving me a kind of reading commute instead.

I consider myself to be a pretty slow reader, so consistency has been the key for me. Slow and steady finishes books.

The second suggestion is to find a good reading environment, the place where you read. I find reading requires a lot of focus, so I try to read in places where I won’t be distracted. That can be almost anywhere for me, but there are things which intrude on my concentration.

Phones and computers are a huge distraction. Every notification or badge or buzz destroys my focus and makes reading much, much harder. So, keeping my phone away (or off) is really helpful here. I tend to read paper books for many reasons, but one is they lack any inherent distractions!

Television is my ultimate focus destroyer. I find it nearly impossible to read (or write!) when there’s a tv on anywhere in my home. Interestingly, a crowded subway is a much easier reading environment than a home with a television on. I think it’s because tv is designed to grab your attention at all costs, and it’s very good at this. If you’re trying to read while somebody else is watching tv, try playing some music to drown it out (jazz works well for me) or even better, invite the tv watcher to join you in silent reading!

My final reading suggestion is to stay motivated about reading. This can come in many flavours, but here are the three things I do:

One, I keep a spreadsheet of all books I’ve read, with a little bit of info and a review about each of them. This helps me see my progress in getting through books, and lets me glance back at any notes or thoughts I may have had while reading. You definitely don’t have to do this, especially if it feels like work to you, but I find it’s a useful way to keep me going.

Two, get excited for your next book. Whenever I read a book, I find it motivating to think about the book I’ll read after this one. That gives me something to look forward to and it helps me finish my current book. You don’t have to have a concrete ordered list of all books you’ll ever read, but it helps to plan one book ahead, one you can’t wait to get started on. If your current book is a slog, this will help (and if it’s too much of a drag, maybe stop reading it?).

Three, go to a bookstore often. Nothing in the world makes me want to read more books than walking around a bookstore. You don’t have to buy a book every time (though often I do…), but I find just being around a bunch of books and book lovers really makes me want to read all the time. Seeing the books, picking some out, walking around different sections, etc. Amazon is great for many reasons, but it’s an entirely different experience than walking around a physical store.

These are my main suggestions on how to read more. It can seem like an uphill battle at times, but the more you read, the easier it gets. As they say, the journey of a thousand books begins with a single page.

Arguing on the Internet

I want to talk about something I’ve been noticing in how people converse online, in particular publicly in networks like Twitter and Slack. A lot of this conversation seems to be argumentative, which misses a great opportunity to grow understanding in communities.

By “arguing” I don’t really mean people having shouting matches or otherwise having heated or nasty conversations, I mean the literal sense of the word, having a reasoned, rational, and relatively polite debate. Most of this applies equally to the nastier version of arguing most of us think about on Twitter, but I’m going to give the benefit of the doubt and talk about the kind of arguing that happens at best on Twitter and Slack.

What I notice goes something like this: Somebody will make a statement, then one or more somebody elses will reply to that statement, agreeing or disagreeing, with reasons supporting their stance. Again, it often ends up meaner and less reasoned online, but I’m talking about the best case.

As far as debating goes, this is pretty run of the mill. But the problem is lots of subtlety gets left behind. When all you’re trying to do in a reply is prove or disprove a statement, you ignore the nuance of what’s being said, and you don’t allow any of it to enter your worldview. There is no space for “Oh, that’s interesting! How does that relate to…” there’s really only room for “I disagree, here’s why…”

But it’s hard to fit that kind of nuance into a Twitter discussion. And while Slack lets you type long messages, the flow of Slack often doesn’t leave time for contemplation (at least not in a group setting). It’s not impossible on these networks, but these media really don’t want you thinking about the subtleties. So while possible, it’s not common.

A lot of what I publish here isn’t so much to be right or wrong, isn’t so much to prove a point, but instead it’s a way for me to share something I’m thinking about so that you, reader, can see a potentially different vantage point. You may disagree with some (or all!) of it, but I hope disagreeing with it doesn’t mean you ignore everything I say.


For most of my life I’ve tried to have discerning ears and critical eyes about what I read, hear, and learn. It’s not that I’ve just taken everything at face value and believed it all. But I think in recent years I’ve started to approach what I read or hear with more nuance. Essentially I’ve started to really internalize that there usually isn’t such a thing as “the whole picture” when learning something, or a “correct answer” when trying to figure something out. There’s no perfect political view, and there are no silver bullets.

What they teach you in school, for example, is often slightly or entirely incorrect. But even when what they teach is entirely accurate, it still leaves out different points of view, different histories, because there just isn’t enough time to delve into everything.

At their best, schools have to make value judgements about what’s most important to be taught. Unfortunately, this usually doesn’t include teaching the fact I just described, “Hey kids, this isn’t the full story, you should know that.”

I think the idea “this isn’t the full story” is a big one for me, because I’ve started to internalize there really isn’t a full story in the first place. But there are so many details we ignore if we assert to ourselves we know everything about a topic.

See also Bret Victor’s “Reading Tip #1” in his 2013 reading list.

Answers and the Meta Process

I was having a conversation with a co-worker recently where we talked about work processes, and how we don’t have all the answers figured out yet, but that we hope to find them soon. That got me thinking as to what we consider an “answer” for how we work. I’ll use the example of code review at my software development job, but this should apply, in the abstract, to any kind of thing you do at work.

Our “answer” to code review is to follow a set of steps on how to do it. This is our code review process, where we do one thing after another until the code review is done, and it works pretty well. But while the steps are easy to follow, this answer, like most answers, isn’t perfect. In particular, it has no mechanism to change itself.

But what if we get a little bit meta on our problem and say “the answer to the problem of code review isn’t so much ‘what are the steps to do code review’ but instead, by which process do we decide those steps in the first place?” Now it becomes much more interesting.

So the “answer” to code review becomes a process for finding out how to do code review. Instead of just being an unchanging set of steps, the “answer” now becomes a method for figuring out those best steps.

Day to day, this probably looks exactly how it did before we changed our point of view on it. But with this new perspective, we’re able to evolve how we do things as we go along.

This meta perspective isn’t just useful for code review, or just for job related things, but I think it can be applied anywhere you need an “answer” for something. Instead of treating the answer as a finite thing, treat the answer as a process for finding answers (and go as meta as you please).

Some countries use this technique for their governments. The United States decided the answer to tyranny isn’t really a specific person or law, but instead a process for avoiding tyranny called democracy. On the surface, democracy seems similar to code review: a set of steps you follow (voting) to achieve an outcome (leaders). But democracy also includes the process by which leaders lead, through an evolving system of law, among other things.

The idea of answers as an evolving process itself isn’t definitive, and not a solution for everything. But it may be a useful tool for your cognitive tool belt.

Redefining What Success Means for a Blog

When I started this website in 2010, I knew what a successful blog was. It was a blog with thousands of subscribers, and ideally, enough ad revenue to “take the site full-time” and be paid to blog all day. It wouldn’t hurt if you participated in a community with other bloggers, too.

That was a great definition of a successful blog in 2010 and I think it’s still a great definition in 2016, too. But damn is it hard to achieve. By that metric, I can really only think of a few select sites which should be considered successful. That’s kind of funny, isn’t it?

Let’s consider alternatives.

The biggest metric of success for me hasn’t been subscriber count (which is easy to say because I have a small subscriber count anyway), but more the quality of the people who subscribe. Specifically, when people tell me “hey I love your blog” or “that post you wrote last week really spoke to me,” not only are those wonderful things to hear, but they also tend to come from people I respect tremendously.

So, one form of success: few, but highly respected people read my stuff > oodles of people I don’t really know read my stuff. (True, they’re not mutually exclusive, but if I had to pick one, I’d pick the first any day).

Another definition of success is longevity. I’ve been running this site since 2010 and it’s quite remarkable to be able to refer to 6 years of my public writing on the internet. I’ve had my ups and downs in terms of quality, but this is one of the few projects I’ve stuck at for this long. The posts may not make me money, but they’re a public outcrop of some of my thoughts, linkable for all to read.

The final definition is kind of a mix of the two: I feel a major success whenever anyone refers to my posts. I don’t just mean normal links from other blogs (although those are of course great), but when somebody refers to one of my posts to help them understand or reason about something. When somebody points to my post and says “this! this is what I’ve been trying to say!” There’s pretty much no better feeling of success than having a company you’re interviewing at say “I know you’re a blogger because we refer to some of your posts in our internal wiki as part of our dev process.” How much more 😍 can you get?

There’s a lot of talk about the “death of blogs” but maybe that’s because our definition of a thriving blog requires it to make oodles of money it just can’t these days. But if we change our definition of a thriving blog, we see many are doing pretty OK! I look around at some friend-blogs, like Ash (who in large part inspired me to start writing) and Soroush (who in large part inspires me to continue writing) and theirs are doing stupendously well today. Blogs aren’t dead, we just have outdated perceptions of them.

After the Last Page

I find so much of reading a book takes place after I finish the last page. For me, someone still relatively new to reading books for pleasure, books really grow on me after I’m finished reading them.

Part of it is definitely letting my brain gel on the topic I’ve just read. After I’m done a book, it usually mentally goes on my back burner, but I often find myself making mental connections to what I’ve just read long after I finish reading.

Ideally, I’d like to formalize this process a little better, by taking more time to reflect on the books I’m reading (among other things). I’ve never been a super thorough note-taker, but it seems like a good way to reflect on what I’m reading. (It also kinda feels like work to me, which is perhaps why I don’t take reading notes!)

But there’s value in this extra churning. Even if a book is kind of a slog to read, I’ll usually try my best to finish it, because I’ll often get more value out of these books after they’re done than while I’m reading them. It’s these extra connections, made with other books I’ve read or experiences I’ve had, which draw out the value in a book. I suspect the more books I read, the stronger this gets.

The Modern Prometheus

“What’s the number one killer, worldwide?” asks Jason Brennan, CEO and founder of Frankenstein, Inc, a stealth mode startup Speed of Light is bringing you exclusive coverage of. We’re sitting in the Geneva Lab of their Palo Alto campus, where he’s talking about his company for the first time.

“More than cancer and heart disease and malaria, the number one killer worldwide is of course death itself,” Brennan answers. “We could cure all the other diseases, but eventually humans will still die of natural causes, so why even bother curing malaria or whatever? What we’re doing is much bigger than that.” Frankenstein’s plan is kind of ingenious: users take a daily anti-death supplement to help slow, but not stop ageing. A user death will still eventually occur, but Frankenstein has a revival device which they say is extremely successful at user revival. Web services typically measure their uptime by how many “nines” of uptime they have (e.g. 99.99% is four nines). Brennan says their revival units are good for five nines of revival odds.

“My mother always told me about money, ‘you know you can’t take it with you when you go.’ Her solution was to enjoy your money and be charitable while you can,” Brennan says with a smile, “but I’d rather just not die in the first place.” Brennan said he’s doing this by following his mom’s advice, funding Frankenstein with the vast majority of his personal wealth. “But I’m still charitable; I’ve donated lots to teach kids Javascript, there are just so many jobs out there still, so what better way to help the kids.”

Brennan seems either unaware or unconcerned about the irony when asked about his startup’s namesake, “I mean everyone’s seen a Frankenstein movie, but I like to think our approach is a little more civilized.” When asked how it compares to the book, he said he “[hasn’t] read the book yet, but it’s on my list. I heard it’s written by a woman too which is good because I’m trying to read a few books by women, you know?”

Frankenstein is still in private testing for now, but plans to launch a public beta this winter in Europe. Despite their challenges, Brennan is excited. “We think the launch is going to be out of control. We think it’s going to be a runaway hit.”

Don’t Terraform Mars

Yesterday, Elon Musk unveiled SpaceX’s spectacular vision of interplanetary space travel and the colonization of Mars. Their video, while dazzling, is scant on details (which as visions go, is fine), but it’s the detail at the very end of the video which leaves me unsettled: the terraforming of Mars.

I think terraforming Mars (the act of altering a planet’s climate to be similar to Earth’s, with breathable air and bodies of open water) would be a huge mistake. Yet if you look around much of the tech world, nobody is even questioning it.

SpaceX’s vision is suggesting, without displaying even a cursory amount of thought, that we should dramatically and irreversibly alter the fundamental climate dynamics on an entire other planet. Mars has plenty of water locked in ice, we just need to warm the planet up and bingo bango, we’ll have lots of liquid water to splash around in.

This is bad for two reasons:

First, we don’t yet have a very good track record of building an advanced technical civilization that doesn’t totally ruin the environment of a planet (e.g., Earth). I’m thrilled Elon Musk works on electric cars and solar cell technology. Both technologies are necessary for an environmentally friendly technological civilization, but neither are sufficient for one. We need much more: a strong fundamental indoctrination of environmental respect and preservation, new systems of government and (crucially) education to help populations thrive in new frontiers. There’s probably a lot more I can’t even think of, which brings me to…

Second: hubris. It’s incomprehensibly hubristic to think terraforming another world is a mere technological detail to be glossed over and figured out later. We can build space-faring rockets, what’s so hard about radically overhauling a climate? The hard part isn’t so much the physical alteration of a planet (we’ve managed to do that quite well on Earth, and we didn’t have to think about it!), but how to think about altering a planet. We’re not enlightened enough to deal with that, yet.

I am in full support of exploration of our Solar system. I think it’s crucial to our learning as a species, as representatives of Earth. We stand to gain so much by exploring new worlds, like where we came from, like if we have siblings among the stars. And eventually, yes, I hope that we’re ready to one day thrive on new worlds, but we have so many questions to answer first.

While we do have some international law governing what nation states can do in space:

outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means

we don’t have much precedent for companies attempting to claim ownership of celestial bodies.

What makes us entitled to the rest of the solar system? Is it ours to do with it what we please? Is it our manifest destiny? To let our capitalism, which has thus far ravaged our home planet, extend endlessly into the vastness of space, pillaging ever more worlds?

As usual, Carl Sagan implores us:

What shall we do with Mars?

There are so many examples of human misuse of the Earth that even phrasing this question chills me. If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes. The existence of an independent biology on a nearby planet is a treasure beyond assessing, and the preservation of that life must, I think, supersede any other possible use of Mars.

I don’t have answers to these questions, but we desperately need to explore them before we start fucking up other planets. They are not a technical detail to be figured out later, they are among the most important questions our species will ever ask.

Further reading:

Dear Old Friend,

Have you ever done a thing and then winced at the very thought of it basically as soon as you’ve done it and then forever? That’s basically what I do, all the time. It’s fun, you should try it.

I sent you a message a few minutes ago and in my head I was like “Oh hey I’ll just make it really short and peppy and that’ll be good,” thinking to myself how it’d been a long time and so I didn’t want to send you a long diatribe masking anything. I’d just be all aloof and that’d be an easy way to start a conversation.

But oooh, there’s that embarrassment creeping up on me.

The internet is so tremendously weird. It’s lovely and it’s terrifying all wrapped up into one big mess.

I wish catching up with people on the internet was more like the Dandy Warhols (“A long time ago, we used to be friends”… I know the song is more about moving on, but it’s catchy and fun, whatever) and less like “I’m lonely and it’s Friday night and we used to be friends, so let’s ‘Connect’ on Facebook” bleh.

Is there a nice middle ground that doesn’t involve one person sending the other a longish message out-of-the-blue? (oops) Or that doesn’t feel like bad nostalgia? Probably not.

Anyway, I was thinking to myself lately about how I’ve really connected with exactly 5 people total, ever, in my life, where I’ve had regular, honest conversation and that’s one of my favourite things (you’re one of those people, of course).

I’m guessing there’s like a 90% chance this message is just going into a void somewhere. Or like maybe one of your distant descendants will discover it one day: some kind of Indiana Jones-like character, spelunking around the internet, trying to discover relics of the ancient past, and they find this. Sorry, if that’s the case.

More sincerely,


You Don’t Have to Buy an iPhone Every Year

When I was a broke university student, I used to look toward the future when I’d be a well paid software developer. I thought to myself, that’ll be great because I’ll be able to afford a new iPhone every single year! That’s what All True iOS Developers do, right? If you read the Apple blog / twitter world, that’s certainly what you’ll hear. We buy a new iPhone every year; that’s what we do.

I’ve been hearing a lot of grumbling about the impending iPhone 7 and its supposed lack of a headphone jack. John Gruber jacked off about it last week, and lots of people are talking about it. Ugh, that’s really going to suck if they get rid of it, right? What am I going to do if I can’t use my headphones?

Here’s a suggestion I can’t believe I have to make: maybe don’t buy the new iPhone? I mean, if you’re an iOS developer, presumably you’ve got a fairly recent model already… there’s no real need to buy another one, especially one you seem a little sad about.

I never ended up buying a new iPhone every year, either. So far I’ve been getting one every two years. By this logic, my iPhone 6 would be up for replacement with this year’s iPhone 7, but now we’re at the point where this two-year-old model is so good even today, I feel no need to replace it. It’s still mighty fast, has a great camera, great battery. It’s a perfectly good device; replacing it would be a waste.

And that’s the other thing, too. It’s a waste of money to get a new phone every year, but it’s also a waste of resources (do you really need 5 iPhones sitting in their boxes, collecting dust?). It’s hard on the environment, and I dunno, rampant consumerism just doesn’t seem like a great thing, either. I’d love to get 5+ years out of a phone, wouldn’t you?

So, if the idea of losing a headphone jack on your phone seems unappealing to you, remember that you don’t have to buy it.

Amusing Ourselves to Death

Over the weekend I re-read Neil Postman’s fantastic Amusing Ourselves to Death, which I can’t say enough good things about. Seriously, this book is about as Jasony a book as they come, and no doubt a large influence on what makes me Jasony in the first place (previous post about the book).

If you haven’t read the book (shame on you), it’s essentially about how media shape the kinds of public discourse we have (specifically politics, current affairs, and education), and how America’s shift to a predominantly television-centric country diminished its ability to have serious conversations about these issues.

Postman argues public discourse in America was founded at a time of pervasive (book) literacy. The media of print entails memory: arguments can be complex and built up over pages, chapters, and volumes; the reader must take time to think, process, and remember what they’ve read; books allow us to learn the great ideas of history and of our current society. There were (and still are) plenty of junk books, but books and print supported well-argued, serious discourse as well.

Conversely, in television we find a medium of entertainment. Like print, there is much junk content on TV, which is just fine. The problem, Postman argues, is when television tries to be serious, because it fails in spectacular ways. Television is an image-centric medium, and as such it’s impossible to have complex, rational arguments for or against anything. Think about how dreadfully boring a “talking head” is on TV news, and those usually only last for a few minutes at a time!

Where print requires you to remember, television requires you to forget. Instead of long, coherent discussion, you have a series of images strewn together which are almost meaningless. In his chapter “Now…this,” Postman looks at tv news as an example of this. Most news segments last about 60 seconds, and are placed in an incomprehensible order. A devastating mass murder, now a political gaffe, now a car recall, now unrest in the middle east, now an advertisement for retirement savings. Not to mention immediately following the news is Jeopardy.

Amusing Ourselves Today

“But Jason!” I see appearing in a thought bubble over your head, “the book was published in 1985, when television was the dominant medium in America, but these days it’s been displaced by app phones and the Web. Is this book still relevant in 2016?” Absolutely, unequivocally, yes.

The good news is, some software allows for interactivity and personal agency. Through email, blogs, and forums (i.e., written word), we can have complex, well-reasoned discourse (I said can). We can even improve some of the shortcomings of the printed word, by pulling in various sources via links, by including images and interactive, responsive diagrams and graphics, and by collaborating with many people around the world.

Software does not require us to sit quietly, mouth agape, awaiting amusement. But today’s software does ask us to do so, relentlessly.

Much of what we do with app phones is largely incoherent. I’ll read an email from a friend, now I’ll check twitter, now I’ll check Instagram, now I’ll write some code. And too often, even just within one of these apps it’s all incoherent. First, remember that for the overwhelmingly large majority of software users, today’s social software is “what you do” with a computer or phone; Facebook is the computing experience for many people. And within an app like Facebook or Twitter or Instagram, you have a series of things strewn together in a “feed.” An article about Donald Trump, now your cousin’s baby’s 2nd birthday, now (lol) a video of this goat who faints when it’s scared, now hey cool an ad for Chipotle.

Or take Instagram for example. True, you’re consistently getting images, but that’s about it. There’s no space for discourse on Instagram. Image dominates, and the strongest message you can really send is a “like.” There is literally little space for discussion, and the discussion is largely irrelevant anyway. Instagram shows, it does not discuss.

Books and Beyond

My interpretation of Amusing Ourselves to Death is its thesis goes beyond books and television, and again focuses more on how media relate to discourse. It’s not to say that the printed word is some kind of ultimate medium for discourse, just that it’s presently much, much better at it than is television (and I think, most of our software, too). There’s nothing wrong with media that entertain us, the problem is when a medium only entertains us and is incapable of having cogent conversations about anything else.

That problem is just as important today as it was 30 years ago.

The Lost Art of Instant Messaging

All throughout middle school, high school, and much of university, MSN Messenger was the place for me and my friends to socialize online (if you’re my age but grew up in America, chances are you can replace MSN with AIM). MSN was an instant messaging system. You had a contact list, online / away / busy / etc statuses (with custom status messages), and usually had one-to-one chats (although you could have multiple people, too).

You knew your friends were available to chat because they had their status indicated. An “online” status meant there was a good bet if you messaged them, you’d get a response rather quickly. “Away” meant they were logged in, but probably not at their computer. “Busy” meant they were present, but didn’t really want to be disturbed. These weren’t hard and fast rules (someone could appear to be any status, but still be present anyway, and vice versa), but you generally felt a sense of presence with your contacts. You at least knew what to expect, generally, when you messaged somebody.

These days, it seems like Instant Messaging, as a concept, has largely vanished. In its place we have things like iMessage and texting (I’ll admit, I don’t have a Facebook Messenger account. Do a lot of people use this?), but we lose a lot with them. Sure, iMessage means you can send a message whenever, but you also lose the feeling of presence you got with IM.

Because there’s no concept of “online” or “away” (etc), you have no idea if the other person is available to chat at the moment. Where IM chats often felt engaging while both people were online, iMessage “conversations” feel sporadic, like a slow trickle of words back and forth. Sure, sometimes you do have bouts of back and forth messaging with iMessage, but more often than not a message is a shot in the dark (consider how gauche it is to text somebody “brb” or “gtg”). The expectation is the conversation never really ends, but in fact, it never really starts, either.

And who knows, maybe this is just me. Maybe everybody uses Facebook Messenger, or maybe everyone else just has more engaging friends they text or iMessage. I use Google Chat and literally IM with two people ever, these days. But I really miss having nice long conversations with my friends.

What about you? Do you have engaging conversations over iMessage / texting? Does everyone just use Facebook Messenger (or another IM service)? Or is it really a lost art?

Sorry Not Sorry

“You’re Canadian? You don’t have much of an accent” people tell me when they find out I’m Canadian. It’s true, I’m from New Brunswick, Canada, but I’ve never had much of an East Coast accent, and much of it has faded since I moved away from home a few years ago. I never really minded in the early years because I was a little embarrassed by it (my home region is generally considered a little backwards by the rest of Canada), but lately I feel like I’m losing a little bit of my identity because of it.

There are many telltale signs of a New Brunswick / East Coast accent. The big tell is our hard Rs (“are are harrd Rs”), though that’s common to most of the region (I correctly identified Kirby Ferguson of Everything is a Remix as an East Coaster on his hard Rs alone). More specific to New Brunswick is our unmistakable lexicon, like “right” (pronounced “rate”) to mean “very” (“it’s right cold outside”), “some” to mean “quite” (“it’s some busy at the mall”), “ugly” to mean “mad” (“she was some ugly when she heard the news, let me tell ya”). We drop suffixes (“really badly” becomes “real bad”), too. And I’m pretty sure we invented the “as fuck” intensifier (“it’s cold as fuck right now,” “I’m tired as fuck”) long before the internet caught on to it.

I took a linguistics class in university (which I highly recommend, by the way), and we learned about language extinction, that many languages are disappearing and we’re left with less and less as time goes on. I asked my teacher why this was a bad thing, but I kind of got a funny look (I meant the question genuinely, not in a rhetorical or smarmy way; at the time I didn’t really understand why a lack of diversity in language was so bad). I think I understand the general sentiment a little better now.

Since moving away from home, I’ve definitely lost much of what I had of an accent. When you’re not surrounded by speakers of your dialect, it feels weird using words or sounds you know will stand out to people you talk to. My Rs have softened, my “eh”s have disappeared, and even the most quintessential Canadian word has changed: my “sorry” has gone from the Canadian “soar-y” to the American “sar-y.”

It’s a weird kind of identity crisis to either sound normal to yourself, but weird to those around you, or to sound weird to yourself but normal to those around you. But I’m trying to reverse course by calling it out (and by watching copious Trailer Park Boys). Though the sound of the word might change, I’ll at least always say “sorry” when I bump into somebody—that Canadian part of me will never fade.

Mass Consumption and our Sense of Meaning

How odd is the juxtaposition between our mass consumption culture and the meaning of our lives? On the one hand, mass consumption gives us a perspective of the unlimited: there’s always more to consume, it’ll always be there, it’ll always replenish. On the other hand, our lives are inherently finite: you only get one childhood, you always figure out life too late, youth is wasted on the young, you’re going to die someday.

It’s kind of distressing to think about. Mass consumerism asks us to buy in (literally and figuratively) to the idea of limitlessness. It asks us to ignore, to not even think about, the fact that our lives are not at all limitless. There will be a new iPhone every year, the grocery store shelves will always be restocked, but I’m 27 years old and my childhood is long over and I’m never going to get another one.

Maybe it’s more comforting to think in the consumption mindset, that there will always be another book, another tv show to watch on Netflix, another hamburger to eat at McDonalds, a longer infinite list to scroll through. But it’s also really dissatisfying how little that lines up with my life, how much, in fact, it denies what my life is like. Consumerism doesn’t give me a frame of reference to make sense of my life, to understand what it means to age or to have a finite set of choices (and I bet looking at life as “a finite set of choices” only makes sense as a perspective because of consumption culture; we probably wouldn’t look at life as being limited without mass consumption as our default way of looking at the world).

I’m sure this is well covered in philosophy and I’m certainly not suggesting I’m the first person to think of it, just that, jeez, this sort of thing has been hitting me hard lately and I don’t know how to make sense of it.


I wanted to expand a little bit on a tweet I made the other day about aliens in science fiction movies. There’s an opportunity in these movies to explore western society’s fears about immigration amongst Earth’s peoples (immigrants referred to as aliens), but most movies don’t seem to do this.

Most movies about aliens see them as invaders and earthlings as the heroes, defending the homeland. My friend Brian pointed out to me these movies (and fears) aren’t about immigration but colonialism. The aliens aren’t looking to join us, they’re looking to conquer us. It’s a great point, and I think it matches up with fears many people hold about immigration, but I think it’s weak of screenwriters to pander to these fears instead of exploring them.

Science fiction is a lens we use to see ourselves and our current world, it’s a way to extrapolate and play “what if?” and see more sides to our lives than we currently see today. In stories like Brave New World and Nineteen Eighty-Four, fears of oppression through technology were explored, not celebrated.

But in many of today’s alien-related movies, the fears of being taken over by aliens are reinforced, not examined. We’ve got our guns and we’re the heroes, nobody’s gonna take our land from us, we say. Why don’t we have more movies where oh, I don’t know, the aliens aren’t invaders but are refugees? Or where the hero says “Wait, hold on, are we sure they’re actually invading? Shouldn’t we learn from them before we start blowing them up?” Whether or not people really do think immigrants are invaders looking to oppress us, it’s cowardly for alien films to not examine this.

There are a few good examples, though. District 9 is particularly on the nose about aliens with a refugee status; there are humans who see those aliens as invaders, but those humans are portrayed as villains. E.T. has aliens not as invaders or as refugees, but as explorers who wish to learn. True, E.T. is a visitor, but he’s also explicitly not an invader. Despite naming the titular alien a “xenomorph,” the movie Alien is a lot more about sexual predation than it is about invasion (the face-huggers and chest-bursters are not so subtle allusions to rape and its unwanted consequences). I’ve heard good things about how Alien Nation handles immigration, but I can’t personally vouch for it. And I’m sure it’s explored better in science fiction literature, too.

Immigration is a vital topic to pretty much everyone on this planet, yet fears of it are pandered to and reinforced in science fiction movies all the time.

PS: Yeah, maybe actual contact with actual extra terrestrials wouldn’t go so hot. They’d almost certainly be of vastly different intelligence, technical prowess, hell, even body chemistry (microbial exchanges alone could easily destroy us). They may not be violent invaders (that’s probably more a reflection of our own evolution and history than of theirs), but they’d definitely have arisen from some form of natural selection, originally. But movies with “alien invasions” are hardly about presenting scientific reality, and that’s OK. An alien movie where they come here and we all get alienpox and die probably isn’t telling a very good story.

PPS: Yeah, it’s also problematic to have actual aliens represent humans from different countries. Showing them as wholly different, often monstrously so, reinforces views that “aliens are other” which doesn’t help anybody.

Reclaiming #NotAllMen

Today the phrase “Not All Men” (often #NotAllMen) represents something pretty terrible. When feminists speak on the internet about the patriarchy, inevitably dudes will butt in with the phrase “Not all men!” to say, “Not all men are rapists!” “Not all men wish for inequality!” etc. I won’t go into all the details of why this is problematic because many better essays have already been written, like this one or that one.

But I’d like to reclaim this expression. I want “Not all men” to mean “I don’t want this thing to only have men.” For example, the programming team I work on currently has no female developers, so I want this team to be Not All Men, but include women (and people of any gender, too.)

I want casts of movies and TV shows to be Not All Men. I want people I see at conferences to be Not All Men. I want the CEOs and people in the news to be Not All Men.

To be clear, I know there are many women (and people of all genders) currently working very hard to achieve these goals, and I support that in every way. By reclaiming this phrase, I hope we can reinforce and help what’s currently being done. I hope the phrase can act as a reminder to us all that until we see teams of Not All Men out in the world, there’s still work for all of us to do.

Social Media Cheesecake

I’ve been thinking more about the phenomena of social media, popularity, and expectations and I’ve thought of a new metaphor:

I’ve made a cheesecake, and I’m not a professional chef, but I’ve worked really hard on this one and I’d really love to share it with everyone, because everyone loves cheesecake. But nobody wants it, because they’re stuffed from all the other cheesecake (and pies and puddings) they eat all day, everywhere.

So of course this makes me sad. I worked hard on my dessert and I think it turned out great. But social media is a potluck with way too much food. And even though you’ll only really connect with people sitting directly beside and across from you, it’s a potluck you simply must attend, because there’s so much good chow.

More Thoughts on Blogs and Conversations

The following is a mishmash of thoughts following up from yesterday’s post about blogs and conversations. The real theme of today’s post is “I don’t really know what a blog is” and “that’s OK” and “blogging will probably die” and “is it just me or are these posts getting less coherent as time goes on?”

There isn’t really a strict definition for what a blog is, but it’s safe to say a blog is usually a collection of posts about something, sorted by recency, and usually with some kind of way to subscribe (RSS or Atom, or these days Twitter / Facebook feeds). The form of blogs is always kind of undulating, evolving, following the people (see The Awl’s The Next Internet is TV about this).

So blogs end up less like books and more like news or other periodicals. Yeah, the blogs I’m talking about are personal blogs, not tech “news” or what you’d typically think of as a periodical, but they are based on time. You either come to a blog because you saw a link to it (where else, but on some sort of time-based stream like Facebook or Twitter), or you come to a blog to see what’s new (maybe from a time-based RSS reader).

The medium of the blog is all about time. Thus its content is shaped around time. That’s why so many blog posts are about current events and that’s why it feels like blogs should foster better conversations, and that’s why it’s so frustrating they really don’t.

I don’t really know what my website is all about. Maybe it’s my web diary, maybe it’s a place for public pontifications. But definitely at some level, I’m putting ideas out into the world because I care what people think about them. At some level, I want to spark something in you, the reader. I hope what I write tickles some part of your brain so you think and ideally, respond (maybe this is fundamentally manipulative, though? there’s another post idea for the future).

See also:

Blogs and Conversations

Recently I’ve been going through Patrick Dubroy’s excellent blog archives and I stumbled upon a post titled “Blogging is the hardest ‘conversation’ I’ve ever had” which really resonated with me. Pat said:

Yesterday, after writing my post in reply to Atul, Aza, and co., I was thinking about how much work it is to put together a post like that. You often hear people refer to blogs as a “conversation”, but if that’s true, it’s more work than any type of conversation I’ve ever had.

Compare it to other kinds of group conversation we can have on the internet:

  • IM, IRC, etc.
  • Twitter and FriendFeed
  • wikis (not all wikis are really conversation-friendly, but the original wiki certainly is)
  • email, discussion forums, blog comments

Writing a blog entry in response to someone else’s is far more difficult than any of those. Partly, it’s because blogging is often slightly more structured and polished than the other methods; but there’s also a lot of overhead in the actual act of writing a post.

This has definitely been my experience too. Trying to stitch together quotes and links to other blogs is incredibly tedious and error-prone. And if you use a format like Markdown, making sure you’ve got the quotes, lists, and links properly copied over is just that much harder. Everything’s so fiddly. Is it any wonder almost nobody does it?

When I started my website in 2010, I was really excited to jump in to writing on the web. There were blog conversations all over the place: Somebody would post something, then other blogs would react to it, adding their own thoughts, then the original poster would link to those reactions and respond likewise, etc. It became a whole conversation and I couldn’t wait to participate.

But I’ve never really had much of a conversation on my website. I’ve reacted to others’ posts, but I’ve never felt it reciprocated. I never felt like I was talking with anyone or anyone’s website, but more like I was spewing words out into the void. Some people definitely enjoy what I write, some agree and some even disagree with it, but the feedback has always been private, there’s never been much public conversation.

And I get it. Like Pat said, the interface to blogging doesn’t really encourage conversation, which makes blogging feel anti-social and lonely. My guess is blog comments were a way to make things feel more social, less isolated, but unless a lot of thought is put into them, comments become a total shitshow almost immediately (see Civil Comments, a promising attempt at fixing this). RSS lets readers subscribe to your posts, but you have no relationship with these people; ideally you want your readers to be peers so you can read their blogs, too.

There’s a lot of talk about the death of blogs, and it’s easy to understand why. Blogs are a lot of work to set up, they’re often fiddly to get right, people feel an urge to put out their best selves, and they have a terrible interface for being social. Not to mention how terrible writing on a touch screen is.

Luckily, there are still a few of us nuts around still writing on the web, who don’t really care if “blogs are dead” or not. But we sure could use some company.

Software Development Stage of Dread

Sometimes I’m working on a particularly difficult task, maybe it’s a bug I can’t quite squash, or a feature I’m a little stuck on. But sometimes, when I get to that hard part, instead of hunkering down on it, my brain says “oh well, time to go see what’s on the internet!” This is the Dread stage of software development.

Between you and me, the logical part of my brain knows, yes, this is a bad path. When I encounter a hard problem, skipping off to the internet is the last thing that’s going to help me. But obviously there’s a compulsion in there that makes me do it.

This is pretty much procrastination 101, where I don’t want to do the hard thing, so I go do the easy thing instead. But I think it’s also compounded by working from home all the time: I don’t really head to Twitter to see cool links, but instead to hear from people. That’s unfortunately one of the messed up parts of Twitter: humans are mixed in with brands, and everyone seems to be linking off to something they find interesting; there never seems to be a lot of human conversation (other than impossible to follow shouting matches).

I’m not trying to excuse heading off to the internet, but I am trying to understand why I do it because I’m hoping that will help me prevent doing it.

This Dread stage only gets worse as time goes on: the less I focus on the hard problem, the harder it becomes. So the “obvious” solution is to keep a longer focus on the problem (easier said than done). But the underlying solution, I think, is to feel more engaged with the problems I’m working on. While I find working for Khan Academy to be immensely fulfilling, every app has its share of mundane bugs and features. I need to remind myself, yes, maybe this random UI bug feels pointless, but it’s in service of a greater goal (helping millions of learners have access to a free education). And it’s really hard to see, especially when I’m a developer looking at code that could be in any app, that in fact this isn’t just a random bug: it has positive impact far beyond the bug itself.

It’s so easy to get lost in the minutiae of everyday hard problems, and it’s so hard to remember, sometimes, why I bother. But I think it’s worth it in the end.

Programming is Performance Art

I heard this idea years ago (and naturally, can’t remember where), but it’s been in my mind ever since: programming is performance art. I’m not talking about the act of programming per se (although that could also be considered a performance), but that the result of programming is performance art.

Chances are, the things you and I program today won’t exist as programs in even just a few years. OS APIs, platforms, dev tools, even hardware, all continuously change, so much so that today’s apps will soon enough start to rot. It’s hard to use a piece of software unchanged for more than 5 years; more than 10 is almost impossible.

Software is not a medium that preserves itself. Old software is best preserved in writing, pictures, and movies (media whose own digital formats are still subject to rot, but it seems at least less so), but rarely can you directly execute the software itself. You can watch a video of Doug Engelbart’s oNLine System but you can’t play with the software itself (thankfully you can play with a Xerox PARC Smalltalk system, though).

There are some workarounds, but they’re rare. Writing for the web browser seems to be a good way of achieving some degree of longevity (Javascript in browsers seems to be quite stable, but maybe the dev tools aren’t). Writing and maintaining one or more layers of virtual machines seems another route, although I worry that’s just shifting the problem down a level of abstraction. I’m sure there are other solutions (ship the platform with the app?), but these are exceptions: the way software exists today is temporary.

The main way to prevent software from rotting, it seems, is to maintain it: update it so that it continues to work as the platforms supporting it change underneath. In this sense, though, it’s not the same software you started with, as it’s continuously changing. You can’t step in the same river twice, they say.

It seems this is the way software is meant to be: a thing that exists, for a time. Software is not a book or a painting, software is a Broadway matinée or a parade. It may happen more than once, it may go by the same name, but every time it’s different.

A Refreshed Speed of Light

I’m still in meta post land, and today I wanted to briefly touch on the slight redesign of my website (if you’re reading this in a feed reader, take a sec and poke around the real site). Here’s what’s new:

  1. Boosted the type size way, way up. I’d been meaning to do this forever, but a recent essay about accessibility tipped me over the edge. Everyone can read big type, but not everyone can read small type, simple as that.

  2. At the same time, I lightened the look of the page a bit: gone is the heavy black border around the page; instead I’ve got a lighter border, which feels representative of the old look, too, without weighing the page down.

  3. Similarly, I moved the giant masthead below the first post. When you come to a post, you probably don’t give a crap about the name of the site, and instead just want to start reading. If you really want to “click to go home” at the top of the page, you can still do that; there’s a big invisible space at the top that’s a link to the homepage.

  4. I got rid of the responsive jazz. When I last redesigned the site, “Responsive” sites were all the rage, and I used a column-based CSS framework. It was nifty, but ultimately way overkill for what is essentially a one-column website. Now that column is centered. Finally.

  5. The site should still look great on mobile (where the design has become even lighter, and finally, a Futura Condensed Extra Bold masthead on iOS!).

  6. I fixed the El Capitan bug where all the type looked bold? wtf Safari? (I would have fixed this sooner but I have yet to upgrade my machine, and I was honestly hoping Apple would have fixed the bug by now. Ah well, fixed now).

That’s essentially it. Most of the changes are relatively small (except for the type, which is relatively big), but I think it makes for a much more readable experience.

If you find any problems or have any feedback, please do let me know!

How I Write Every Day

Yesterday I talked about my guidelines for writing every day, and today I want to talk about how I write every day. As I mentioned yesterday, regularity, without rigid rules, has been pretty key for me, but it wasn’t really clear to me how to go about doing this until I gave it some thought.

In terms of physically doing the writing, I usually do it every morning before work and then publish more or less immediately after (let Twitter be your copy editor!). Writing first thing in the morning has worked really well for me because my head is mostly clear when I first wake up. I try to stay off Twitter / social networks before I get started, because they often pollute my head (sadly this is true any time of day) and make it harder to focus on what I’m trying to say.

Each post takes me around half an hour to write, depending on how long the topic is and how much of a groove I’m in (as mentioned yesterday, this has gotten easier over time but I still struggle from time to time).

This groove is something I strive for, and it’s made easier by obsessively thinking about what I’m going to write before I start typing it out. This is your standard “literally walk around outside with the idea in your head / shower thoughts” sort of thing, but I find it helps me explore points I want to make in the post. As I’ve mentioned before, there’s no real “true form” of the idea; what’s in my head and what gets written are different, but thinking about the idea before writing it definitely helps. And because I write one post per day, that means I get about one day to pick an idea and let it bounce around my head before I write about it.

The ideas, which I keep in a todo list, tend to come from three primary sources:

  1. My idle thoughts while going for a walk, riding the subway, doing the dishes, or writing other posts. I tend not to listen to music or podcasts while doing these activities and instead let my time be my time (i.e., don’t kill time).

  2. Conversations with people. Jeez, this is a great way to get ideas: take them from your friends! But more seriously, riffing with someone is a great way to explore ideas. (I wonder, what would a writing medium look like if it was based on riffing with people?)

  3. Reactions to things I read elsewhere, be they books or posts, or industry trends (in my head, many of these posts start with “I got a lot of problems with you people!” in George Costanza’s voice). Sometimes I rant, but often seeing or reading something inspires a little nugget of an idea, which eventually grows into a post.

When I have an idea for a post, I try to write it down as soon as possible (I embarrassingly forget them sometimes) and leave any notes I can think of on the subject so I’ll have something to start with when I revisit it.

That’s about all I can think of for my writing process. It’s not perfect, but it’s been working well for me. Though I’m writing mainly to get the ideas out of my head, I try my best to write accurately, to not assert anything I’m unsure of, and to note when I just plain don’t know what I’m talking about. I don’t want anyone to treat my writing with authority, but I’m so glad when people like what I write. It’s the best mental exercise I’ve ever done.

If any of this sounds like fun to you, I highly recommend giving it a shot, and please let me know when you do; I’d love to read it.

Writing Every Day

I’ve been writing (and publishing) every weekday on my website for almost two months now and it feels incredible. And it’s been a lot easier than I expected. Here are the guidelines I run with:

  1. Post one thing almost every weekday.
  2. Write it when you get up in the morning, before you start work (I work from home, so that helps).
  3. Publish it when people are awake.
  4. It doesn’t matter how long or well researched it is, really (but try not to write junk).
  5. If I’m sick or on vacation or just really can’t post, don’t sweat it.
  6. Do this until I don’t want to do it anymore.

That’s basically it. I’ve been unusually consistent (for me) at this, in part because I treat those as guidelines, not hard and fast rules. Normally when I set a goal for myself it’s way too ambitious, I feel overwhelmed, and I bail on it. The usual me would have said at the start, “I’m going to publicly commit to publishing one post per day, every day, for the next year,” and then I would have failed after two weeks.

But with this project, I’m trying to be as lax as possible. I wanted to write every day because I had a backlog of ideas to write about and because it was a good motivator to get out of bed a little earlier every day. I have no real goal in mind of writing for a year or anything like that; I just want to do it until I don’t want to do it anymore. That feels so much easier and less of a burden than if I’d set some big lofty goal for myself.

I wouldn’t consider any of my writing truly amazing, but that isn’t really the point. The point is for me to think out loud, get the thoughts out of my head, and have fun in the process. I was worried I’d quickly run out of post ideas, but my idea list is twice as long today as it was when I started (and that’s not counting everything I’ve written about in the meantime), so there’s no real end in sight (at least until I get to a point where I don’t want to write any of the ideas in my list).

Writing every day has made it a lot easier for me to “just write” and I think it’s made me a better writer, but I absolutely still struggle from time to time, too. Sometimes I can just crack my knuckles (ew) and crank something out and it’s awesome. But other times I’ve struggled, deleted attempt after attempt, and eventually switched topics for the day.

It’d be easy for me to say “So, I’d failed at my projects goal and decided to do this writing-every-day goal instead, aren’t I smart?” but in reality it only looks like that in hindsight. The two were mostly unrelated. It just so happens that writing every day has helped me get into a better habit of practice and improvement, but it wasn’t done as an alternative to my failed goal.

(Huge credit also to my friend Soroush Khanlou, who wrote a post per week in 2015, he is a major inspiration. Mine are mostly furiously written and then published, but his are thoughtful, well researched, and edited.)

On the Setting and Failing of Goals

As I said in yesterday’s post, I think it’s better to be internally, rather than externally, motivated while trying to make great work. It’s better, I think, not to worry about what others are doing and instead focus on what I’m doing as a motivator for my own stuff.

And yet, I can’t help but keep coming back to this Bret Victor Showreel of his work from 2011 to 2012. In just two short years, Bret created (or at least published) a remarkable amount of groundbreaking work, month after month, sometimes week after week.

I also keep thinking about this (probably apocryphal) story about making pots:

The first half of the class was to be graded based on the number of pots they could create throughout the semester. The more pots they made, the higher their final grades would be. […]

In contrast, the second half of the class was told that their grades depended on the quality of a single pot; it needed to be their best possible work. […]

At the end of the semester, [outside] artists were […] commissioned to critique the quality of the students’ work and overwhelmingly declared that the craftsmanship of the pots from the first half of the class was far superior to those of the second half.

The lesson I took from all of this was that if I want to make really great stuff, I have to be prolific: I have to make a lot of stuff, iterate on it, learn from it, improve it, and finish it.

So I set a goal for myself near the end of 2015: I was going to make and publish one project per month. These projects were to be mostly research prototypes of neat interfaces I’d been thinking up; I’d research them, prototype them, iterate, then write and publish a little essay at the end of each month.

It’s nearly April and you may have noticed: I have not at all succeeded at this goal. It turns out, this goal was pretty hard for me for a few reasons:

  1. Research, prototyping, iterating, and writing take a lot of time.
  2. I have a full-time job.
  3. I enjoy spending my free time with my wife, friends, and family.
  4. I can’t seem to stay focused on things, or at the very least, I’m easily distracted.
  5. Finishing and shipping things, even prototype demos, is a challenge for me.

I’ve released one well-researched essay project pondering Xcode for iPad, but other than that I haven’t been too successful at my goal of making a ton of projects. I have, however, been writing a lot. But more on that tomorrow.

Motivations of Popularity

Yesterday I wrote a bit about popularity and how I deal with (the lack of) it. Today I want to dive a little deeper into why I even care about it. Despite writing about it this week, I don’t normally spend a whole lot of time consciously thinking about popularity or being liked or well known or respected. But it obviously matters to my brain at some level.

At the core, I think it’s part of being a human: we’re innately social beings and generally speaking, that’s a good thing. It feels good to our brains to be liked, to be a part of the group, to communicate with our friends, and, I suspect, our enemies, too.

Today’s online “social networks” definitely exploit this, though. We’ve had this innate social ability for hundreds of thousands of years, and suddenly things like Facebook show up and amplify our social tendencies to an extreme degree, and that makes us behave strangely.

What used to be a joke told to a physically present group of friends is now shared with hundreds of people on Twitter. Where before I might have expected a few in-person chuckles over the span of several seconds, on Twitter I feverishly refresh to see if anyone has “hearted” or retweeted my quip. Did anyone like it? Does anyone think I’m funny?

Maybe I’m more socially obsessed than I’d realized. But I feel like today’s online social networks severely subvert what it means for humans to be social, in ways we haven’t adapted to yet.

See also danah boyd:

i started wondering if social media is dangerous. Here’s what i’m thinking.

If gossip is too delicious to turn your back on and Flickr, Bloglines, Xanga, Facebook, etc. provide you with an infinite stream of gossip, you’ll tune in. Yet, the reason that gossip is in your genes is because it’s the human equivalent to grooming. By sharing and receiving gossip, you build a social bond between another human. Yet, what happens when the computer is providing you that gossip asynchronously? I doubt i’m building a meaningful relationship with you when i read your MySpace CuteKitten78. You don’t even know that i’m watching your life. Are you really going to be there when i need you?


Kottke recently linked to this video about creating and popularity that I really enjoyed:

Adam Westbrook talks about Vincent van Gogh and the benefit of doing creative work without the audience in mind.

It’s a wonderful video discussing van Gogh’s prolific output, even when nobody was buying his work. Westbrook argues van Gogh wasn’t motivated by onlookers or social success, but was instead motivated by autotelic goals:

Mihaly Csikszentmihalyi describes people who are internally driven, and as such may exhibit a sense of purpose and curiosity, as autotelic. This determination is an exclusive difference from being externally driven, where things such as comfort, money, power, or fame are the motivating force.

The video doesn’t really address today’s social landscape. Yes, van Gogh theoretically could have had a physically close social group (or a distant social group, as with his brother), but he couldn’t have had a social group with thousands of people like we have today. He wouldn’t have seen likes and favs and retweets whirl by him every day, and he wouldn’t have felt the same social pressures we have today, either.

I think internal motivation is ideal, and it’s something I strive for myself (make awesome shit that I’m proud of, and don’t care so much what others think), but I think it’s unfair to feel bad about caring what others think, too. I also think it’s important we examine why we feel so socially overwhelmed online these days (or at least, why I feel that way; I don’t wanna drag anyone else in with me), and that we demand better from social networks like Facebook and Twitter (like, for example, the work of Joe Edelman).