Wil Shipley on the HTC Lawsuits. Wil Shipley writes an open letter to Steve Jobs about Apple's recent multi-patent lawsuit against HTC.

Enforcing patents is wrong. You’ve famously taken and built on ideas from your competitors, as have I, as we should, as great artists do. Why is what HTC has done worse? Whether an idea was patented doesn’t change the morality of copying it, it only changes the ability to sue.

Apple has been on [both][1] [sides][2] of this before.

[1]: http://en.wikipedia.org/wiki/Xerox_PARC#Adoption_by_Apple “Big mistake”
[2]: http://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Microsoft_Corporation “Bigger mistake”

Speed of Light

Is the iPad Just a Big iPhone? Cameron Daigle gets it:

This is an absolutely essential lesson for app developers. Applications that think the 20% Rule still applies in the streamlined, intuitive world of the iPad will find themselves woefully outdesigned by folks who Get It.

Engineer Thinking. Matt Gemmell:

The existence of uncertainty is not an excuse for exposing it to the user.

I constantly find myself expressing the above point to my developer friends. There is a certain niche market for the tinkerer and his software, but 99% of the time, we write for those terrified of using computers – let alone tinkering with them.

Put another way, how many different kinds of peanut butter need there be on the grocery shelf?

State of Panic. It's stuff like this that makes me get out of bed every morning. Presently, I can only dream of doing something so wonderful.

The iPad Paradox. Mike Elgan of Macworld on consumer reaction to “feature-poor” products:

A strange trend has emerged that violates the more-is-better ethos of American consumer culture. Some products and services are touting limitations as desirable “features.” And consumers are loving it.

I think Elgan is missing the point somewhat. It's not so much that people love the iPhone for what it doesn't do; they love it for how well it does what it does.

Why the Wait?

Apple typically makes their products available for sale within days of their announcement (a typical example being the new iPod nanos, announced September 9th and available for sale just days later; an atypical example being the iPhone, announced January 9th, 2007, and available six months later). As with the iPhone in 2007, Apple announced the iPad in late January, to go on sale some “sixty days” later.

A friend recently asked, “Why the wait?” My initial, glib response: to stave off would-be netbook owners from shopping elsewhere while Apple spit-shined the iPad for a late-March release.

After letting the question percolate in my head for a few days more, I'm beginning to feel there's more to it.

(Digression: there was a minor stir in the Apple-web following the announcement of iPad's April 3rd availability; sixty days from January 28th is March 29th, a Monday. This led to the conclusion that Apple is late with iPad, delayed by either hardware or software. My speculation is that neither was delayed, and “sixty days” was a ballpark estimate; if the release date had been set, Apple would have said “March 29th” from the start.)
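For the curious, the digression's date arithmetic checks out. Here's a throwaway sketch (the dates are the ones cited above):

```javascript
// Sixty days out from the January 28th date used above does indeed
// land on Monday, March 29th, 2010.
const announcement = new Date(2010, 0, 28); // months are zero-based

const sixtyOut = new Date(announcement);
sixtyOut.setDate(sixtyOut.getDate() + 60);

const weekdays = ["Sunday", "Monday", "Tuesday", "Wednesday",
                  "Thursday", "Friday", "Saturday"];

console.log(sixtyOut.toDateString());     // Mon Mar 29 2010
console.log(weekdays[sixtyOut.getDay()]); // Monday
```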

Which leads me to why the sixty days between unveiling and going on sale: buzz, marketing, and developers.

Apple is touting iPad as a “breakthrough device”, Apple detractors are touting it as a “big iPod touch”, and Apple supporters are touting it as “for your mom”. Whatever you consider iPad to be, it's important for Apple to figure this out. I have a feeling even Steve Jobs doesn't quite know how to classify it. Also important is getting the word out to the masses, including Your Mom. She's not going to know about iPad the day it's announced, but you can be sure, Apple being Apple, she'll know about it come release day.

And like iPhone before it, iPad made its real public debut during the Oscars, with the premiere of its first television advertisement. This is exactly what Apple did in 2007 with iPhone; the only difference is that Steve Jobs was, himself, in attendance this time. The ad targets the perfect audience for the device (read: Normal People), shows them precisely what problems the iPad solves (my gut tells me iBooks will be a must-have for Normal People), and sets the stage for the device's release. Apple wants to get people's mouths watering.

But of course the iPad, with its iPhone OS, would be mocked heavily were it lacking the third-party apps which made iPhone and iPod touch so successful. And while Apple lets these existing apps run “out of the box” on iPad, the experience seems sub-par at best. The iPad really demands apps designed specifically for its larger display and form factor, and those take time. So, wisely, Apple made the iPad SDK available the same day the device was announced. Though Apple hasn't unveiled its plan for when these apps become available, one can imagine it will be at least, you guessed it: sixty days.

If you think of iPhone OS devices more like game consoles, or even desktop operating systems, Apple's direction makes much more sense. Apple sees iPhone OS as a platform, and it's important for a platform to have third-party support.

What We Believe In

I believe in a website with articles, and not a “blog with posts”. I believe in creating an exciting place to share my writing, to explore my journalist side. I believe calling any website a name which starts with the sound “blaw” is a serious disservice to the content from the get-go. I believe content is king.

I believe in no comments on articles, as I've never seen comments on any site provide a positive contribution to the content, and the content is king. I believe in well-thought-out, civil, and direct discussion through emails and replies on other websites.

I believe in providing content in as many accessible ways as possible, including valid HTML5, CSS, and Atom feeds, with full content, because content is king. I believe the reader should enjoy the content how he or she pleases. I believe in making the website easily readable, with keyboard shortcuts (h/j/k/l/r). I believe in never truncating the Atom feed for those who choose to read it, and I believe in adding one extra goodie for those who visit the actual site (and I believe those who discover the goodie should not talk amongst themselves).

I believe in Futura, Gotham Condensed Extra-Bold, Helvetica, and Menlo. I do not believe in Arial.

I do not believe in SHARE ME links to Digg, Reddit, StumbleUpon, and Twitter. I believe my readers are smart enough to do that themselves if they so choose, and I believe my readers deserve to be treated as readers, not traffic-traffickers. I do not believe in FEEDBACK tabs floating on, and obscuring, content, because I believe content is king.

I do not believe in writing junk just to get pageviews. I believe in writing great content to attract great readers. I believe in linking to things I find interesting, and to writing extended pieces whenever I find the time.

I believe you will enjoy this website.

Ode to the Explorers

That's you. You're an explorer. Yes, you, the one reading this. An explorer. You're amongst the very first to conquer this markup.

Thank you.

There may be some bumps on the way; that's part of being an explorer. You might find pages not working. You might find feeds not validating. You might find links broken. You shouldn't find any of these, but if you do, please report it back to your dearest narrator.

I would also appreciate discretion during these initial explorations. The domain of this land has not yet been finalized, and as master of this domain, I'd hate for word to get out prematurely. So for now at least, please don't share this site with outsiders.

You should be able to subscribe to the site with any modern feed reader. Most browsers will even detect the feed (except, oddly, Chrome on the Mac).

You should be able to read the website in any modern browser, even a mobile one. I'm interested to find out how it renders on Android browsers, specifically the typography (if it looks like ass, let me know; screenshots would be nice either way). I've taken no special measures to ensure this site renders properly in anything except WebKit, plus a few trials in Firefox (but even then I don't really care). I only really care if it renders poorly in YOUR browser, because you are a close friend of mine and your opinion matters to me. So do complain if you feel the need.

You can navigate up and down through the articles with j/k for you vi-tards out there (I love you). Use h/l to go forward and backward through the pages of articles (which causes a slight glitch when you use a keyboard shortcut to go to your browser's address bar…). Press r to reload the current page.
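The shortcut handling amounts to little more than a key-to-action table. Here's a sketch of how it might look (the action names, and my reading of which direction each key maps to, are my own guesses, not the site's actual code):

```javascript
// A sketch of the h/j/k/l/r navigation described above.
const shortcuts = {
  j: "next-article",
  k: "previous-article",
  l: "next-page",
  h: "previous-page",
  r: "reload",
};

function actionForKey(key) {
  return shortcuts[key] || null; // unbound keys do nothing
}

// In the browser, this would hang off a keydown listener, e.g.:
//   document.addEventListener("keydown", (e) => {
//     const action = actionForKey(e.key);
//     if (action) perform(action); // `perform` is hypothetical
//   });
```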

Presently there are about 5 articles per page. Eventually I will bump this number way up, because I hate pagination. But it's low now just to test the pagination mechanics.

A few known issues: most of the other pages on the site are fairly empty, and there's a broken image link in one of them. If you read the site to the end of the articles, you can still page further and further back through back-issues without actually seeing any content. I could fix it, but I've been busy.

Feedback is always appreciated. Thanks for being the first to help me test this website.

The Finder Is Dead. Sachin Agarwal kind of gets it:

When you launch iTunes, you see your music. When you launch iPhoto, you see your photos. When you launch Mail, you see your email. Where is it all stored? Who cares. Apple stores these files on your Mac in a folder or “package” that isn't meant to be examined or manipulated.


Apple used this as the de facto model for the iPhone. Each application has its own sandbox of files and data. The user isn't aware of or troubled by the concept of files or storage.

It's important to note that while this is a great idea, tearing off legacy cruft (even if said cruft is merely mental) is horrendously difficult. Mental models of “Files and Folders and Desktops” have long been established, and while that hasn't stopped Apple in the past, I'm unsure they're about to suddenly flip the Mac on its head.

iPhone OS (including iPad OS) was a fresh start, and I imagine they will continue with their mobile OS as their primary focus for some time to come.

As much as I'd love to see user-facing files-and-folders dumped, I just don't think it's safe to say it's almost here on the Mac.

America's Education System Doesn't Make the Grade. Katherine Mangu-Ward claims the American school system is failing students:

Smart kids are bored, and slower kids are left behind. Anxiety about standardized tests is high, and scores are consistently low. National surveys find that parents despair over the quality of education in the United States – and they're right to [sic], as test results confirm again and again.

The solution, thus, is to move everything online!

However, it's time to take online education seriously – because we've tried everything else.

I don't disapprove of moving education to a more digital system, as anyone still clinging to deadtrees is only kidding themselves, but it seems unwise to do so solely because “we can't think of anything better”. I, for one, when faced with a quandary, first seek out existing solutions. Seeing as America is not the only nation on this green earth, and seeing as America is not Number 1 in education, perhaps it would be a good idea to look to Number 1 for inspiration (it's difficult to definitively rank a country “best at education”, but all signs point to Europe). Surely those highly ranked countries are doing something right.

Engadget's iPad Review. Thorough review of iPad from a geeky perspective. The only part I don't like:

Currently, there is a web standard called Flash, developed by a company named Adobe, which allows for the easy insertion of rich media into webpages. That's everything from streaming video and audio files, online gaming, to entire websites made using its broad and deep development tools.

Flash is not in any way a standard. It may be the de facto choice among the websites described, but as far as being any sort of web or ISO standard goes: nosir.

While it may frustrate some, I'll just say the review page crashed my browser when I was watching the video. And the video was in Flash.

Things for iPad. Brilliant. See if you can figure out the subtle genius of it.

Surprise. Cloudy day of gloom:
awoken from my slumber
its name is iPad

The iPad Review

Shortly after purchasing my first iPhone in 2007, I proclaimed it was the best computer I'd ever used. It was sleek, smooth, and quick. More importantly, it had a fresh UI and a new take on how computer interaction should work. The concept of the application was diminished in favor of tasks. Instead of copying and pasting links or phone numbers or photos, one simply told the iPhone to “email this” or “call this number”. My “A-ha!” moment came while visiting a new city: I could email the friends I was visiting, while listening to music, all the while finding directions to their house through the built-in address book and Google Maps applications. This was surely how computers were supposed to work.

And now, not three years later, Apple has introduced a new player to the personal computer game. I've just spent the last two days using an iPad (which arrived via UPS…never again), and I believe it is going to be a game-changing device.


A crucial part of any new Apple product relationship is the unboxing. The iPad's is fairly obvious (that is, no surprises, yet still entirely delightful), although the box is unusually thick given its contents. While Apple's recent iPhone and iPod boxes have been just short of anorexic, the iPad box is downright plump. It's not for extra goodies in the box; it's just extra empty space. My guess is this was done so the iPad box could be used as a makeshift prop or stand until a proper one is purchased, but this could be entirely off-base.

iPad ships with a nearly full battery charge (more on that later), but as soon as it is turned on for the first time, the “Connect to iTunes” icon appears onscreen. The device cannot be used until it is tethered via USB to iTunes and activated. While this is a quick and mostly clickless affair, it still seems quite unnecessary. Once activated, you are free to disconnect it and use it as you see fit, or patiently wait for iTunes to sync your music, photos, movies, contacts, email, and address book info over to the device. It's a fairly quick process given that it uses USB, but it is still cumbersome.

After using an iPhone for a few years, and now an iPad, one begins to wonder why this tethering process even has to exist. iPad has a fairly speedy 802.11n networking chip, so instead of marring the initial experience with a “plug me in!” icon followed by a sync, why not do it all over the air? Let me use my iPad straight out of the box. Let me configure a pairing with my local iTunes library and start syncing over my wifi network. Sure, it will be slower than USB, but it will be asynchronous, and this way I can immediately start enjoying the device, all the while enjoying my media as it arrives. From my couch. It's not a deal-breaker by any means, but dealing with hardware syncs to iTunes can be grating after a while (iTunes rant notwithstanding).


Next comes the hardware, which you won't even notice unless you explicitly think to do so. To call the hardware understated is still to say too much (I've named mine “Monolith” if that's any indication). That there is little to the design is likely a testament to how much it was indeed designed — there is nothing here which does not absolutely need to be.

The device has a certain Kubrickian quality about it, something futuristic and ethereal yet natural and obvious. Fingers glide over the glassy canvas with ease while the brushed aluminum surface of the underside provides the perfect grip (I will always prefer the aluminum-style underside of the iPad and original iPhone to the plastic coating of newer iPhones).


The battery of this device has so far exceeded my expectations. I have been using the device thoroughly since receiving it, and have yet to charge it once, with over 30% charge remaining! My guess is one could reasonably use the device for five days without needing to recharge. That sounds great for those intending to purchase the device and leave it on their coffee tables.


The screen is large and bright. It's smaller than I expected, though I think it is the perfect size as is. One thing I would like to see improved is the resolution of the display. At 132 ppi, it certainly is not bad, but it does not quite compare to the iPhone's 163 ppi (and doesn't even come close to the Nexus One, which I believe is over 9000 or something).


The one major point about the software is just how incredibly zippy it feels. It is FAST. Not just faster-than-you-thought-possible-on-a-mobile-device fast, but why-isn't-my-computer-this-fast fast. Maybe you can chalk it up to the so-called no-multitasking “limitation”, great hardware and software, or just good old great engineering. But whatever the reason, this thing really flies.

I've especially noticed this while using Safari. While others have said Safari is the only app on the device that feels slow, due to rendering speeds, I find it incredibly fast, even compared to my MacBook Pro. I believe this is due in part to there being no other apps or plugins bogging pages down, and, more importantly, to the interaction with the page just feeling more fluid. Mobile Safari is known for revealing, at times, a checkerboard pattern in place of actual rendered web content while scrolling. This is because that content has not yet been rendered in memory, and the checkerboard appears in its place. Rendering is a hefty job, especially for a mobile device, but the checkerboard does more than just act as a placeholder: it allows the page to continue scrolling under your finger even if there's nothing to display yet. This may feel like an illusion to some, but it creates a far more compelling user interface than a browser which blocks the UI to render content before scrolling to it (this is why Safari on my aging iPhone 3G still feels much smoother than the browser on the Nexus One).
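To make the checkerboard idea concrete, here's a toy model of it. Everything here (the tile size, the function names) is invented for illustration; Apple's actual implementation is surely far more involved. The point is only that scrolling never blocks: any visible tile not yet rendered in memory is simply drawn as a placeholder.

```javascript
// A toy model of scroll-without-blocking: pages are divided into
// fixed-height tiles, and painting substitutes a checkerboard for any
// visible tile that hasn't been rendered yet.
const TILE_HEIGHT = 256; // invented; real tile sizes differ

function visibleTiles(scrollTop, viewportHeight) {
  const first = Math.floor(scrollTop / TILE_HEIGHT);
  const last = Math.floor((scrollTop + viewportHeight - 1) / TILE_HEIGHT);
  const tiles = [];
  for (let i = first; i <= last; i++) tiles.push(i);
  return tiles;
}

function paint(scrollTop, viewportHeight, renderedTiles) {
  // renderedTiles: a Set of tile indices already rendered in memory
  return visibleTiles(scrollTop, viewportHeight).map((i) =>
    renderedTiles.has(i)
      ? { tile: i, draw: "content" }
      : { tile: i, draw: "checkerboard" }
  );
}
```

So a flick that outruns the renderer just produces a frame full of checkerboard tiles, which fill in with content as rendering catches up.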

iPad runs a modified version of iPhone OS, version 3.2 for those keeping score. While it uses the same Cocoa Touch libraries and SDK as its little brother, iPad introduces a slew of new interface elements and interaction models. There is much-enhanced support for multitouch gestures, such as pressing, long-pressing, pinching, and so on. These are used throughout the system, although somewhat inconsistently. For example, pinching outward on a photo in the Photos app will zoom in further on that photo, snapping it into the photo detail view, while pinching the other way will zoom out to the photo thumbnail view. The same gesture may be used when looking at the tabs of Safari: one can zoom in on a tab by pinching. However, if one attempts to zoom in on the album art in the iPod app, nothing happens. Zooming out from the full-screen album art mode is supported, but not zooming in. It's not a major issue, but given how difficult gestures are to learn (they are, on their own, undiscoverable), it would make sense to at least use them consistently in order to build some trust with the interface.

The iPod app is unsurprising to most, which is really an injustice to the application, as it's fantastic; it's just obvious. It basically feels like a modernized version of iTunes running on a mobile device. The only somewhat disappointing part of the iPod app is its sheer lack of Cover Flow (I never thought I'd say that), as the gorgeous screen practically begs to have your albums strewn across it. Regardless, the app works, and it works very well. It gets the job done and it stays out of your way.


The iPad's homescreen looks gorgeous no matter how you look at it. Wallpapers look absolutely stunning (thanks to the IPS display, no doubt). There is much more space on the iPad's screen than, say, the iPhone's, so application icons are spaced much further apart. This is generally OK until the device is rotated to or from widescreen: in portrait mode there are four icons in a row; in landscape mode there are five. This means that when the device is rotated, one icon per row gets shuffled around. This creates a barrier for spatiality, as the user now has to maintain two spatial images of where icons should appear on the screen. This is one of the common gripes about the Mac OS X Dock (as it shrinks and expands from the center as apps are launched and quit), and it's unfortunate to see the problem show up here as well. Lukas Mathis writes more on the subject here.
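The shuffle is easy to see with a little arithmetic. The four-versus-five per-row counts are from above; the function itself is just my illustration:

```javascript
// Where does icon number `index` land in a grid with `iconsPerRow`
// icons per row? Rows and columns are zero-based.
function iconPosition(index, iconsPerRow) {
  return {
    row: Math.floor(index / iconsPerRow),
    column: index % iconsPerRow,
  };
}

// The fifth icon (index 4) sits at the start of the second row in
// portrait, but at the end of the first row in landscape — two
// different places the user must remember.
console.log(iconPosition(4, 4)); // { row: 1, column: 0 }
console.log(iconPosition(4, 5)); // { row: 0, column: 4 }
```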

One concept heavily used throughout the software is screen rotation: no matter which way the device is held, the content still appears upright (there is also, thankfully, a toggle switch to lock the screen to the current rotation, a necessity for reading in bed and other such tasks). All the built-in applications support this rotation, as does the operating system itself (for example, the Lock screen). Apple recommends all developers of iPad apps likewise support rotation, unless it doesn't make sense to do so (in the case of some games, widescreen might be the only sensible orientation), going so far as to reject applications not adhering to the guideline. Suffice it to say, iPad's hardware and software were both meant to be used in an orientation-agnostic way.


The onscreen keyboard of the iPad is quite a lot like the iPhone's, except it has been expanded to better fit its newfound screen real estate. While it will never compare to a physical keyboard with tactile feedback, I've found typing on it to be quite enjoyable. Typing is quick and generally quite accurate when paired with the built-in autocorrect and spell-checking system.

The system-wide spell checking, which may not seem like much, has become indispensable after just two days use. Misspelled words are underlined with those famous red dots, and one must simply tap the word to select it and choose a replacement spelling. Simple, fluid, functional.

One thing I'm still adjusting to is the arrangement of the keys. As the keyboard is much larger than the iPhone's, the main interface is afforded some extra keys (mainly a comma key and an extra shift key), in addition to the delete key being moved to the upper row where it belongs (it was previously on the bottom row due to space constraints). The only trouble this new arrangement is causing me is a battle with muscle memory from the iPhone keyboard. Otherwise, the two are quite similar.

One minor annoyance so far has been my right pinky-finger's tendency to rest upon the right Shift key, thereby engaging it. Not a fault of the system so much as it is a fault of my improper typist skills.

An inconsistency I've noticed with the comma and period keys on the primary keyboard: if one long-presses (that is, taps and holds) the comma key, a small popover menu appears, revealing an auxiliary key (in this case an apostrophe), which saves one from needing to visit the secondary keyboard. The period on the main keyboard, however, does not follow suit. Long-pressing the secondary keyboard's period button, meanwhile, reveals the ellipsis (…) character. It's a baffling inconsistency, albeit only a minor annoyance.

The iPad's autocorrect now corrects not only typos but common spelling errors as well. If one then taps the delete key, the original spelling will pop over, allowing you to override the correction.

One more thing I have to add about the onscreen keyboard: the undo button has proven frustratingly easy to hit at times, resulting in the same paragraph being lost twice in a row (and much cussing on my part). That aside, the undo button on the keyboard will likely be a welcome addition for those previously unaware of the device's built-in undo abilities (which involve shaking the device, and are as difficult and awkward as you might imagine).

The Magical Device

The most common comment I've heard, time after time, in regard to iPad is “It's just a big iPod touch”. After using the device for a few days, I can confidently turn this around and say the iPod touch is just a little iPad. iPhone OS introduced some great new user interface paradigms, thanks to its extensive use of multitouch. It's as if iPhone OS were a small fish swimming in a bowl that has now suddenly been released into a much larger pond, and it is quickly adapting to the new environment.

As I said at the beginning of the review, I found the iPhone to be the best computer I had ever used at the time. I think it is safe to say this torch has been passed to iPad. iPad is not a replacement for iPhone or any other such smartphone. iPad is what the personal computer should have looked like all along. It represents a continuing shift in computer interaction, first introduced by iPhone in 2007. People (and I'm generalizing here) don't want files and folders. People don't want to click and right-click and double-click. People don't want to use computers. iPad does a phenomenal job of disappearing while in use, and I think this will lead the way in the years to come.

Sent from my iPad.

Adobe states the obvious. Here's an idea: instead of whining about the changing market, why not change with it? Adobe already has a great brand in the content creation market, so why can't they dominate that market with HTML5 tools instead of Flash tools?

Best advice I've heard in a while. Marco Arment:

You do face your gadgets screen-side-in when you put them in pockets or bags, right? Always do that. That way, when you run into the corner of a table with an iPhone in your pocket, you’re denting the back of it at worst, not cracking the screen.

I had never even considered this before, but it makes so much sense.

Apple awarded patents for Industrial Design of iPhone and iPod touch. I find this tremendously sad:

On April 13, 2010, the US Patent and Trademark Office officially published two newly granted industrial design patents for Apple Inc. covering the iPhone 3GS and iPod Touch.

For two reasons: 1. That the USPTO even awards patents to protect the industrial design of a product; 2. That Apple feels the need to protect their products with such patents.

People are going to copy. It's how progress works. If we weren't allowed to be inspired by prior art, we wouldn't be creating anything too exciting these days. Apple does it all the time.

Thoughts on Thoughts on Flash

After much attention in the media recently about Apple's insistence on keeping Flash off the iPhone, iPod touch and now iPad, and the recent changes to its iPhone SDK licensing terms prohibiting the use of Flash-to-iPhone compiled apps, Steve Jobs has published his thoughts on the matter on Apple's website. His essay was concise, thorough, and — of course — sparked much controversy across the web.

His essay focused on the following six points:

  1. Openness: Flash is proprietary and completely controlled by Adobe, while conversely HTML5, JS and CSS are maintained by a standards body. Adobe calls all the shots on Flash, while no single entity calls the shots on HTML5.

  2. The Full Web: Adobe's claim that iPhone OS users are missing out on a full web experience is largely inaccurate, as most of the web now offers its video in the H264 format supported by iPhone OS.

  3. Reliability: Flash is the number one reason Mac apps crash. A full version of Flash has yet to be released for a mobile device, and the performance just isn't acceptable yet.

  4. Battery life: iPhone OS devices offer hardware acceleration for playing H264 content, dramatically increasing battery life for playback. Flash offers no such benefit.

  5. Touch: Flash websites were designed for keyboard and mouse, and must be rethought for touch-screen devices (personally, I find this a weak argument, as it can equally be said that HTML websites weren't originally designed with touch in mind). Jobs cites “mouse roll-overs” as an interface paradigm used frequently in Flash which has no real analogue for touch-screen interfaces.

  6. Third-party development tools: the most important reason of all. Apple refuses to let a cross-platform layer like Flash come between the platform and its developers.

The first five arguments were really just aimed at revealing the ugly truths of Flash, while the final argument is the prestige: Apple does not want to allow a third-party toolkit to have any kind of leverage over its platform.

The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor's platforms.

No matter how selfish Apple might sound for repeatedly blocking Flash from iPhone OS, it makes a great deal of sense. Apple is in an all-or-nothing battle to control the burgeoning mobile market and they aren't letting anything or anyone impede their growth. They iterate on the iPhone OS roughly once yearly, and cannot afford to have any delays out of their control. A widely-used Flash toolkit could very well pose such a threat.

As with anything Steve says, there was an explosion of coverage on the web, with everybody from gadget sites to CEOs to the Wall Street Journal offering their take on what he had to say.


One major claim against Steve's essay was how hypocritical the whole thing was, given Apple's long history of proprietary software. Thom Holwerda of OSNews has this to say in reaction to the essay:

Jobs' letter is incredibly two-faced, hypocritical, and very misleading. It's clearly a marketing trick to pull the wool over the eyes of consumers, and while that's okay (they're in it to make money, after all), it's our job to remove that wool from our eyes. Just as we geeks immediately understand Microsoft's ulterior motive in licensing patents to Linux/Android vendors, we should not just accept Jobs' words either.

Holwerda's three main qualms are H264, Carbon, and iTunes on Windows.

Concerning H264:

H264 is no better than Flash. This video codec is proprietary and patented up the wazzoo, and therefore, wholly incompatible with the very concept of an open standard. To make matters much, much worse, the licensing body that oversees H264, the MPEG-LA, has stated in no uncertain terms that they will not hesitate to sue ordinary users for using the video codec.

H264 may very well be patented, but it is also overseen by working groups (such as MPEG and VCEG) and is an ISO standard. Jobs' point was not that H264 is more open than Flash (though arguably it is), but that it performs better than Flash, which is crucial for devices such as iPhone. The claims of openness in his essay were centred around HTML5 technologies, not the H264 codec itself.
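This is roughly how the sites Jobs cites pull it off in practice: probe for H264 playback and serve HTML5 video when it's there, Flash when it isn't. A sketch of the feature detection (the helper function and delivery labels are my invention; `canPlayType` and its "", "maybe", and "probably" return values are the real HTML5 media API):

```javascript
// Decide how to deliver video, given a canPlayType-style probe.
// In a browser, the probe would be:
//   const v = document.createElement("video");
//   (mime) => v.canPlayType(mime);
function chooseVideoDelivery(canPlayType) {
  // canPlayType returns "", "maybe", or "probably"
  const h264 = canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  return h264 === "probably" || h264 === "maybe" ? "html5-h264" : "flash";
}

// Mobile Safari reports H264 support, so it gets the HTML5 player:
console.log(chooseVideoDelivery(() => "probably")); // html5-h264
// A browser with no H264 support falls back to the Flash player:
console.log(chooseVideoDelivery(() => ""));         // flash
```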

Holwerda also calls hypocrisy when Jobs mentions Adobe's failure to quickly adopt the Mac OS X platform with Cocoa. Jobs:

For example, although Mac OS X has been shipping for almost 10 years now, Adobe just adopted it fully (Cocoa) two weeks ago when they shipped CS5. Adobe was the last major third party developer to fully adopt Mac OS X.

Many have been quick to point out that until Snow Leopard, the Finder was written in Carbon, and that to this day iTunes remains a Carbon app. While this is of course true, it's difficult to proclaim “Apple isn't fully behind Mac OS X and Cocoa” with a straight face.

That iTunes has not been upgraded to a Cocoa version is a vindication of another point made by Jobs:

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform.

Carbon is not a third-party toolkit, but the fact that it is written largely in C makes it better suited to cross-platform applications. As such, iTunes is written in Carbon to ease the process of maintaining the Mac and Windows versions, much to the detriment of its user interface and stability. iTunes is a cross-platform application with an arguably subpar, lowest-common-denominator experience on either platform. Its developers must work extra hard to ensure compatibility with both platforms, causing the app to miss out on many of the advanced features of the Mac OS.

Engadget's Response

Engadget seems to maintain a somewhat delusional stance on Flash. While noting its many faults, they also fault iPhone OS devices for not playing Flash. They seem to confuse ubiquity with quality. Some priceless quotes:

The “full web.” Steve hits back at Adobe's claim of Apple devices missing out on “the full web,” with an age-old argument (YouTube) aided by the numerous new sources that have started providing video to the iPhone and iPad in HTML5 or app form like CBS, Netflix, and Facebook. Oh, and as for flash games? “50,000 games and entertainment titles on the App Store, and many of them are free.” If we were keeping score we'd still call this a point for Adobe.

Wait, what?

Reliability, security and performance. Steve hits on the usual “Flash is the number one reason Macs crash,” but adds another great point on top of this: “We have routinely asked Adobe to show us Flash performing well on a mobile device, any mobile device, for a few years now. We have never seen it.” You've got us there, Steve, but surely your magical A4 chip could solve all this?

Sounds totally sincere, guys.

Microsoft's Response

This was a bit of a shocker. Microsoft is backing H264 exclusively for HTML5 video in Internet Explorer 9:

The future of the web is HTML5. Microsoft is deeply engaged in the HTML5 process with the W3C. HTML5 will be very important in advancing rich, interactive web applications and site design. The HTML5 specification describes video support without specifying a particular video format. We think H.264 is an excellent format. In its HTML5 support, IE9 will support playback of H.264 video only.

I'm elated to hear that, though it makes me wonder how Silverlight plays into the picture.


Adobe's Response

Perhaps least surprising of all, Adobe CEO Shantanu Narayen's reaction, in an interview with the Wall Street Journal, dodged most of the points Jobs made in his essay. Some key quotes:

Our vision is, and what we are hearing from customers is that they would like us to deliver a toolset and a delivery mechanism that allows us to help them amortize their investment across multiple devices.

He returns to this vision many times in the interview, to the point where it seems less like an answer and more like an advertisement.

On claims of Flash not delivering on mobile devices:

Flash Player 10.1 will deliver on this vision of delivering on multiple devices, and so June 17th it will be available, by June 17th it will be available for multiple devices and actually on the Android device at Google IO, we will be shipping. A public beta.

He snuck that last bit in rather nonchalantly.

If Flash are [sic] the number one reason that Macs crash, which I'm not aware of, it has as much to do with the Apple operating system.

He apparently has never loaded Flash in a browser. Ever.

When prompted on Flash's purported poor battery life on mobile devices:

That is patently false. When you have hardware acceleration given to Flash, which certain platforms have given us the ability to do, we have demonstrated that it takes less battery power than on the Mac. For every one of these allegations made, there is proprietary lock-in that prevents us from delivering the kind of innovation that customers want.

This is, I think, the most legitimate response he gave in the interview. Up until Mac OS X 10.6.3, which was released early this spring, Flash had no access to hardware acceleration for its graphics, so when comparing its performance against H264 on the Mac, it was at a bit of a handicap. He says, however, they are remedying that.

Apple just recently provided us with hardware acceleration [on the Mac], we have deployed a version of Flash Player beta, it's called Gala, that now takes advantage of that hardware acceleration, that refutes all of these [claims of performance and stability woes].

He returns to Adobe's “vision”.

Let me ask you [WSJ], as a creator of among the world's best content, would you like these multiple 'stovepipe' workflows?

It should be noted this interview was transcribed from the video playing on my Flash-less iPad. So the WSJ seems to be just fine with 'stovepipe' workflows.

Finally, he ends with this gem, which will almost certainly come back to bite him:

Open systems have always triumphed. We've seen that over and over again.

Please believe.

Testing One Two Three

Just finished writing a new app for publishing articles for my website. You should expect a lot more content here from now on.

OK that is all.

Move over MarsEdit. MarsEdit isn't the only article-writing software being released today.

I've just pushed my teeny-tiny Editor.app to GitHub so you can look at the source and have a laugh at it. Really, there's not much there. And it's not terribly useful unless you run the same website software I do. And you don't run it, because I wrote it and haven't given it away (yet).

But hey, if you get bored and feel like fixing bugs for me, have at it!

"Nevermind" baby works for Obama Poster Guy. Jason Kottke:

Yo dawg, I herd you like pop culture, so I put some pop culture in your pop culture so your brain can fucking explode from all the popular you've cultured.

A Sobering Analysis of the Current Android Market Share

From the NPD press release issued this morning:

First quarter 2010 information from The NPD Group's Mobile Phone Track reveals a shift in the smartphone market, as Android OS edged out Apple's OS for the number-two position behind RIM

NPD's wireless market research reveals that based on unit sales to consumers last quarter the Android operating system moved into second position at 28 percent behind RIM's OS (36 percent) and ahead of Apple's OS (21 percent).

To be explicit, for the first quarter of 2010, Android was on 28 per cent of smartphones sold in the United States. This puts its sales directly between the reigning BlackBerry OS and the former “second place” iPhone OS.

This is very impressive in its own right and I think the Android platform ought to be congratulated for its performance. Better still, it lights a major fire under Apple's ass, and the old saying comes to mind: companies compete, customers win. So if nothing else, this should mean better products no matter what platform you choose to support.

It is important to note, however, that raw sales data does not equal exact market share. Android is clearly trending upward, and if this trend continues it will naturally chisel away at Apple's and RIM's market share.

A Dose of Zealotry

As with any news even remotely belittling Apple, this report was immediately treated as libel by some of the pro-Apple press (the zealots, while few, are certainly vociferous). Here's one choice gem from MacStories' reaction to NPD's press release:

Suggesting that Android holds 28% of the smartphone market compared to the iPhone’s measly 21%, things are looking great for Google and friends right? Well considering the iPhone is only on one carrier in the United States (and produced by a single manufacturer), compared to the four big carriers Android is on and multiple manufacturers, why hasn’t Android captivated more than 30% of the marketplace? Heck, they should be up there with RIM by now if their phones were really that good.

I personally don’t think the numbers are all that impressive (if we pretend for a moment that NPD’s numbers are actually reliable). Let’s look at the Android phone world as a whole – we have dozens of devices out there, compared to Apple’s main three (3GS, 3G, and 2G).

NPD's market research (like that of many other firms) is not an exact science, as device manufacturers rarely reveal exact sales data, and it is often difficult to gauge actual market share (browser usage is one metric, and Apple does lead there). But NPD's data is highly regarded among market research firms. Were Apple leading in the survey, would MacStories also be discrediting the data?

So while NPD does not offer precise-down-to-the-sale data, they are quite competent at measuring general trends in the market.

The argument against NPD's data is that it is somehow less valid because Android phones are offered on more carriers, and there are more Android devices than iPhone devices for sale in the US. It is completely true that Android OS is available on more US carriers than iPhone, but this hardly undermines the data. It's just a fact, whether it seems fair to the zealots or not.

Cold Shower

While Android OS is becoming great competition for iPhone OS I hardly think this is the beginning of any doom for Apple's platform. There are some facts to remember when considering this:

  1. There are new iPhones every June or July, and this trend has repeated for the last few years. Savvy consumers (those who understand the difference between iPhone OS and Android, at least) are well aware of this schedule, and as a result iPhone sales have generally slowed down in the months before a new device is unveiled (*ahem* properly).

  2. Apple already has a burgeoning developer community, with over 200 000 applications in the App Store. Android's marketplace has continued to make strides and is expanding rapidly, but not at the App Store's growth rate. Platform strength is extremely important (*cf.* Windows in the 1990s).

  3. Tangential to no. 2, an application developed for the iPhone works exactly the same on any current iPhone ever sold, and on any iPod touch ever sold. There are something like 100M of them worldwide. While that may not make it the market leader, consider that the same application will run identically on all of those devices. Aside from the cellular chip in the iPhone, the iPod touch is essentially the same device. There is no consideration given as to whether the device will have a physical or onscreen keyboard, what the screen size will be, whether the device will have Maps built in, or whether location functionality is available. All iPhone OS devices are guaranteed to have these capabilities.

  4. Possibly Apple's secret weapon for strengthening the iPhone OS platform is the iPod touch (and to a lesser degree, the iPad). The iPod touch is just an iPhone without a cellular contract, which makes it irresistible to many more people. And more people buy more apps. And more app sales attract more developers. And more developers make for a stronger platform. There is no Android equivalent to the iPod touch; there is no way to get the Android platform without purchasing a phone (which usually means a contract).

If I had to guess the reason for Android's current success in the United States, it would simply be that the devices are available on networks other than AT&T, which has been the chagrin of many American iPhone users over the past few years. I have the feeling many Android users on Verizon are there for the network, not the phone. Either way, I think it's important for Apple to remove AT&T's abysmal network performance as a reason for not buying an iPhone in the United States.

Jonathan Rentzsch Cancels C4 Conference. Sad Mac :(

With resistance to Section 3.3.1 so scattershot and meek, it’s become clear that I haven’t made the impact I wanted with C4. It’s also clear my interests and the Apple programming community’s interests are farther apart than I had hoped.

Facebook Users Outraged That Personal Information They Currently Share With Distant Acquaintances and Friends of Friends May Now Be Seen By Strangers. Mike Lacher:

“I think it’s creepy,” said one user on a message board, “Now when I do well in Farmville or post messages about how hammered I got last night, it won’t just be limited to my tight-knit group of classmates from high school I don’t talk to, people with the same last name as me, and Arizona State Communications majors. Anybody can see it.”

Five paintings stolen from Paris Museum. Astonishing art heist at the Paris Museum of Modern Art last night. Among the burgled pieces were works by Picasso and Matisse.

New York Times:

A lone hooded man who squirmed through a broken window and evaded security alarms stole five paintings by Picasso, Matisse and other artists overnight Wednesday from the Paris Museum of Modern Art, in a brash theft valued somewhere between $75 million and $125 million.

While the sale of these stolen pieces on the black market will be considerably difficult, it's interesting to consider this bit from the BBC's report:

There has not been anything comparable since the 1990 theft at the Isabella Stewart Gardner Museum in Boston of a Vermeer, several Rembrandts, Degas and other masterpieces, says BBC Arts Correspondent David Sillito.

None of these works has yet been recovered.

MobileMe should be free. Sachin Agarwal:

Is Apple missing the boat on cloud services? Yes! If you buy an iPod, iPad, or iPhone today, Apple still requires you to sync the device with iTunes before you can start using it. But why do I even need to have a laptop?

I blogged about this before. Your laptop should be just another client device with equal rights as your iPhone or iPad. All your devices should sync with the cloud. Mobile Me should be the hub for all your data, not your laptop.

I agree fully. This is possibly the one area where Android is currently trouncing iPhone OS (though do keep in mind, Android itself does not inherently mean Google's services; it just always happens to end up that way).

Mobile Me costs $99 per year. While that's a fair price for 20GB of email and storage, most people won't pay it if there are free alternatives. Even though GMail is filled with ads, people choose free. So Mobile Me will have a hard time growing and being successful on its own.

Yeah, unless Apple had some kind of advertising network….

How Richard Stallman browses the web. RMS:

For personal reasons, I do not browse the web from my computer. (I also have not net connection much of the time.) To look at page I send mail to a demon which runs wget and mails the page back to me. It is very efficient use of my time, but it is slow in real time.

I know this isn't news to a lot of people, but I thought it worth reposting.

How Pixar Built Toy Story 3. Insightful cover story at Wired about Pixar's forthcoming blockbuster (you know it's going to be a blockbuster).

Since 1995, when the first Toy Story was released, Pixar has made nine films, and every one has been a smashing success.

And they're not just commercially successful, they're also really, really good.

Pixar's best asset, far above and beyond its technical prowess and innovations (which are astounding by themselves), is its people. Pixar invests in its people and it shows in their work.

The upper echelons also subject themselves to megadoses of healthy criticism. Every few months, the director of each Pixar film meets with the brain trust, a group of senior creative staff. The purpose of the meeting is to offer comments on the work in progress, and that can lead to some major revisions. “It’s important that nobody gets mad at you for screwing up,” says Lee Unkrich, director of Toy Story 3. “We know screwups are an essential part of making something good. That’s why our goal is to screw up as fast as possible.”

And of course, Steve Jobs has to share at least some of the genius:

Good thing Steve Jobs insisted that the building’s essential facilities be centrally located. “Walking to the bathroom or getting a cup of coffee is often the most productive part of my day,” says producer Darla Anderson. “You bump into somebody by accident and then have a conversation that leads to a fix.”

Working without interruptions. Neal Stephenson on writing uninterrupted (via Merlin Mann):

Writing novels is hard, and requires vast, unbroken slabs of time. Four quiet hours is a resource that I can put to good use. Two slabs of time, each two hours long, might add up to the same four hours, but are not nearly as productive as an unbroken four. If I know that I am going to be interrupted, I can’t concentrate, and if I suspect that I might be interrupted, I can’t do anything at all. Likewise, several consecutive days with four-hour time-slabs in them give me a stretch of time in which I can write a decent book chapter, but the same number of hours spread out across a few weeks, with interruptions in between them, are nearly useless.

I need to go on this diet. My interruptions don't stem from emails or other messages; my vice is distraction without interaction.

People don't multitask. Peter Bregman on humans who multitask:

Doing several things at once is a trick we play on ourselves, thinking we're getting more done. In reality, our productivity goes down by as much as 40%. We don't actually multitask. We switch-task, rapidly shifting from one thing to another, interrupting ourselves unproductively, and losing time in the process.

I wish people would stop deluding themselves otherwise.

BP has run out of ideas to stop the Gulf oil leak. But they're dying to hear what YouTube commenters have to suggest!

You can submit your ideas on the best way to stop and clean up the oil spill via Google Moderator by 2:00 p.m. PT on Thursday, May 27.

AJ Jacobs goes cold-turkey Unitasking for 30 days. Jacobs:

The stereo is silent. The TV black. The room dark. I am focused on nothing but a glowing computer screen. I'm doing this because I have a problem focusing. My brain is all over the place. Unless I'm doing at least two things at once, I feel like I'm wasting my time. Phone and email. Watching TV, checking Facebook and reading the news online. Texting and peeing.

Since multitasking is so bad for us, I propose we call unitasking “proper-tasking”.

(via kottke)

Ars Technica on the incessant meddling of the Star Wars masters. Ben Kuchera of Ars on the importance of the original masters of the Star Wars trilogy:

The story of Star Wars is the story of film, and of how we keep our past to share with the future. George Lucas does have the legal right to change and adjust his own work any way he'd like, but Star Wars existed in a very specific way for its original theatrical run. Those memories, and those scenes, have a very real value and meaning to fans. This isn't just a science fiction film anymore—it's an important piece of culture.

Marco Arment launches Preview.FM. Marco Arment on the launch of Preview.FM:

As a result, whenever I discover a new band and browse their albums to decide which to buy, the storefront interfaces often work against me, making it difficult to quickly find a band’s albums and navigate between a bunch of them for preview and comparison.

So I made this.

Introducing the SkypeKit SDK beta program. Jonathan Christensen:

SkypeKit will initially be available as a beta on an invitation only basis. SkypeKit for consumer electronic device makers will be available tomorrow, June 23, based on the Linux OS. For desktop software developers, SkypeKit will be available for Windows and Mac in the next few weeks.

Launching on Linux?

PCWorld: "Apple Blew It". Jared Newman describes what a mess it is:

iOS4's multitasking is a mess of a feature. Yes, it lets you listen to Pandora while using other apps. Yes, it lets you freeze games that support multitasking, such as Plants vs. Zombies, while you take care of more important tasks. But in exchange for those perks, some of the iPhone's elegance is lost, and the advantages you'd gain from true multitasking aren't there either.


This is how multitasking on iOS 4 works: Apple provides a few services which developers may use in their applications for playing audio, making VoIP calls, and “completing tasks” (such as downloading or uploading data) running under 10 minutes in length. There are others, but those are the three primary services.

This is not the same as running five apps at the same time like one might be accustomed to on a desktop computer. Apple does not call this “Running simultaneous applications” for a reason. They call it multi-*tasking*. Playing audio in the background or uploading an image to Flickr is a task, not an application.

The reason for this is, of course, to allow the user to do a few tasks without killing the battery. From what I have read, this is precisely what Apple has achieved. It should also be noted that Android handles multitasking in a similar way (although I am not as familiar with exactly how it works).

The most painful part of this piece [of …] is this gem concerning task management:

Every time you open an app, it gets added to the tray, and the only way to close it is by pressing and holding any app icon, then clicking the top-left corner of the apps you want to close. If you don't micromanage, the tray quickly becomes overrun with clutter, making it hard to find the apps you really need.

Except the operating system actually quits these “running” applications (those appearing in your tray) when it gets low on resources, and it starts by killing the least-recently-used application. The system is designed so the user can remain completely ignorant of having to “quit” applications themselves. That the user can manually quit an application is just a nicety thrown in for those who want to do it themselves (probably for the OCD crowd).
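
That eviction policy can be modelled as a simple least-recently-used list. Here's a toy sketch in Ruby (the language this site's engine happens to be written in); the class and method names are mine, not Apple's:

```ruby
# A toy model of iOS 4's backgrounding policy: apps sit in a
# recency-ordered list, and when resources run low the system
# kills the least-recently-used one. Purely illustrative.
class AppTray
  def initialize(max_resident)
    @max = max_resident
    @apps = [] # most-recently-used last
  end

  # Launching or switching to an app moves it to the back of the
  # recency order; the LRU app is evicted if we're over budget.
  def activate(app)
    @apps.delete(app)
    @apps << app
    @apps.shift while @apps.size > @max # evict least-recently-used
    app
  end

  def resident
    @apps.dup
  end
end

tray = AppTray.new(3)
%w[Mail Safari Pandora Maps].each { |a| tray.activate(a) }
tray.resident # => ["Safari", "Pandora", "Maps"]
```

The point of the model: the user never "quits" anything; the tray silently sheds whatever was touched longest ago.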

Multitasking on iOS has its compromises, but to say “Apple Blew It” is a bit outrageous.

Multitasking the Android way. Dianne Hackborn of the Android team (via roo):

For these tasks, the application needs a way to tell Android “I would explicitly like to run at this point.” There are two main facilities available to applications for this, represented by two kinds of components they can publish in their manifest: broadcast receivers and services.

Similar to how iOS 4 multitasking works, but more complex: applications on Android can make more aggressive use of system resources (such as memory and CPU time).

The Writer Who Couldn't Read. Fascinating textual and visual tale of Canadian novelist Howard Engel reworking his reading and writing abilities after having suffered a stroke in 2001.

Briefly put, Engel discovered that if he traced the printed gibberish on a page with his hand, if he simulated the movements that a writer makes as he writes, he could gradually get back the meaning of the words.

What's So Funny 'Bout Peace, Love, and Intellectual Property Theft?. An interesting piece on DesignWorkLife today about inspiration, intellectual property, and ripoffs:

Blatantly stealing some one else’s work is always wrong. Being inspired by those you admire is not, and I think this is where the line gets blurry.

I'm not really an artist, but I make software and I put much effort into my designs. Whether your craft is pen or brush or key, creativity is crucial to its success. Like every craftsperson, I have seen further by standing on the shoulders of giants. That is, our crafts are the result of generations of iterations, evolutions, and inspirations.

Being ripped off is not fun, but in the end ideas are ideas. Whether somebody uses their own creativity or steals yours, the ideas still exist. Your creativity, your art, designs and ideas, will all outlive you. As a creator, you have to do the initial heavy lifting to conceive them, but then what?

Whether another artist is inspired by you or rips your work off, what damage is really done? Aside from maybe a bruised ego, your ideas are percolating in even more brains. You may not receive proper attribution, but creation isn't about attribution. It's about creation.

Google ends sale of Nexus One to end-users. Google:

This week we received our last shipment of Nexus One phones. Once we sell these devices, the Nexus One will no longer be available online from Google.


Kindle books now out-selling hardcover books at Amazon. Claire Cain Miller of the New York Times:

Amazon.com, one of the nation’s largest booksellers, announced Monday that for the last three months, sales of books for its e-reader, the Kindle, outnumbered sales of hardcover books.

In that time, Amazon said, it sold 143 Kindle books for every 100 hardcover books, including hardcovers for which there is no Kindle edition.

Here's to fewer dead trees every year.

iPhone 4 Rogers launch details

A little birdie from Rogers has just chirped some iPhone 4 in Canada launch information:

  • Expect information early next week from all three Canadian carriers.

  • Expect rates and upgrade paths to be very competitive as this is the first iPhone launch where Rogers is competing with Bell and Telus.

  • Purchasing the iPhone 4 from the Apple Store will get you the unlocked (and therefore much pricier) model.

  • There will be no pre-order for iPhone 4 from Rogers; you will have to line up.

  • No denial or confirmation of a July 30 launch date.

And wouldn't you know it, I completely forgot to ask about holding the phone wrong!

Wave Goodbye. Google cancels another crappy product.

We don’t plan to continue developing Wave as a standalone product, but we will maintain the site at least through the end of the year and extend the technology for use in other Google projects.

The BlackBerry Tablet: PlayBook. Notice how the video doesn't show a single person interacting with the device directly (nobody is touching it).

No mention of battery life which leads me to believe it's pitiful.

Having said that, the hardware looks nice, as does the user interface.

The United States Continues Being Totally Fucked. Malcom Gay:

Mr. Ringenberg, a technology consultant, is one of the state’s nearly 300,000 handgun permit holders who have recently seen their rights greatly expanded by a new law — one of the nation’s first — that allows them to carry loaded firearms into bars and restaurants that serve alcohol.

Jason Brennan's Interpretation of Model-View-Controller. From a StackOverflow answer:

The way I look at the MVC paradigm (and remember, it's just a paradigm — it's open to interpretation) is the Model knows nothing about the view and the view knows nothing about the model. They should not directly interact; your view should not have an instance of your model, strictly speaking.
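
A minimal sketch of that interpretation in Ruby (the class names are hypothetical): the model and the view never hold references to each other, and the controller is the only object that knows about both.

```ruby
# Model: holds data, knows nothing about presentation.
class Article
  attr_reader :title, :body
  def initialize(title, body)
    @title, @body = title, body
  end
end

# View: renders whatever values it is handed; never touches a model.
class ArticleView
  def render(title:, body:)
    "#{title}\n#{'=' * title.length}\n#{body}"
  end
end

# Controller: the only layer that knows about both sides.
class ArticleController
  def initialize(model, view)
    @model, @view = model, view
  end

  def show
    # The controller unpacks the model and feeds plain values to
    # the view, so neither layer depends on the other's interface.
    @view.render(title: @model.title, body: @model.body)
  end
end

article = Article.new("Hello", "World")
puts ArticleController.new(article, ArticleView.new).show
```

Swap in a different view (an HTML renderer, say) and neither the model nor the controller's role changes; that's the payoff of the strict separation.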

The Washington Post Gets It. Video for the new Washington Post iPad app. It's great to see a newspaper embracing the digital age. Finally.

Sony Creates new GNUStep-based Application Framework for Consumer Devices. Sony's creating a new framework for apps, based mostly on the GNUstep/OpenStep APIs, which share the same lineage as iOS.

The foundation upon which this project is base [sic] comes from the GNUstep community, whose origin dates back to the OpenStep standard developed by NeXT Computer Inc (now Apple Computer Inc.). While Apple has continued to update their specification in the form of Cocoa and Mac OS X, the GNUstep branch of the tree has diverged considerably.

Presumably, also implemented in Objective-C. This definitely seems like a grab at iOS developers, but it's a fantastic idea.

Apple set to launch subscription model for Apps in the App Store. You can watch it live, too.

Watch at 11 a.m. Eastern Time today as News Corporation unveils The Daily, featuring special guest Eddy Cue, vice president of Internet Services from Apple.

Update: Ironically, this isn't working for me in Safari. Fucking Flash.

Egypt's Opium of the Internet. The eloquent James Shelley:

It turns out that the vast majority of history’s successful protests occurred before the dawn of the Internet. Protesters and revolutionaries have brought down governments for many centuries… long before Twitter was there to help them out.

Perhaps a more pertinent question is this: in what ways does the Internet subdue and quell revolutions?

About the Redesign

Welcome to the redesigned Speed Of Light. It's taken me a while to get this far, but I'm pretty happy with the result.

The biggest change is stylistic. The site looks much better now than it ever has: better contrast and nicer typography. Typically I steer away from serif fonts for computer screens, but as new devices have rolled out in the last year, it's becoming clear screen pixel-density is increasing, which really makes the type look gorgeous. Naturally, laptop and desktop displays haven't seen such dramatic leaps, but my guess is a good portion of my readership is reading on a mobile device anyway. If not, you should be.

Having said that, it's probably only going to look great if you're using an Apple OS (that is, Mac OS X or iOS). Typography on Android is unfathomably shitty. It's almost worse than on Windows. If the site looks bad on your Android device, I'd recommend instead subscribing to the site in a feed-reader and bypassing the homepage entirely. You won't miss much that way. Much.

Stylistically, the site should scale better as font-sizes or window sizes change, although it's not yet bullet-proof. I've also not implemented proper mobile style tweaks, although at least on iPhone 4 and iPad I'd say it works pretty well as is.

Other changes include improved markup for the pages themselves. The HTML5 markup (actually, it's just HTML now) uses tags such as <section>, <header>, <article>, <nav>, and <footer> to produce more semantically rich pages. This should work in all browsers, whether or not they support the new HTML features.

I've moved the site navigation to the bottom of the page (no more sidebar). My reasoning is that most people probably don't use the navigation very much, so there's no reason to clutter up the top of the page. Now there's just the mast and article content.

A few things are still missing: there are no more shortcut keys for page navigation (did anyone use these?) and the hyphenation library is gone as well (this one made the site look really nice, so I'll likely incorporate that again).


Firstly, the site went down sometime in December 2010. I say sometime because I honestly don't know when (it was pointed out to me on Boxing Day). That tells you how often I'd been using the site even myself, not to mention how long it'd been since I'd last updated.

The site is hosted by Dreamhost and they changed something on my server which caused the site to break (deploying this site to Dreamhost was more like a nightmare), so I had to fix that as well. The fix ended up being mostly painless (though some of my other hosted sites are still down).

I knew it was time for a redesign. And more importantly, I knew it was time to get writing more. I've had this idea bouncing around in my head for a month or so now, and I'm not ready to share it, but it involves writing more. This website is the first step to accomplishing my goal.


Colophon

“Colophon” is the name of the engine running this website. It's a Ruby application written with the excellent Sinatra framework. I wrote it in early 2010 as a way to learn Ruby better.

Posts are written as plain-text files on my computer, formatted in Markdown (so maybe not quite plain text), with an accompanying metadata file formatted as JSON. All articles are stored this way, and are in source control with git. When I hit “Publish” in my Editor app, all files are added to git, then pushed to my server over ssh. On my server, I have a post-commit hook set up which runs another Ruby script. The script scans through the newly pushed articles, converts the Markdown to HTML one by one, and shoves the results into an SQLite database. When you request a page from the server, it's the database which ultimately gets read.
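
The server-side script is roughly this shape. This is a sketch, not Colophon's actual code: the Markdown converter below is a trivial stand-in for a real library, and an in-memory hash stands in for the SQLite database.

```ruby
require "json"

# Stand-in for a real Markdown library: handles just top-level
# headers and paragraphs, enough to show the shape of the import.
def markdown_to_html(text)
  text.split(/\n\n+/).map do |block|
    if block.start_with?("# ")
      "<h1>#{block.sub(/\A# /, "")}</h1>"
    else
      "<p>#{block}</p>"
    end
  end.join("\n")
end

# Import one article: take the Markdown source and its JSON
# metadata sidecar, convert, and "insert" into the database
# (here, a hash keyed by slug; the real script writes to SQLite).
def import_article(db, slug, source, metadata_json)
  meta = JSON.parse(metadata_json)
  db[slug] = { "title" => meta["title"],
               "date"  => meta["date"],
               "html"  => markdown_to_html(source) }
end

db = {}
import_article(db, "redesign",
               "# About the Redesign\n\nWelcome to the redesigned site.",
               '{"title": "About the Redesign", "date": "2011-01-08"}')
db["redesign"]["html"]
# => "<h1>About the Redesign</h1>\n<p>Welcome to the redesigned site.</p>"
```

Because the database is rebuilt entirely from the files in git, it's disposable: hose it and the import script regenerates every row.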

This is convoluted, but it gives me great flexibility. More importantly, it works around two major problems of mine: I'm very unskilled at managing databases properly, and I'm very unskilled at web security.

Firstly, inserting and deleting rows in a database is dangerous territory for me. If a DB interface were the sole way of publishing articles, I can promise I would have lost all my data at least once or twice by now. Databases are pretty opaque as far as their data goes; plain-text files, conversely, I could edit with Notepad if I wanted to.

What you also may not realize is the database has been hosed a number of times in the last year by my prodding, yet all the articles are still available. I've deleted the database at least ten times, and I've still been able to properly re-migrate all article content back in (the script takes less than a minute to move all articles).

With articles being in git, I can make changes, revert, and even merge changes on other machines if I want to (and I do). I even have a “drafts” branch which does not get published until I merge into my master branch.

Secondly, security. I don't know what I'm doing with web security, so the safest route for me was SSH (which git uses). I don't have to worry about HTTPS or certificates or passwords. It would be very difficult for someone to break into my site because I haven't sloppily coded the security myself (had it been coded by me, it would have been sloppy and bug-ridden). Leave the security to the experts.

The rest of Colophon is very basic. There is an Atom feed (similar to RSS, only I find it has better support for embedding HTML in feed items, and it was simpler to build in code). I'm also looking into integrating a JSON feed of the articles (because XML is essentially digital Satan).

The Things I Love

As part of a continuing effort towards a lifestyle of owning exactly what I need in my life and nothing more, I've decided it would be wise to list and describe the things in my life which I hold dear to my heart. The following is not meant to be a display of objectophilia, but a reinforcement of my justifications for having what I have, and therefore for not wanting what is “more than enough”.

  • Two floor-lamps and a swinging-arm desk lamp. Most rooms in my apartment have insufficient or improper lighting, so these three lamps provide ample illumination. At any given time in the evening, my home is lit by no more than two of these three lamps: both floor lamps if I am reading in my room, or a floor lamp and my desk lamp if I am working in my office (one acts more or less as a beacon). Unless I am visiting it, any room other than my office or my bedroom remains shrouded in darkness.

  • One thirteen-inch frying pan purchased from Ikea in January 2008 as part of a $100 gift-card given to me by a stranger who had heard I was moving into an empty apartment (thanks John). This frying pan has cooked me breakfasts and suppers for over three years without fail. The handle is sturdy and rigid and comfortable. The pan is smooth and stick-free and cleans easily under the tap. The underside has the familiar age-marks of a years-old pan—burns and water stains—and the side is dented from an April 2010 tumble to my kitchen floor. But the more it ages and the more I stress it, the more it becomes mine and the less I desire to replace it.

  • One thirty inch Apple Cinema Display purchased July 2009, refurbished for $1599. Nothing about minimalism says things ought to be small. I had lusted after this display for years, and in the summer of 2009, my 20 inch LG LCD up and died. As someone who spends large portions of his waking life at a computer, I decided this was the display for me. Immediately, I felt more comfortable writing, coding, and reading documentation. It's the anxious feeling of five tasks precariously stacked upon one another melting away as all five are now sitting in a row, each safely resting on the floor. Before the purchase, I had some worries about buying a refurbished unit, but in the nearly two years I've been using this display, I have yet to find a single flaw. The stand is sturdy, but the display tilts without a squeak. The screen is beautiful and bright (half-brightness is my usual setting and it's perfectly sufficient almost always). The buttons are responsive and easy to find (they are the same touch buttons as found on the 3rd Generation iPods).

  • One pair of Paper Denim & Cloth jeans, sized 31x32, purchased November 2007 for $197. These were the first pair of jeans to ever truly fit me perfectly. I have not been able to ever find a similar pair since. I still wear them often to this day. While they were originally a deep dark blue (so blue, in fact, they used to dye the tops of my socks), they have since slowly faded through the washes. The left leg has a distinct rectangular fade on the thigh due to my phone always being held in my left front pocket. The jeans, though old, remain well-sewn, devoid of tatter or fray.

  • One pair of Rayban eyeglass spectacles with plastic lenses (not Wayfarer), purchased September 2009 for $199. For a long time I had been a contact-lens wearer, the type which were left in for thirty days-and-nights at a time. After a few years, my eyes became more and more irritated by the lenses no matter the duration of wear, and I decided it was time to stop wearing them and move to something easier on my eyes. Aside from looking nice (so I'm told; I'm myopic), these eyeglasses help me see very well, they are very light, stay on my face, have no glare, and generally do their job precisely as I'd like them to.

  • One Rev A MacBook Pro Core Duo 2 GHz purchased August 2006 for $2149 and one Rev A Mac Pro dual dual-core Xeon 3 GHz purchased August 2006 for $3999. The MacBook Pro has been my portable development machine since it was purchased. It has dutifully run Mac OS X Tiger, Leopard, and Snow Leopard. It has at times been my sole machine for month-long periods (May-September 2007 & January-April 2010). It has two scuffs on its shell from the first month I had it (without a proper carrying bag) and one dent on the front. These identify it as mine. It has an 80 GB harddisk which at any given time has 30 GB free. I keep almost no media on it. It shows its age in its slowness (but I'm convinced an extra gigabyte of memory and a solid state disk would revive it handsomely). Slow though it may feel, it works very well and performs its task of coding-on-the-go well. The Mac Pro has been upgraded with 9 GB of memory and two additional harddisks (with space for a fourth). I do the majority of my work with it. Every day it feels like a brand new machine. It feels as fast today as it did August 2006. It turns on instantly, it works silently (the noisiest parts being the disks—when they're not asleep), and it processes relentlessly.

  • One iPhone 4 purchased July 2010 for $165, replacing an iPhone 3G (given to my sister), which replaced an original iPhone (sold in 2008). The iPhone 3GS was a lovely phone, but at the time my iPhone 3G was still fully supported, still ran the newest software, and still performed its job dutifully. When iPhone 4 arrived, I decided it was a worthy upgrade for me. It performs its job as a mobile computer leaps and bounds ahead of my previous device. The networking is fast, the CPU is fast. I never find myself waiting for the phone to do anything. The screen is brilliant. Seven months later, I've still not become used to it. It is an astonishing treat for the eyes. I have never beheld a device with such splendour. It feels so right sitting in my hand, like a machined gift from the future and all the same familiar. It's halfway between the Monolith and the handle of my frying pan as it rests in my hand while cooking a breakfast for two.

Life in 2011 is a constant barrage of objects, of reasons to drop old ones in favour of the new. I love the things I love not because they are irreplaceable (I would not be carrying lamps out of my burning home) but because they currently do exactly their respective jobs and do not need to be replaced.

Firefox 4, 5, 6 and 7 To Ship By End of 2011. It looks like Firefox is playing catch-up with Chrome. From the “tl;dr”:

  1. Ship Firefox 4, 5, 6 and 7 in the 2011 calendar year
  2. Always respond to a user action within 50 ms
  3. Never lose user data or state
  4. Build Web Apps, Identity and Social into the Open Web Platform
  5. Support new operating systems and hardware
  6. Polish the user experience for common interaction tasks
  7. Plan and architect for a future of a common platform on which the desktop and mobile products will be built and run Web Apps

Android 2.3 Gingerbread Has a User Data Exposing Exploit. When the researcher reported it to Google, they got back to him in ten minutes:

I notified the Google Android Security Team on 01/26/2011 and was pleased/impressed to receive their response within 10 minutes. After that, we exchanged emails, including a critical piece of exploit code, to better understand the nature of the vulnerability. […] The vulnerability is now confirmed and I was told that an ultimate fix will be included no later than the next major release of Android.

The Daily Staleness. I've been trying to put words to the general feeling of “meh” I have towards The Daily, and Ben Brooks nails it:

Do you the reader and consumer of news want information that is a day old? Would you prefer an hour old, perhaps a minute old, seconds, what about real-time? If you take an honest poll of news readers that are using iPads and you will find that for most the sweet spot is a few hours old.

I think there is something else, and this staleness is just symptomatic of a bigger problem. The main problem with The Daily, as I see it, is a pathetic clinging to an old and dying medium of print publishing.

In the print world, advertising is key. Everything revolves around the advertiser. Content is created to lure more eyes to sell more advertising. The journalists and writers busting their necks day after day are merely cogs in the Great Print Advertising machine, and they falsely “believe” (on some level at least; I'm sure they're aware of how it really goes) their content is what's being bought. Wrong. Readers and their attention are what's being bought.

The Daily focuses on this and this alone: Let's put junk together once per day because that's what we've always done in print. If you watched the live Question and Answer portion of The Daily launch event, then you would have heard Jon Miller say (I'm paraphrasing as I don't have a proper recording. Corrections are welcome):

In case I didn't make it clear during the presentation, we love advertisers.

The unfortunate thing is this model pays the print execs big bucks, and the reporters get next to nothing. We're in a new world of internet media, where every player has an equal footing. I hope soon we'll start seeing the “One-Person Newspaper”, a whole new model to turn the print world on its head.

YouTube5 Safari Extension. This isn't new but I thought I'd offer a PSA about the excellent YouTube5 extension for Safari 5 users.

It lets you watch the H.264 versions of YouTube, Vimeo, and Facebook videos instead of their Flash counterparts. It also has a nice custom video player which mostly stays out of your way.

If you loathe Flash, you owe it to yourself to install this extension.

Shawn Blanc on Writing vs "Writing". Shawn's thoughts on the difference between being a Writer, and just doing actual writing:

I may never be a capital “W” Writer. I may never win a Pulitzer, or write for the New Yorker, or even get pen to ink for what could be the next great American Novel. But I want to shoot for it. I want to be the best. I want my writing to be engaging, clever, and quotable. I want my articles to be insightful and memorable. But that will never happen if I only ever allow myself to write when it feels like Writing.

Lately, I've been trying to find motivation to write more often. Shawn's piece really accentuates the problem I've been having. I don't consider my writing to be special, but I'd like it to be, and it seems the only way to achieve it is to pay my dues by writing every chance I get.

Expect a few more interesting links I've seen since Shawn posted his yesterday. Hopefully we can all find a little bit of collective inspiration. If you've been putting off writing, now's a great time to pick it up again.

Patrick Rhone on Not Writing. Patrick Rhone, curator of the excellent Minimal Mac encourages writing even when you don't know what to say:

It’s OK not to have anything to write about. But if you want to call yourself a writer, you kind of have to write. So, even if you have no idea what to put on that page, just sit down and write the first things that come to mind.

The Myth of the Perfect Writing Environment. No excuses.

If you’re passionate and dedicated and intend to get your writing done, buck up and do it. If you don’t have your perfect Moleskine with you or you left your lucky pen at home, then write on something else. Use a napkin. Use a crayon. Write. Get it done. Put your words into something so that you can look at them outside of your head. Get the first thoughts out. Get your notes into a format that will generate a real piece when the time is right.

Why I Write

There are a few reasons I've been writing more lately, each with varying degrees of influence. It begins with two stories:

When I was in high school, I noticed I was quite proficient in the essay-writing courses (except for one English course, where the teacher wrote, triple-underlined in red ink: “Out of which WINDOW did you THROW OUT EVERYTHING YOU KNEW ABOUT GRAMMAR??”), and I scored a 99% in my grade 12 Writing course. Though everyone did quite well in the Writing course, it was the first time I'd ever really aced a subject in school and it really solidified my belief I had talent in writing.

My performance in non-humanities classes has always been less than stellar, but courses with extensive writing have always earned my best grades. It's not that I'm particularly great at writing, and it's not effortless, but it's something I find enjoyable, and I like to believe I'm good at it. I find myself never practicing, and most school essays have been written hours before they were to be handed in.

I'd found myself wanting, for a long time, to rectify this situation, to practice writing on my own and to publish it as well. In recent years, I've tried various approaches to web-hosted writing, but none of them became habitual. I found the software available frustrating to use, and I'd use that as an excuse to not write. It would take a close friend of mine to unknowingly inspire me to fix this.

My good friend Ash Furrow had a blog of his own, where he often wrote about Science, Atheism, Student Life, among his many other passions. While I didn't always agree with every word he wrote, I always admired his fervour and tenacity to write well and write often. I distinctly recall one cold January 2010 evening, after reading yet another article from Ash and having the desire to “Do that” surge through me. That night I designed and began building what would be known as “Colophon”, the software which runs this website.

I finished and published Colophon as this website in the Spring of 2010 and published articles throughout the Summer, but by Fall, I had lost my steam again. Writing became less of a priority for me and eventually I stopped altogether. It even had to be brought to my attention when the website broke!

As I mentioned in the Redesign article, I want to be writing again. I have a few possible directions in my head for what this might entail in the coming year, but for now suffice it to say I want to keep this website brimming with fresh content. Like all habits, these things take time and persistence, but in the end I think it will be worth it.

WSJ Rumours "iPhone mini". From the Journal:

Apple Inc. is working on the first of a new line of less-expensive iPhones and an overhaul of software services for the devices, people familiar with the matter said, moving to accelerate sales of its smartphones amid growing competition.

The entry-level iPhone is $200 USD/CAD, which isn't terribly expensive. The high price of app phones stems from the monthly service fees. So this difference in purchase price means little in the long run.

I'm not so convinced Apple will release some form of “iPhone mini” because of this.

Beautiful New Typeface "GT Walsheim" from Grilli Type. And it's On Sale today only:

To celebrate the release of the full eight weight family on Valentine’s Day, we offer the full GT Walsheim Family for $250. Forget your loved ones and buy GT Walsheim! This offer only runs until the end of Valentine’s Day (CET).

Apple Launches Subscriptions on the App Store. Apple:

Publishers set the price and length of subscription (weekly, monthly, bi-monthly, quarterly, bi-yearly or yearly). Then with one-click, customers pick the length of subscription and are automatically charged based on their chosen length of commitment (weekly, monthly, etc.).[…] Apple processes all payments, keeping the same 30 percent share that it does today for other In-App Purchases.

The 30% cut for subscriptions is causing a bit of a stink around the web this morning. If the user subscribes from the app, Apple gets 30%. If the subscription came from outside the app, you get 100% and Apple takes nothing. But your prices in and out of the app have to match. And you can't have a link in your app for users to subscribe elsewhere.

We'll see how this affects services like Netflix.

Why Apple's In-App Subscriptions Are Great

There's been a bit of a commotion surrounding Apple's announcement this morning of In-App Subscriptions for “content-based apps” (a term which has yet to be given a formal definition; does Netflix fall under this designation?). The consensus on Hacker News seems to be “What gives Apple the right??”. And from the perspective of a large content-producing company, this is bound to taste a little sour.

Taking a moment to project the possible outcomes of this announcement into the future, however, it seems like this new subscription model could usher in a brand new era of users, once again, paying for the content they consume.

Newspapers and other similar content providers have struggled to monetize their content in a subscriber-based fashion over the last decade of the web's dominance. Consumers have not been inclined to pay for content they consume on the web. There has always been a somewhat large barrier to paying for content, as signing up and paying is quite a hassle (credit cards, billing, etc.). But for the majority of iOS users, their primary payment method is already configured and ready to go. They've proven this ten billion times over.

This of course doesn't mean that because consumers can purchase content they will, but it certainly doesn't hurt, either. What needs to catch up now is the quality of the content itself. From what I've seen of “The Daily” so far, the content just isn't that great.

Enter the Indie

This may not turn out to be the panacea Big Media is hoping for to solve their monetization needs, but there's a good chance this can spur lots of independent content producers to keep creating great content, and also make a buck doing so. I've already spoken about this before, and I think it's worth reiterating: if you're an independent writer or journalist of some kind, you'll make peanuts working for Big Media. The new In-App Subscriptions will allow independent content producers to derive their own income from their own content, on their own terms.

Again, Apple hasn't solved the problem of the consumer who doesn't purchase, but they're enabling those willing customers to subscribe simply and easily. Only time will tell if our content can convince the consumers to do so.

Foreign Hackers Attack Canadian Government. CBC:

An unprecedented cyberattack on the Canadian government also targeted Defence Research and Development Canada, making it the third key department compromised by hackers, CBC News has learned.

What I'd like to know, and this is a legitimate question, is what sort of operating systems our defence machines run. And if it's off-the-shelf, then why not build a better one in-house. Aside from “security through obscurity”, it could be built with the government's security needs in mind, instead of as an afterthought.

IBM Centennial Film: They Were There. A beautiful 30-minute look back at IBM's first hundred years, directed by Errol Morris. I meant to link to this about a month ago, but if you're an IBMer and you haven't seen it, you're in for a real treat.

Cleaning & Destressing. My friend Ash Furrow on being stressed:

I'm working at home and I'm dealing with stress. Something you should know about me: I clean when I'm stressed and/or trying to procrastinate. Something you probably already know about me: I drink a lot of coffee. I find it really hard to concentrate in a messy environment. I can't cook in a dirty kitchen, meaning if I had my way, I'd clean before and after I make each meal.

I'm exactly the same way. Except coffee is the devil's beverage.

Ash Furrow On Style. From the Furrow'd Brow himself:

I’ve been doing research into men’s style. I’m not talking trends or “what’s in” (that’s why I called it “style” and not “fashion”). I’m talking about basics: how to properly iron a shirt, take care of leather, the history of pocket square (it was the Greeks!), and so on.

Interesting thoughts on clothes vs. man.

I also wonder if Ash reads/watches Put This On.

Cappuccino 0.9. Cappuccino is an excellent web framework, written in Objective-J (Objective-J is to JavaScript as Objective-C is to C, and the two Objective syntaxes are nearly identical).

We’re really excited to announce the next major release of Cappuccino, Version 0.9. This massive release includes several killer new components, exciting new features for existing components, and of course a number of bug fixes. Here’s a brief overview of some of the compelling things you’ll find in Cappuccino 0.9

He Loved Big Brother. I've been talking with some people about the world we live in and I thought this essay might be relevant to the conversation:

After an examination of what an Orwellian society is comprised of, I will show ways in which our governments mimic those of the Party of Nineteen Eighty-Four and how through certain scare tactics, we have come to accept these new “provisions” for our own safety.

I wrote this for a class in 2009. I'm also trying to get in the habit of publishing more work.

More Wonderful News About Lion. AppleInsider:

Much like multitasking on Apple's iOS, Mac OS X may terminate an application behind the scenes when it goes unused or has no open windows. The application usually relaunches instantly when the user accesses it again.

Sebastiaan de With On Notifications. webOS notifications:

To illustrate that, let’s grab a smartphone. Anything the size between a tiny Palm Pre or a huge HTC EVO 4G will do. Hold it in your left or right hand, and tap the bottom of the screen with your thumb. No effort, right? Now tap the top of the screen. See how much more effort that takes?

Same general idea as Android, but easier on your tendons.

Desktop Workstations. Ash Furrow:

However, I’ve seen other geek’s workstations, and they’re always impressive. Multiple monitors, keyboard, backlit everything, wireless everything, and ne’er a cat’s hairball to be found. These guys seem more like posers to me, based on what I’ve seen from my friends’ home workstations.

Here's mine. I did clean before the photo, but I generally keep only a few loose notes and a plate or extra glass on the desk when I'm between meals. Everything aside from my keyboard (which is now wireless) and mouse sits parallel to my display, so I have lots of hand room on either side.

Move Fast And Break Things. Ash Furrow:

Breaking things, and then fixing them, is a fundamental exercise in a Computer Science degree. If you can’t fix it, then you don’t understand it, so you’re not getting your tuition’s worth. Students study at university not to get a nice-looking transcript; they come here to learn.

Moral of the story? Let Cody test it.

"Enough 14: Don't Worry. Do.". Episode 14 of the Minimal Mac podcast, featuring a tearjerking story of bi-polar disorder from Patrick Rhone.

Patrick writes and curates Minimal Mac, a website devoted to getting the most out of minimalism, be it on a Mac computer or life in general. His words are full of wisdom. He is a modern sage.

It's hard to feel anything but incredible respect for Patrick, more so now than ever.

Overview of Changes in iOS 4.3. The most important and long-overdue change:

On all devices, if you’ve already bought or downloaded an app but it’s not currently loaded on your iPhone, iPod touch, or iPad, instead of the Free or price button, you get an Install button. (This doesn’t seem true in all regional App Store yet, but is working in the US App Store.)

Previously it would just list the price of the app, and it wasn't until you pressed “Buy” and entered your password that it would tell you “You've already purchased this app. Press OK to download for Free.” An alarming experience, at best. Glad to see it finally fixed.

The Inhumane Interface

We have two keys on our keyboards designated for deleting text: the backspace key and the delete key (these may even be the same physical key, as they are on my keyboard). That is, of course, unless you are in the “Selected text” mode, in which case any key on your keyboard will delete your selected swathes of text.

This is an incredibly poor user interface, and it is unfortunately shared by every operating system I have ever used, including the brand new iOS. I am curious as to who decided this was a good idea. It's been a while since I've read Jef Raskin's book (Raskin was the creator of most of the original Macintosh user interface, which every modern operating system takes after to this day), but such modal and destructive behaviour seems unlike the rest of his teachings (I could be mistaken).

I believe this to be a poor user experience because selecting text marks a range for manipulation (copying, moving, style changes, etc.). Of course, you might also like to erase the text, in which case you can use the Delete/Backspace key. But having any key also function as “replace all this text” while you're solely in a text-manipulation mode is a destructive action that was never explicitly commanded by the user (it might feel explicit if you've experienced the behaviour before, but consider a novice user). I can't think of a good way to rationalize this to a novice user.
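To make the objection concrete, here's a toy model of the behaviour (my own construction, not any real toolkit's code): a text field where any keypress, while a selection exists, silently destroys the selected range.

```python
# Toy model of "typing replaces the selection" -- not any real toolkit's
# code, just enough to show the modal, destructive behaviour.
class TextField:
    def __init__(self, text):
        self.text = text
        self.selection = None  # (start, end) when in "selected text" mode

    def select(self, start, end):
        self.selection = (start, end)

    def keypress(self, char):
        if self.selection is not None:
            # Selection mode: ANY printable key replaces the whole range.
            start, end = self.selection
            self.text = self.text[:start] + char + self.text[end:]
            self.selection = None
        else:
            self.text += char

field = TextField("Hello, world")
field.select(0, 5)   # mark "Hello" for manipulation
field.keypress("x")  # one keystroke, and five characters are gone
```

The same keystroke means two very different things depending on an invisible mode, which is exactly the kind of thing Raskin railed against.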

The only argument in favour of this behaviour I've ever heard is “People are used to it already, it would be difficult to change”. I've heard this argument applied to many poor-but-existing interface paradigms and I have to say it's completely bunk. The fact is the number of users accustomed to this paradigm could be shrinking every day (they die), while untrained users are arriving every day (they're born). We shouldn't punish new users just because of what we're used to.

Vietnam, 35 Years Later on The Big Picture. While on the subject of The Big Picture, here's a set of breath-taking and sometimes horrifying photographs from the Vietnam War.

(If you're doing the math, it was 35 years ago last year. This post is a year old).

More on "V. Lowe". More information about Valerie Lowe, the woman I singled out in my aforementioned link of Australian mugshots from the 1920s:

Valerie Lowe and Joseph Messenger were arrested in 1921 for breaking into an army warehouse and stealing boots and overcoats to the value of 29 pounds 3 shillings. The following year, when this photograph were taken, they were charged with breaking and entering a dwelling. Those charges were eventually dropped but they were arrested again later that year for stealing a saddle and bridle from Rosebery Racecourse. In 1923 Lowe was convicted of breaking into a house at Enfield and stealing money and jewellery to the value of 40 pounds.

The photo was taken February 15, 1922. Amazing.

Caterina Fake on the Fear Of Missing Out. Caterina Fake:

Social media has made us even more aware of the things we are missing out on. You’re home alone, but watching your friends status updates tell of a great party happening somewhere. You are aware of more parties than ever before. And, like gym memberships, adding Bergman movies to your Netflix queue and piling up unread copies of the New Yorker, watching these feeds gives you a sense that you’re participating, not missing out, even when you are.

I won't spoil the last paragraph but it's well worth the read.

Creating a git repository for an existing Xcode project. Ash:

New Xcode projects will let you keep local git source control. Using an existing project with this local source control is … difficult. I tried for a while before almost giving up. Documentation around the “Add Working Copy” is almost non-existent. This is how you do it.


$ cd path/to/project
$ git init
$ git add .
$ git commit -m "Initial commit of project"

Keep in mind that with either approach, it's a good idea to include a good .gitignore file. Here's a good one for Xcode projects.
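For reference, a minimal Xcode .gitignore from this era looks something like the following (these entries are common conventions, not necessarily the linked file's exact contents):

```
# User-specific Xcode state
*.pbxuser
*.mode1v3
*.perspectivev3
xcuserdata/
# Build products
build/
# macOS cruft
.DS_Store
```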

How Does UIView's -removeFromSuperview Work, and What Does It Mean for Objective-C's Inheritance Model?

While working on a project today I encountered an interesting problem: how does UIView implement its -removeFromSuperview method? I'd like to detail the steps I took to figure out the answer, and also show what I've learned because of it.

The Problem

UIView is the main drawing and container class for iOS. It maintains a tree of subviews, thus creating a hierarchy of rectangles on-screen which we like to call “Views”. Each view on screen has a parent view (called its superview) and zero or more child views inside it (called subviews; technically, a subview may appear to be visually outside its container, but it's still considered to be inside). As you can imagine, every screen you see on an iOS device has one big containing root view, which in turn has subviews inside it, each of which has subviews inside of it, and so on. This is the iOS view hierarchy.

UIView has two main methods to manipulate its place in the hierarchy: -addSubview:, which is used by a parent superview (“parent superview” is redundant, I'm aware, but it makes it simpler to explain) to add a child; and -removeFromSuperview, which is called on a child in order to remove it from its superview.

It's very easy to imagine an implementation of -addSubview:. Here's a simplified version of how it might look:

- (void)addSubview:(UIView *)subview {
    [subview removeFromSuperview];
    [subview setSuperview:self];
    [_subviews addObject:subview];
}

It first removes the subview from its existing superview, then sets its superview to be the parent (self), and finally adds it to our mutable array of subviews. Simple enough. But now it comes time to dig into -removeFromSuperview, and things become a little trickier. At first glance, a very simplified version might look like this:

- (void)removeFromSuperview {
    [self retain];
    // ...
    NSMutableArray *superSubviews = [_superview subviews];
    [superSubviews removeObject:self];
    // ... cleanup
    [self release];
}

It seems simple enough, until you look at the documentation for UIView and realize subviews is declared as a readonly property of type NSArray, and an NSArray cannot be modified. So while it's possible to access the subviews of our superview, they're returned to us in an immutable array, and we can't directly remove the current view from it.

Investigate By Example

It just so happens UIView on iOS shares the same fundamental design as NSView on Mac OS X (internally, UIView is different because it uses a hierarchy of CALayers, but as far as the application programmer is concerned, the two view classes are strikingly similar). Looking at the documentation for NSView, we quickly find ourselves faced with the same problem: we can't modify the array of subviews given to us by -subviews. But NSView is still useful to us, not for its implementation on Mac OS X, but because of its heritage.

It just so happens the NS of NSView stands for NeXTSTEP, an operating system built in the late 1980s, later acquired by Apple and transformed into Mac OS X. But before that happened, an open source implementation of the NeXTSTEP API was started, which exists today as a project called GNUstep. There also exists another open source project called Cocotron, a re-implementation of the Cocoa API on Windows. The benefit of both these projects is that they give some insight into how Cocoa, and thus NSView, could be implemented (that is not to say they show precisely how Apple does it, but it's probably not terribly far off, especially where it concerns just the basic mechanics, which is what we're focusing on).

With these projects in mind, a simple Google search for “NSView.m” turns up some excellent results. If you peer into the implementations of NSView in either project, you can get a feel for how it works. Unfortunately, GNUstep kind of cheats here, as its -subviews method returns an NSMutableArray, unlike Apple's implementation. However, Cocotron's implementation led me to the correct answer, and a great lesson about how Objective-C's @protected encapsulation works.

(Brief interlude about gawking at Open Source implementations of AppKit or UIKit in order to learn how they work):

In addition to scouring through the GNUstep and Cocotron source code, either of which may be significantly behind Apple's implementations at any given time, you can also look to 280 North's Cappuccino framework. While Cappuccino is written in Objective-J and not Objective-C, the syntaxes are very similar, and the Cappuccino API is modelled after Cocoa. While they don't match up one-to-one, Cappuccino's APIs are much more modern and in tune with Apple's newer offerings. It's another great way to figure out how parts of AppKit (or other first-party frameworks) might be implemented.

If all else fails, it's always worth Googling “XXAppleClass.m” to see if somebody has re-implemented the class themselves.

The Lesson Learned About Objective-C's Encapsulation Model

The solution ended up being rather simple: an object's instance variables are protected by default in Objective-C. This means the class in which they are declared, and any subclasses thereof, may access the variable directly, but no other class is permitted to do so (Objective-C has a little-used “direct-access-variable” operator ->, but it's ugly and generally frowned-upon. I consider it cheating). Accessing the variable directly from within the class is very simple, if you're dealing in terms of your own instance variables. But when your object has a reference to another object of the same class, you still can't access its instance variables without using the arrow operator.

In our case of -removeFromSuperview, both self and superview are instances of UIView, but we can't simply get superview's instance variables without resorting to the arrow.
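For illustration, the frowned-upon arrow version would look something like this (assuming a mutable instance variable named _privateSubviews, which is purely my guess at UIView's internals):

    // Inside UIView.m. Because self and _superview are both UIViews,
    // Objective-C permits this direct access to a @protected ivar,
    // but it bypasses any bookkeeping an accessor method might do.
    - (void)removeFromSuperview {
        [_superview->_privateSubviews removeObject:self];
        // ... cleanup
    }

It compiles, but it couples the two instances to the raw storage, which is exactly what encapsulation is supposed to prevent.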

Instance methods, on the other hand, are inherently public. If Apple had implemented UIView with an instance method to directly access its _subviews mutable array instance variable, then any class could modify the list at will. There needs to be a way for classes of the same type to be able to modify instance variables, but to not allow for other classes to do so. Public methods are not the answer.

The answer is methods declared in a class extension, which is a way to add essentially private methods to a class. Here is my guess at how UIView does it:

// UIView.m
@interface UIView ()
- (NSMutableArray *)privateMutableSubviews;
// ...
@end

@implementation UIView
// ...

- (NSArray *)subviews {
    // The public method
    return [NSArray arrayWithArray:_privateSubviews];
}

- (NSMutableArray *)privateMutableSubviews {
    // The private method, only used internally
    return _privateSubviews; // the mutable, internal instance variable
}

- (void)removeFromSuperview {
    [self retain];
    // ...
    [[_superview privateMutableSubviews] removeObject:self];
    // ...
    [self release];
}

// ...
@end

Because both _superview and self share the same UIView lineage, they both have access to the -privateMutableSubviews method declared in the class extension, while no outside class does.

The data is encapsulated and protected, and only the view hierarchy can touch it.

In the end, this ended up solving the wrong problem for me, but I realized it was a good exploration into the Objective-C programming language and I thought it was worth sharing.

A Followup to UIView Explorations

Since posting my article a few days ago about exploring UIView, and how its -removeFromSuperview method is possibly implemented, I've had more thoughts on it, along with some reader feedback, so I'd like to share those now.

First of all, I originally had incorrectly titled the article with “removeFromSubview”, which isn't the correct name, but neither I nor anybody on Programming Reddit / Hacker News seemed to notice (a visitor via reddit eventually did let me know about the mistake). Perhaps I should proof-read a little better!

Secondly, I've been thinking a bit about my recommendation to explore open source “implementations” of AppKit and other frameworks similar to those Apple publishes. One thing I didn't mention in my original article, but which is good advice: if you're poking around at these implementations, keep in mind it might complicate things if you ever decide to work for Apple. I have no idea what their legal policy is, but I know for many companies, looking at open source implementations of something similar to what you'll be working on can make you “dirty”, meaning that since you've seen open source implementations, you could contaminate their proprietary implementations, which could be a liability. Again, I don't know if this is Apple's policy, or if something like Cocotron would even constitute such a violation, but it would be prudent to keep this in mind.

Finally, I've had some comments from readers about my “guess” implementation of UIView's methods, where I really simplified things a bit too much. As UIView is really just a lightweight wrapper around Core Animation layers, it doesn't do much work for the view hierarchy on its own; CALayer is in charge of managing the tree. This implementation is quite different from [my understanding of] NSView's implementation, although the public-facing API remains similar.

Another reader commented that UIView might use a private method internally like -_removeSubview:, instead of my more convoluted approach of getting the mutable subviews array, and removing the view from that list. When I think about it this way, it makes a lot more sense. The lesson is the same (instances of the same class can invoke private methods on other instances), but the result is much cleaner, and probably more likely to be correct.
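A sketch of that cleaner variant, with the same hedges as before (the -_removeSubview: name is the reader's guess, and _privateSubviews is still my invented instance variable):

    // UIView.m
    @interface UIView ()
    - (void)_removeSubview:(UIView *)subview;
    @end

    @implementation UIView

    // Private: only other UIView instances ever call this.
    - (void)_removeSubview:(UIView *)subview {
        [_privateSubviews removeObject:subview];
        [subview setSuperview:nil];
    }

    - (void)removeFromSuperview {
        [self retain];
        [_superview _removeSubview:self];
        // ... other cleanup
        [self release];
    }

    @end

This keeps the mutable array entirely inside the superview; nothing, not even another UIView, ever touches the collection directly.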

Of course, if I really wanted to verify, there's nothing stopping me from setting some breakpoints and stepping through code, or using classdump to see what UIView is made of. I'll leave that as an exercise for the reader.

AppleInsider: "Motorola hedging Android bet with new web-based OS". AppleInsider:

The report also cited Deutsche Bank analyst Jonathan Goldberg, who said, “I know they're working on it. I think the company recognizes that they need to differentiate and they need options, just in case. Nobody wants to rely on a single supplier.”

This could be interesting. There are plenty of great engineers working at Motorola, so there's certainly the capacity to build something great. But it seems like a bad idea given the Android platform is gaining so much steam. Could the “MotorolaOS” have any chance against that?

Some other points of interest:

  1. The report cites an “analyst”, and analysts never seem to have any real information. I've never heard of an analyst being correct on tech news.

  2. Motorola just released an Android tablet last month. Do you think they're ready to shift gears so fast? If so, does this mean Android is no good for them? Or does it just mean they're hopelessly clueless?

  3. Motorola bought 280 North this past Summer, the company behind the excellent open source Cappuccino framework for developing rich, client-side web apps. Doesn't this sound like the perfect kind of technology one might want to integrate into one's new web-based-OS?

An Extra Psychedelic Experience with the Electric Sheep screensaver on Mac OS X Snow Leopard

I discovered an interesting trick to enhancing the experience with the Electric Sheep screensaver on Mac OS X Snow Leopard:

  1. Install the Electric Sheep screensaver.
  2. In the Screensaver preference pane of System Preferences, select the Electric Sheep screensaver.
  3. Set your “Start screen saver” time to be something short (3-5 minutes).
  4. Press the “Test” button.

This will start up the screensaver, and you can enjoy the visuals for a few minutes. Due to what I'm guessing is a bug, after enough time has passed (the time you set in step 3), the real screensaver will kick in, but the test screensaver will still be running.

The visuals will rapidly flash and alternate between the test and real screensaver, and it results in a pretty psychedelic experience.

And it will scare the pants off your cats.

Housekeeping: April 2011 Edition

I wanted to take a moment to thank you, reader, for being a reader of my website. I appreciate you.

I started this website in early 2010 in hopes of encouraging myself to write more often, and to become a better writer. I set a modest readership goal for myself, and aspired to write often. I didn't meet those goals in 2010.

I gave this website a reboot in February 2011, and have since done a little better towards my goals. A few statistics:

  1. 139 articles.
  2. 10 feed subscribers in Google Reader (I don't know how many subscribe to the ATOM feed using a different feed reader; I don't track those).
  3. Just shy of 2500 visits in 2011.

By most standards, these statistics are modest, but I am proud of them, and more importantly, I am very thankful to have you as a reader. As my school term winds down, I'm looking forward to pushing myself more in terms of publishing articles and reviews. If you enjoy what you read, please let me know. If you don't enjoy what you read, please also let me know. Your feedback helps make my writing better.


Jason Brennan.

Xcode 4 Colour Themes

The default colour themes installed with Xcode 4 are pretty drab, and leave much to be desired when it comes to legibility. They seem to have been designed by programmers. However, your code doesn't have to look so bleak.

Last week, I discovered Ethan Schoonover's Solarized colour scheme, which is a thoroughly well thought out and designed palette for use in text editors such as Vim, or as general terminal colours. He provided both light and dark themes, which are in a word, beautiful.

Somebody was quick to port the colours to an Xcode 4 theme, although just the Dark one. Since the themes were available on GitHub, I decided to fork Varikin's project and add my own Light theme (Varikin did the hard work — the Light theme only differs by two colours). My changes have been accepted back into the original project.

Until Xcode 4, I'd been using a different theme, Humane, with minor modifications. But as the theme format changed with Xcode 4, I just defaulted to using the standard colour theme. After some digging around this week, I discovered a tool to upgrade Xcode 3 themes to the newer format. All three themes use Menlo 12 pt., although 11 pt. works well, too.

Comparing my modified Humane theme with the Solarized Light theme, I certainly prefer the latter. It's just better designed. However, I thought I'd share all three, and the conversion script, with anyone interested in making Xcode easier on the eyes.

I've grouped the themes together in a GitHub repository that you can browse at your leisure. The ReadMe also shows a preview image of what each of the three themes look like while editing Objective-C, so you can get an idea of how it might look with your code. Also included in the ReadMe are instructions for installing the themes.

If you make any modifications to the themes, please fork the project on GitHub and be sure to let me know if you make any improvements. If you have other themes you think are worthy of this collection, let me know as well.

This Week's Speed of Light Outage

You may not have even noticed but the website underwent some strangeness this week. Here's what happened:

  1. I posted a link to some amazing Star Wars posters from my laptop while I was away.

  2. When I pushed the article to the website, my migration script (Migratron) crashed and mashed up the database.

  3. No big deal, because of how Colophon works, right? I'll just wipe the database and re-migrate everything. Well, not so fast: the script kept crashing and some articles just wouldn't show up.

So what happened? Well it turned out the copy of the app I use to edit my posts was outdated on my laptop, and so there was a bug in the copy which corrupted my article file, which crashed my migration script.

Thankfully, because my entire site is kept in a git repository, all it took to go back before the error was a git revert $COMMIT_HASH.

The moral of the story is to make sure you keep your software up to date, and to use source control whenever possible!

LayerStyles Layer Builder for CSS. A really neat web app to define CSS div styles using a “Layer Style” window similar to Photoshop, with a live preview in the background. The style information can then be exported as CSS.

Xcode 4 Code Snippets

Perhaps my favourite feature of Xcode 4 is the Code Snippets feature. It allows you to insert common bits of code quickly, instead of requiring you to retype them over and over again.

For example, every Objective-C class requires you to provide a dealloc method to clean up any memory you might be hanging on to (of course this is not needed if your subclass hasn't done anything it needs to clean up, but generally you should be releasing or freeing any memory you've asked for in the class if appropriate). One mistake I've often made is the following:

- (void)dealloc {
    [someObject release];
}

While this looks fine, the problem is I've neglected to invoke the super's implementation of the method (this ought to generate a compiler error, because you compile with -Werror, right?). To avoid errors like this, Xcode provides a snippet which demonstrates the proper pattern, has a placeholder for your code, and is available by simply typing “dealloc” and accepting the completion suggestion. It generates the following code for you:

- (void)dealloc {
    <#deallocations#> // this will appear as a blue bubble;
                      // pressing Return accepts the bubble as text
    [super dealloc];
}
Xcode provides a bunch of these Code Snippets, which you can find by opening the Utilities View on the right of your window. Near the bottom, you'll find the Code Snippets Library, in addition to the File Template Library (which works similarly). Snippets can be broken down by platform (Mac OS X, iOS) or viewed all together. Look through the available snippets to see how they might save you from repetitive typing in your code.

The provided snippets are useful, but the cause for my adoration stems from being able to add my own snippets to the library. I've found this to be incredibly useful.

  1. Find some code you'd like to be a snippet and select it.

  2. Drag the selected code to the Snippet Library.

  3. Give it a title and optionally a summary. These are used when browsing through the Snippet Library list.

  4. Choose a completion shortcut to invoke this snippet. My preference is to use the same letter, repeated three times (ie ddd) because it's quick to type, and it's very unlikely to conflict with another completion symbol in my code. Not only does it avoid direct conflicts, but it also lets me filter down the list of suggestions quickly, so I can accept the completion as quickly as possible.

  5. Choose a platform and language, and a Completion scope (defaults to the current scope), which lets you choose when the completion shortcut will be available (it wouldn't make sense for your custom initializer method snippet to appear in an Objective-C @interface).

  6. Finally, you may wish to edit the snippet code you've added. The mini-editor is syntax highlighted for you, although unfortunately you can't invoke other snippet completion shortcuts here! Sometimes trying this causes Xcode to have an aneurism.

  7. Bonus: If you'd like your own “blue code bubbles” to appear as part of your snippet, just type the following in your snippet: <#text in the bubble#> When the user invokes the shortcut for the snippet, the inserted text will contain any blue code bubbles you've included between the <# and #> delimiters. They can be tabbed back and forth (that is, pressing Tab will select the next bubble) and can be accepted by pressing Return. You'll see why these are useful in my examples.

Snippets I find useful

Retain/Assign property: Completion shortcuts of rrr and aaa respectively.

@property (nonatomic, retain) <#type#> *<#name#>;
@property (nonatomic, assign) <#type#> <#name#>;

Synthesize for a Custom ivar name: Completion shortcut sss

@synthesize <#property#> = _<#propertyIvar#>;

Useful when you have a custom naming scheme for your instance variables (such as _ivar or mMemberVariable).

Class extension interface: Shortcut eee

@interface <#class name#> ()

Pragma line and name: Shortcut ppp. This could alternatively be on one line. Also, when naming your mark label, write out the symbol name in full and it will be Cmd+double-clickable, ie “UITableViewDelegate methods” instead of “Table view delegate methods”.

#pragma mark -
#pragma mark <#Label#>

Define macro: Shortcut ddd. Xcode will autocomplete #define for you out of the box, but I find it slower, and it doesn't create tabbable bubbles for both the name and the substitution. I find adding this snippet to be much more efficient.

#define <#name#> <#substitution#>

While code reuse is an incredibly important factor, there are some bits you just can't help but write over and over again. The Code Snippet library can seriously reduce the hassle of this.

Public Service Announcement Cautioning Against Negative Values in CGRect. From the Documentation (emphasis mine):

The height and width stored in a CGRect data structure can be negative. For example, a rectangle with an origin of [0.0, 0.0] and a size of [10.0,10.0] is exactly equivalent to a rectangle with an origin of [10.0, 10.0] and a size of [-10.0,-10.0]. Your application can standardize a rectangle—that is, ensure that the height and width are stored as positive values—by calling the CGRectStandardize function. All functions described in this reference that take CGRect data structures as inputs implicitly standardize those rectangles before calculating their results. For this reason, your applications should avoid directly reading and writing the data stored in the CGRect data structure. Instead, use the functions described here to manipulate rectangles and to retrieve their characteristics.

Good to know.
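To make the documentation's example concrete (CGRectStandardize and the CGRectGet functions are part of the real CGGeometry API):

    CGRect r = CGRectMake(10.0, 10.0, -10.0, -10.0);
    CGRect s = CGRectStandardize(r);
    // s has origin (0.0, 0.0) and size (10.0, 10.0).
    // Note that CGRectGetMinX(r) and CGRectGetWidth(r) already return
    // 0.0 and 10.0, because the CGRectGet functions standardize first.
    // Reading r.origin.x directly would give you 10.0 instead.

This is the gotcha in a nutshell: the struct fields and the accessor functions can disagree, so stick to the functions.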

Code Indentation and Nesting. Ash Furrow on avoiding unnecessary code indentation:

Since first year when a sage upper year student showed me how to avoid unnecessary indentation in my code, it's something I've tried to do. I won't correct existing code since it doesn't improve performance, but I structure my code to avoid superfluous tabstops.

I have a similar habit, but it's not so much in the name of avoiding indentation as it is avoiding nesting. At first glance, the two appear to be similar (they commonly have an identical appearance, even). But the crucial difference, as is often the case with program writing, is semantic.

The greatest benefit of this style is bailing early. Instead of deeply nesting your code (and thus heavily indenting it), you have simple statements designed to end execution in as few instructions as possible, and designed to be as simple to follow as possible. Consider the examples:

- (void)doSomethingWithString:(NSString *)s {
    if (nil != s) {
        if ([s length] > 0) {
            NSLog(@"%@", s);
        }
    }
}

// VS

- (void)doSomethingWithString:(NSString *)s {
    if (nil == s)
        return;

    if (![s length])
        return;

    NSLog(@"%@", s);
}

Astute readers will notice a few things:

  1. My second example is actually longer than the first.
  2. My second example is more readable. Imagine there were 5 conditions I needed to check before I could execute the main section of the method. How ghastly would the nesting be in such a case?
  3. The nil and length checks, combined, are redundant in this particular example (because nil responds to messages by returning nil, ie 0x0, ie 0, calling [s length] would return 0 for both an empty and a nil string). It's contrived but hopefully illustrative.

There is of course the whole other aspect of this particular “bailing early” style when it comes to memory management. Extra care must sometimes be taken if you adopt this style. That will sometimes mean using autorelease more frequently, or it might mean you repeating your release code in multiple places to avoid leaking allocated objects. This is rare in practice, I've found, but it is something to keep in mind.
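A contrived sketch of that memory-management wrinkle under manual reference counting (the method and variable names here are made up for illustration):

    - (void)processString:(NSString *)s {
        NSMutableString *buffer = [[NSMutableString alloc] init];
        if (![s length]) {
            [buffer release]; // easy to forget when bailing early...
            return;
        }
        [buffer appendString:s];
        NSLog(@"%@", buffer);
        [buffer release];
    }

    // ...whereas autoreleasing up front keeps every exit path safe:
    - (void)processStringSafely:(NSString *)s {
        NSMutableString *buffer = [[[NSMutableString alloc] init] autorelease];
        if (![s length])
            return;
        [buffer appendString:s];
        NSLog(@"%@", buffer);
    }

The autorelease version trades a tiny bit of performance for never having to repeat the release before each early return.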

Further reading by Wil Shipley.

Chrome For Mac now supports Cmd+Ctrl+D for Dictionary Lookup Popup. This was perhaps the only thing keeping me from using Chrome on the Mac. Time to give it another shot.


This is now done for Chrome 13 :-). There's a few minor issues remaining, but the bulk of the work is done and you can now lookup words using Cmd+Ctrl+D.

Microsoft previews Windows 8 with Demo Video. Microsoft showed off a preview of Windows 8 tonight at the WSJ's D9 conference. From the press release:

  • Fast launching of apps from a tile-based Start screen, which replaces the Windows Start menu with a customizable, scalable full-screen view of apps.
  • Live tiles with notifications, showing always up-to-date information from your apps.
  • Fluid, natural switching between running apps.

The interface looks similar to Windows Phone's Metro UI, which I think looks rather nice, and much better than the current Windows 7 cheap-see-through-plastic aesthetic.

New apps are built with HTML, which leads me to believe Microsoft is finally treating the Web as an important platform, at least from a technologies point of view. It will be interesting to see how this turns out.

One negative point, however, is the way this was introduced. This is a major re-imagining of the Windows desktop user interface, and they unveil it at a WSJ conference and a lousy press release? The video included at the end is of very low production value, too. This is a Big Deal, but it looks like it's amateur hour.

This sort of unveiling deserves a big press event. Imagine how Apple would have treated this.

Be sure to watch the video though, to see it in action.

The Shift towards Responsive Web Design. Responsive Web Design, a term I believe was first coined by Ethan Marcotte on A List Apart, describes a web design philosophy centred around making web pages fluid and elastic, able to scale their content depending on the size of the page, re-arranging pieces as the page size changes.

One of the greatest examples of this would be Jon Hicks's site. Go ahead and give the site a visit. Fire up your least favourite web browser, so you can resize the window (this is the sole reason I have Firefox installed, because I need a browser I can haphazardly resize). As you resize the window, you'll see the elements of his page move around and resize, all the while maximizing the window real estate provided to it.

Of course, as you're browsing, you typically don't resize your browser window very often (if at all), so this may not seem so important. It seems more important when you realize just how many different screen sizes there are out there for your audience. These sizes range all the way from a 30-inch display, down to a mobile display, and everywhere in between. Responsive Web Design is about designing responsibly, so that your message is communicated effectively to every member of your audience, regardless of the device they choose to consume it with.

By using the techniques in Ethan's article, you can effectively scale your pages without resorting to lowly hacks like User-Agent sniffing, and without providing “special” mobile versions of your site. I've yet to use a “mobile” site that wasn't a terrible experience, because I'm either redirected to the homepage, or unable to view the content properly. With RWD, you've got one codebase, and one set of interactions.

The piece I've linked to by Mark Boulton really sums up my feelings about RWD, even though I'm not a designer. I'm a web user, and a rather snobby one at that, I'll admit. It's nice to read pages where care was taken for me, the reader. And while the techniques aren't perfect yet, we're trying. Mark puts my thoughts to words:

Like it or not, web design is maturing to a state of recognising the importance of content, presenting that content in different contexts that are appropriate for the users of those websites and applications. We’re also challenging web design practice that has been around for 15 years or so, and graphic and typographic design practice that has been around for close to a 1000 years!

I've recently done a little more work with web design, and I've learned a bunch of great things about how to shape a page, and how to keep that page responsive to different browser environments, and I plan to start applying those lessons to this website, as well.

The site scales down well to smaller devices, but it could be better. By removing all unnecessary content from the pages of this site, I've made the emphasis solely on the content. And as I continue to tweak it, I'll make that content even easier to consume. If you notice anything breaking in the next weeks, please feel free to let me know.

GitHub for Mac. GitHub has just released a new Mac client for interfacing with their webservice:

At GitHub, we think that sharing code should be as simple as possible. That’s why we created GitHub for Mac.

The beauty is, it works with non-GitHub repositories, too. You can manage all your local repos, make commits, see diffs, push and pull, and they can all be private repositories on your own servers.

There are some bugs, but overall, it's a nice release. See also their blog post.

Teaser poster for 'Brave', Pixar's next Film. It comes out June 2012. Director Mark Andrews:

“What we want to get across [with the teaser] is that this story has some darker elements,” director Mark Andrews tells EW. “Not to frighten off our Pixar fans — we’ll still have all the comedy and the great characters. But we get a little bit more intense here.”

Interview with Francisco Tolmasky, creator of Objective-J and Cappuccino. Great interview with one of the creators of Objective-J, which is like the Objective-C language used for iOS and OS X development, but for the web.

On “long method names”:

The fact of the matter is most programming is reading code, not writing it, so I don’t understand why you’d ever optimize for the 20% case. Objective-C/J really reads like english, you know exactly what’s going on most the time, and most importantly, you don’t have floating arguments in a method that you can only understand by looking up documentation. How many times have you seen the last argument of a method be “true” or “false”, and have had no idea what it could apply to? That never happens in Objective-C/J.

Connected Disconnect. Great piece by Stephen Hackett on Forkbomber about the concerns of a parent in the iPod generation:

This weekend, my wife and I enjoyed a weekend away in Franklin, TN. While out to eat for Sunday brunch, I noticed something that really got under my skin.

At the restaurant, there were — as you could imagine — several tables with large, multigenerational families enjoying brunch after church. At many of these tables, the kids weren’t talking with their parents, aunts and uncles and grandparents, however.

They were glued to iPod touches.

I'm not a parent, but I can sympathize.

When I was a kid, my father's house had this rule I loved: When we're eating supper, nobody answers the phone or watches the television. We sit and we talk with each other and we eat.

I loved it because nobody could interrupt our time together. If the phone rang, nobody would answer. If the call was important, they'd leave a message or call back. What a novel concept.

This is a rule I've tried to follow on my own, and it's why I might have missed your call at supper time. I'll call back.

It seems these moments of quietness bring with them an aura of being disconnected from the world. But in fact, it's the other way around: it's about being most connected to the world near you.

So next time you're having a meal with your loved ones, don't answer the phone (or check email or Twitter or Facebook, or…).

iPhone 2011

The following is pure conjecture, derived from an obsessive knowledge of Apple's past behaviour with iOS, and in no way reflects any kind of insider information (although…). This is my best guess as to what Apple will do this Fall.

Typically, Apple previews the new major version of iOS in late March or early April. They do this so they can get it in developers' hands to have it nice and tested for release. The developers also benefit by having apps ready and supporting the new features of the new iOS version.

The betas continue usually until about WWDC in early June, where Apple unveils a new iPhone model, and announces a ship date for the new major version of iOS, usually by the end of June. Both iPhone and new iOS version typically ship around the end of June, more or less at the same time (with the iPhone always being no earlier than the OS).

At some point in the middle of the Summer, Apple typically begins previewing the first minor update to iOS, version x.1, which tends to have a few incremental features. Developers are given betas throughout the Summer, culminating in the first week of September, when Apple has its annual “Music Event”, releasing updated iPod touch models and shipping the minor update to iOS.

This has been how it has happened since the inception of iOS. This year, things have changed.

There was no iOS major version preview event in the Spring. The preview happened at WWDC instead, and there was no new iPhone at WWDC, either. Now we're in late-Summer, and we're in the thick of iOS 5 developer previews with no new iPhone announced. All signs are pointing to iOS 5, a new iPod touch, and a new iPhone at this year's Early September Music Event, but why such a shift in plans?

Apple's yearly schedule is a great strength, as developers, the press, and consumers become familiar with it, and even plan on it. As the iPhone is Apple's sole smartphone, it must compete with the rest of the market, against many phones. There's only one iPhone per year, and it's got to matter. Since iPhone 4 was released July 2010, Apple's competition has grown substantially.

One part of the equation has most certainly delayed the rest. But which is it, the iOS or the iPhone? In the past, Apple has released new devices without a newer version of iOS to go along with them (see iPad 2, which was released this year without even a minor version change of iOS). If this year's iPhone had been ready for sale in June, but iOS 5 still needed work, I believe the iPhone would have seen release then, running iOS 4.3, and version 5 would have been previewed at WWDC just the same.

If Apple had no reservations about releasing a new device when the software isn't new, then it must be the hardware holding things back. Some earlier rumours of the new iPhone claimed it would be an iPhone 4S, essentially the same iPhone as the current model, identical from the outside, with upgraded components on the inside. This seemed plausible until no iPhone showed up at WWDC. If the new model were just a spec bump, why would there be any delay?

There must, then, be something at least slightly radical about the new iPhone, something which Apple certainly did not expect to cause such an impressive delay. Some people might remember the supposed Antenna “problems” the iPhone 4 experienced at launch time. Though the problems existed almost solely in headlines, many still think Apple has spent the last 12-15 months completely revamping the antenna designs, and this is the cause of the delay.

Quick interjection pertaining to the supposed dearth of “new” iPhones in 2011 thus far

While the yet-unnamed 2011 iPhone is delayed beyond the typical yearly iPhone schedule, Apple hasn't been idle in the months since iPhone 4's release. In fact, there have been two new iPhone 4 models in the interim: the CDMA (Verizon) iPhone 4, and the White iPhone 4 (which was originally to ship shortly after the iPhone 4 launch in Summer 2010).

The curious case of the curious case

The Verizon iPhone 4 is cosmetically identical to the GSM iPhone 4 released months earlier, and it, like the original, bore no traces of any serious antenna design flaws. The antenna design is thus adequate as is, and not in need of major redesign. This is unlikely to be the cause of the delay.

The other iPhone 4, the white one, is I think more important. It was originally intended to be released just a month or so after the iPhone 4 in 2010, but in an uncharacteristic screw-up, wasn't released until the Spring of this year. It seems to have taken an extraordinary amount of time to perfect the seemingly trivial art of a case in a colour other than black.

iPhones, and particularly iOS, have pretty much every hardware and software feature between them. There's not much left to differentiate iPhone hardware from one year to the next other than fashion (notice how many colours of iPhone Bumpers are available). It's exactly what they did with iPod nano, and it's precisely what I think they're doing with the new iPhone. Having finally perfected the white case, my guess is they wanted to take their time perfecting a spectrum of new iPhone cases, and the whole thing took them much longer than initially planned.

Of course, I also expect there to be specification bumps, but I can't imagine many other groundbreaking hardware features.

How many smartphones (iPhones or otherwise) have you seen toting a pink rubber case? Consumer electronics have entered the fashion lexicon, and Apple wants this year's iPhone to be on the tip of your tongue. Expect a very colourful Fall.

Review of "Programming iOS 4" by Matt Neuburg

I teach a series of iOS Developer courses in the city of Ottawa, ranging from Beginner to Advanced, and covering topics just about everywhere in between. The course is mostly based on content I've written myself, but it also depends on a textbook for supplemental material.

When the courses started in November 2010, the textbook I chose for the course was Erica Sadun's iPhone Developer's Cookbook. Though the book focused on iOS 3 (iOS 4 was still too new to have any published books at the time), most of it was still very relevant, even as the course began with iOS 4.1. However, I was never completely satisfied with the book. I found it focused on the wrong types of tasks, had questionable code examples, and just seemed to go in the wrong direction. Of course, being a Cookbook, it wasn't exactly intended to be a complete tutorial. It was, however, the best choice at the time.

In the Spring of 2011, I began my search for a newer and better textbook to be used for the Summer's offerings, hopefully one which could cover Beginner-to-Advanced topics. I was lent a review copy of Matt Neuburg's Programming iOS 4 from O'Reilly publishing, to evaluate for my future class offerings.

I immediately fell in love with this book. Even from the table of contents, I could tell this was going to be the book for my course going forward.

The book is relevant up to iOS 4.3 (the most recent shipping version of iOS as of this writing) and uses Xcode 4, also the most recent version of the development environment. It covers essentially every topic a developer will need to go from ignorance to bliss with the iOS platform.

Matt sums up the book rather well in his preface:

The widespread eagerness to program iOS, however, though delightful on the one hand, has also fostered a certain tendency to try to run without first learning to walk. iOS gives the programmer mighty powers that can seem as limitless as imagination itself, but it also has fundamentals. I often see questions online from programmers who are evidently deep into the creation of some interesting app, but who are stymied in a way that reveals quite clearly that they are unfamiliar with the basics of the very world in which they are so happily cavorting.

It is this state of affairs that has motivated me to write this book, which is intended to ground the reader in the fundamentals of iOS.

This 810-printed-page tome starts at the fundamentals, with a whole group of chapters forming a Language section: beginning with “Just enough C”, “Object Oriented Programming” and finally three more chapters introducing you to Objects in Objective-C, Classes, and other language features. These chapters provide the foundation for the rest of the book.

The next section prepares the reader for dealing with the Xcode development environment. Not only does Neuburg lay out the fundamentals of the Xcode app itself, he also goes through the parts of an Xcode project, which many learners find bewildering. Neuburg helps demystify it.

Additionally, and perhaps most commonly overlooked in most educational resources for iOS (excluding my course, natch), he devotes an entire chapter to Apple's provided Documentation, not only online, but as it exists in Xcode. He details how to read a Class Reference doc, and also where to look for help, be it a Programming Guide, Header file, or even Sample Code. Apple's Documentation is a crucial part of any iOS developer's toolkit, and it is finally given the attention it deserves in this book.

After nearly 200 pages, the book introduces the Cocoa section, with chapters detailing more advanced Language features like Categories and Protocols, and it also begins to focus on the crucial Foundation Kit of model classes. These classes are essential no matter which app you're building, and Neuburg gives them a thorough going over.

Continuing in the Cocoa section, the book moves on to describe the Cocoa event system, gives an incredible overview of the memory management rules for Cocoa, and covers some of the design patterns used throughout the frameworks, such as Model-View-Controller and Delegation.

The next section moves on to the View and Drawing system of iOS, with chapters detailing UIView and its subclasses, how Animation and Drawing work, and the important relationship between Core Animation layers and the UIView class. Touch handling (including Gesture Recognizers) is also given a thorough detailing.

Neuburg continues to deliver on higher level and important sections, such as the Interface section, covering View Controllers and their role in applications, Scrolling views and Table lists (perhaps one of the most important classes of an iOS Application), the Text system, and a plethora of other commonly used interface elements.

The last two sections of this textbook focus on nearly all the other frameworks of Cocoa Touch, along with topics like persistence, networking, and multithreading. About the only topic not covered in this book is Core Data, though that could easily be an entire book unto itself.

Programming iOS 4 is perhaps the best iOS book I've ever encountered, acting as the perfect companion to either an iOS bootcamp course, like the one I offer, or just a companion to solo journeys. It provides a sturdy foundation, and then builds on it. Even if you consider yourself an iOS expert, you still stand to learn something from this book, just like I did.

Britain's Proposed Banning of Social Media to Quell London Riots. Via John Gruber, who quotes:

In the wake of the riots in London, the British government says it’s considering shutting down access to social networks — as well as Research In Motion’s BlackBerry messenger service — and is asking the companies involved to help. Prime Minister David Cameron said not only is his government considering banning individuals from social media if they are suspected of causing disorder, but it has asked Twitter and other providers to take down posts that are contributing to “unrest.”

It should be noted, we Canadians have some protection against this. When I was a volunteer broadcaster at CHSR, the rules were spelled out rather plainly:

You're allowed to call for the overthrow of the government on-air, but you're not allowed to call for the violent overthrow of the government.

Replace “on-air” with “social media” and you've got an up-to-date version of our sedition laws.

Lion is a Winner

Matt Neuburg (yes, that same Matt Neuburg whose book I just gave a glowing review) has a piece up on TidBITS about the new feature in OS X Lion called Automatic Termination. As the name implies, this allows for an app to pseudo-Quit on you without your explicit intervention. As cited in the article, John Siracusa has a concise rundown of the feature and when it occurs in his herculean review of Lion:

Lion will quit your running applications behind your back if it decides it needs the resources, and if you don’t appear to be using them. The heuristic for determining whether an application is “in use” is very conservative: it must not be the active application, it must have no visible, non-minimized windows — and, of course, it must explicitly support Automatic Termination.

As Neuburg notes, Siracusa elaborates the implications of this feature. In short, the app only becomes half-quit, residing in a sort of Application Purgatory. It still has technical bits in memory, but it's not executing. It won't show up in your Dock or in your Application Switcher (Command+Tab), but the system still knows about it.

Read the rest of Neuburg's article for the straight facts of Automatic Termination as it exists today in Lion. It's his opinion I'm taking issue with.

Neuburg asks:

In particular, I propose to discuss the big question of whether Automatic Termination is a good thing.

He goes on to say, from a technical standpoint, he doesn't believe it's much of an improvement, because the App Purgatory isn't really saving the computer much in terms of resources or time. But I think this is the wrong way to look at the problem, technically, because “saving resources” in this case means memory, and “saved memory” in operating system terms is wasted memory. There's no sense in having unused memory. When the operating system needs more, it will page for more, like every major operating system has done for years.

Neuburg then turns his attention to the User Experience aspect of Automatic Termination, which I think is its real purpose.

The ultimate goal here is to annihilate the distinction between running and non-running applications. Consider how, on iOS 4, with multitasking, you don’t know what applications are running, because you usually don’t quit an application yourself (you switch away from it and allow the system to tell it to quit if needed), and you don’t care what applications are running, because both running and non-running applications are listed in the Fast App Switcher, and because apps are supposed to save and restore their state […]

Precisely. This is exactly what Apple is after. That the solution more efficiently manages system resources is a bonus. The real purpose is to alleviate the user's maintenance work.

He goes on to make some good points about the current implementation, and where it falls short. His experience with it involved Lion quitting an app while he was still using it, causing him surprise and frustration. For an inexperienced user, having an app suddenly disappear from the Dock would certainly cause confusion, and probably lead the user to think the app has crashed.

But then Neuburg reaches what I think is the nut of his argument [emphasis mine]:

Moreover, there’s a larger question at stake: Who, precisely, is in charge? I think it should be me, but Lion disagrees — and not in this respect alone. Automatic Termination is merely one aspect of an overall “nanny state” philosophy characteristic of Lion, and one which I find objectionable. When I tell an application to run, I mean it to run, until I tell it to quit; Lion thinks it knows better, and terminates the application for me. Conversely, when I tell an application to quit, I mean it to quit; but again, Lion thinks it knows better, restoring the application’s windows when the application launches again, and relaunching the application if I restart the computer. By the same token, when I tell an application to save, I expect it to save, and when I don’t tell an application to save, I expect it not to save; again, Lion wants to abolish a distinction and a choice that I think should be up to me.

This is where I disagree almost fully. It is an operating system's job not only to manage system resources efficiently, but also to serve the user of the machine. By doing so, it's supposed to alleviate all the stresses and friction involved in owning a computer. It's hard to imagine as power users, but for average users, the task of remembering which apps are running and which are quit is tedious, arduous, and at times seemingly Sisyphean. It's difficult for them, and it's not because they're stupid. It's because system designers have thus far been stupid, and have not catered to the needs of an average human computer user. When all the windows of an app are gone, so too should be the app as far as the user is concerned. The user shouldn't have to know the difference between “an app without windows” and “an app which has been quit”.

Apple is finally catching OS X up to the standards of iOS, and it's going to be a bumpy transition. The current implementation in Lion is not perfect. It ought to be more transparent, and the app shouldn't disappear just because it falls into App Purgatory (it should instead fake being active; a slight delay while it springs back to life has been shown to be nearly imperceptible to users. See Jef Raskin's The Humane Interface for more).

OS X brings with it decades of operating systems catering to computers instead of humans, and it's about time this reversed. Lion is the first step in this direction.

Lessons from last night's Speed of Light Outage

You hopefully didn't notice, but the site took a pretty serious hit late last night and was down from about 11:30 PM until about 4 PM today. At various points during the outage, some articles were available, some duplicated, and all of them out of order. This also affected the ATOM feed for subscribers. It's all back to normal now. Here's what happened.

Around 11:15 PM last night, I had just finished writing a rebuttal to Matt Neuburg's post “Lion is a Quitter”. The editing happens in a homemade app called Editor. I saved my article and began the publishing process. Something on the server crashed (an uncaught error) and the article didn't make it into the publishing system.

As you may recall from an earlier article, I described the basic layout of the publishing engine I wrote, Colophon, which drives this site (among others):

Articles are written in plain text files, formatted as markdown, along with a second JSON file, which describes some metadata about the article (title, creation and modification dates, etc.). These files combined are the raw source to all the articles of my website.

Instead of storing articles directly in a master database, they're stored in normal text files, which can be edited from any text editor. To publish them, I add them to my website's git repository and push the changes to my server. This triggers a migration script.

The script reads through all the changed files, then adds them to a database. The database is what's queried when you request a page. The database is more or less transient, in the sense I don't really care if it gets screwed up, because I can just wipe it and remigrate all the articles again with no harm done. This has happened countless times before.
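That migration step can be sketched in miniature. The following toy Python version is my own illustration, not Colophon's actual code: the table schema, the metadata field names, and the fixed date format (the real engine accepts natural-language dates) are all assumptions.

```python
import json
import sqlite3
from datetime import datetime

def migrate(articles, db_path=":memory:"):
    """Load (markdown_text, metadata_json) pairs into a transient database.

    The database is disposable: wiping it and re-running migrate() over
    the source files rebuilds it with no harm done.
    """
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS articles "
               "(title TEXT, created TEXT, body TEXT)")
    for body, meta_json in articles:
        meta = json.loads(meta_json)
        # A real engine might accept natural-language dates ("yesterday
        # at noon"); this sketch only parses one fixed format.
        created = datetime.strptime(meta["created"], "%Y-%m-%d %H:%M")
        db.execute("INSERT INTO articles VALUES (?, ?, ?)",
                   (meta["title"], created.isoformat(), body))
    db.commit()
    return db

article = ("Hello, world.",
           '{"title": "First Post", "created": "2011-08-27 09:00"}')
db = migrate([article])
rows = db.execute("SELECT title, created FROM articles").fetchall()
# rows == [("First Post", "2011-08-27T09:00:00")]
```

Keeping the canonical articles as plain files and treating the database as a disposable index is what makes “wipe it and remigrate” a safe recovery move.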

Up until now, none of this was unprecedented. I just re-migrated all my articles and things should have been fine. But it turns out there was a second, more nefarious problem I wasn't aware of: my host had updated some of the software Colophon depends on, introducing a bug in the date-parsing code. (The dates in my metadata files can be natural language. If I wanted, I could use a date such as “yesterday at noon” and it would parse correctly. Obviously, that would be silly, because “yesterday” is a moving target, but it makes for more readable dates.) This is what caused the articles to start appearing out of order.

The reason why they appeared multiple times was because it took me a while to track this down, all the while things were getting duplicated because my script wasn't ready to handle that case.

The bigger problem

The real issue here isn't so much the nature of Colophon, and it isn't so much my host changing things underneath me, it's the lack of good tools.

Having rolled my own publishing software means I have to roll my own tools, too. It's not a perfect solution, but it's one I enjoy. I wouldn't recommend it for many other people though.

The current tool I have is old and not entirely working on Lion. And it now has to support another website, which it isn't cut out to do. I'll report back when I've fixed this problem.

Brent Simmons: Less code, less effort. Some great Cocoa-related tips from Brent Simmons. Well worth the read. One thing I did not know about is NSCache:

It’s super-easy to use — like an NSMutableDictionary, it responds to objectForKey: and setObject:forKey:.

Particularly cool is that an NSCache handles removing items itself. I don’t have to think about it. It means fewer things I need to do when getting a memory-warning message.
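To make that behaviour concrete, here's a toy Python analogue. NSCache's real eviction policy is opaque and system-driven; this sketch substitutes a simple least-recently-used count limit, and the method names merely mimic the Cocoa ones.

```python
from collections import OrderedDict

class EvictingCache:
    """A toy analogue of Cocoa's NSCache: a dictionary that silently
    evicts entries on its own (here via an LRU count limit, which is
    an assumption; NSCache decides for itself, under memory pressure)."""

    def __init__(self, limit=128):
        self.limit = limit
        self._store = OrderedDict()

    def object_for_key(self, key):
        value = self._store.get(key)
        if value is not None:
            self._store.move_to_end(key)  # mark as recently used
        return value

    def set_object_for_key(self, value, key):
        self._store[key] = value
        self._store.move_to_end(key)
        while len(self._store) > self.limit:
            self._store.popitem(last=False)  # evict least recently used

cache = EvictingCache(limit=2)
cache.set_object_for_key("a", 1)
cache.set_object_for_key("b", 2)
cache.set_object_for_key("c", 3)   # silently evicts key 1
cache.object_for_key(1)  # → None
cache.object_for_key(3)  # → "c"
```

The point Brent makes survives the translation: the caller never cleans up, so a memory warning requires no bookkeeping on your part.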

Rembrandt drawing stolen from Hotel. Yet another art heist:

An 11-by-6-inch pen-and-ink drawing titled 'The Judgment' by the 17th century Dutch master is valued at more than $250,000. Authorities say a man working with accomplices is believed responsible for the theft late Saturday.

Also interesting:

Art experts reached Sunday said works by Rembrandt are among the most popular targets for art thieves, second only to those by Picasso, because of the artist's name recognition and their value. Anthony Amore, chief investigator at the Isabella Stewart Gardner Museum in Boston and co-author of the book “Stealing Rembrandts,” said there have been 81 documented thefts of the artist's work in the last 100 years.

(via Aaron Cohen on kottke.org, who was hosting Kottke during the aforelinked Paris heist)

A New York Times Review of David Foster Wallace's "The Broom of the System". A well-rounded review of the book I just finished reading, by the New York Times, originally published March 1, 1987:

Cleveland in 1990 - the setting for most of the story - borders the Great Ohio Desert (or G.O.D.), a man-made area filled with black sand, meant to restore a sense of the sinister to Midwestern life. The confused heroine is 24-year-old Lenore Stonecipher Beadsman who, despite her proximity to the sinister G.O.D., has a ridiculous life. Her friends hang out at Gilligans Isle, a theme bar; her grandfather designed a Cleveland suburb in the precise outline of Jayne Mansfield's body; her brother Stonecipher Beadsman IV is nicknamed the Antichrist; and her boyfriend, Rick Vigorous, half of the publishing firm of Frequent and Vigorous, is insane about Lenore but impotent when he's with her, so he tells her stories as a substitute for sex.

It's great fun and something of a linguistic marathon.

Four Ways We May, or May Not, Evolve. Thought provoking piece from National Geographic on the possible future history of Human Evolution. I find the first two arguments compelling, but the last two kind of bunk.

On the argument for continued human evolution:

“We're also going to get stronger sexual selection, because the more advanced the technology gets, the greater an effect general intelligence will have on each individual's economic and social success, because as technology gets more complex, you need more intelligence to master it,” he said.

Sounds like Technical Selection.

Disruptive - An open letter about tablets from Minimal Mac. Great piece from Patrick Rhone about how other companies should deal with the iPad. One part in particular rang true with my sentiments:

Oh, Google, sit down and shut the eff up because I’m talking to you too. You are the company that names your beta builds after candy, ice cream, and sugared cereals. Apple names their betas after things that will eat your things along with the tasty human wrapper that eats that crap. Do you honestly think anyone can take you seriously?

They all keep making turds and nobody seems to get it.

A Day in the Life of the Modern San Franciscan. A not-altogether-foreign typical day:

My alarm clock goes off. Presumably on my iPhone 4, because it’s very important to me that I own the latest technology. I hit snooze. I can’t believe I have to get up by 9 a.m. to make it to my place of work before 10 a.m. where I am paid to be creative and knowledgeable about “the internet,” just in general.

I check Twitter.

It sums me up kind of well, too. Read the whole thing through, and think about it.

We’re so interesting.

Jack Layton's Letter to Canadians. It's a fantastic read. I won't spoil the ending, but here's the penultimate paragraph:

And finally, to all Canadians: Canada is a great country, one of the hopes of the world. We can be a better one – a country of greater equality, justice, and opportunity. We can build a prosperous economy and a society that shares its benefits more fairly. We can look after our seniors. We can offer better futures for our children. We can do our part to save the world’s environment. We can restore our good name in the world. We can do all of these things because we finally have a party system at the national level where there are real choices; where your vote matters; where working for change can actually bring about change. In the months and years to come, New Democrats will put a compelling new alternative to you. My colleagues in our party are an impressive, committed team. Give them a careful hearing; consider the alternatives; and consider that we can be a better, fairer, more equal country by working together. Don’t let them tell you it can’t be done.

Jack Layton passed this morning. He was 61.

HP to Quell Rumours Of Returning Sanity Of Upper Management By Selling More TouchPads At Loss. HP has transcended stupid and has now entered aggravatingly dumb.

Despite announcing an end to manufacturing webOS hardware, we have decided to produce one last run of TouchPads to meet unfulfilled demand. We don’t know exactly when these units will be available or how many we’ll get, and we can’t promise we’ll have enough for everyone. We do know that it will be at least a few weeks before you can purchase.

Device started at $499. Device was canned, and fire-saled at $99, an incredible revenue loss. Device sells out. HP will now be making more devices, presumably continuing to be sold at a loss.

You can't explain that.

Speed of Light Refined

Readers who visit the site might notice some slight modifications to the site's design, and I'd like to take a moment to go over some of them. While the changes aren't significant enough, visually, to call this a redesign, the whole underbelly of the site's style and some fundamental pieces of its markup have been refined and re-written.

Markup
The markup for Speed of Light hasn't changed too much; it has always followed the additions of HTML5, but now the elements are nested a little better, in addition to some CSS hints for a better layout, which I'll get to shortly. Nothing too exciting here.

At the bottom of the main page, I've also removed the “Older/Newer Articles” links. My analytics told me almost nobody used them. Instead, I'm trying something different: a link to the entire archive. Yes, this means the page might take longer to load, but I find it much more convenient when I'm looking for something. I might tinker with this in the future, but for now I find it much handier than an arbitrary number of articles per page.

Style
The style for this website has been completely re-written from the ground up, though you'll notice the end result is quite similar to what I already had. The most noticeable difference is the mast header, which is no longer trapped inside its black box. It has much more room to breathe.

You might also notice the page itself just has far more room to breathe, with a better vertical rhythm. I grabbed a copy of The Elements of Typographic Style this summer, and this was the perfect application for it. In short, the lines and the spaces between them are no longer arbitrary. The whole design has been completely calculated and measured, and I think is much nicer on the eyes as a result.

I've also shrunk the text a little bit, though I'm not sure if I'll keep it so small. The result is body text has gone from around 19px to 16px. I generally tend towards larger type sizes, but I'm going to go with this for the time being. If you find it too small, get in touch. Alternatively, if you're using Safari on Lion, you can two-finger double-tap to have the articles zoomed in.

Responsive Design

The biggest change of all you probably won't even notice: I've switched to using an open source CSS project called 1140px, which is designed to respond to different window sizes. This means the site should scale well whether you're using a 30-inch or a 3-inch display. It also means the site scales down just fine on mobile devices such as iOS. Android devices remain to be seen; my guess is it will render well, but with terrible typography.

So try it out and let me know how it looks; if there are any major problems, be sure to get in touch. I'll continue to tweak it in the coming weeks, but that, in short, is version 2.5 of the design for Speed of Light.

"Is Google Making Us Stupid?". An older piece by Nicholas Carr about the effects of computers and the Web on our brains.

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

It's a long but great piece contemplating the mental and neurological effects of new technologies (for example, was it a struggle to read through the entire article without being distracted?). There's little doubt in my mind about these changes, but I don't think they're inherently bad, either.

18th-Century Ship Found At Trade Center Site September 2010. I read about this the other day. Fascinating:

On Tuesday morning, workers excavating the site of the underground vehicle security center for the future World Trade Center hit a row of sturdy, upright wood timbers, regularly spaced, sticking out of a briny gray muck flecked with oyster shells.

Introducing JBContainedURLConnection: A simplified, Blocks-based Wrapper Around NSURLConnection. If you build a project of any moderate size on iOS or the Mac, you'll almost certainly need to use NSURLConnection at some point to load a resource asynchronously from the web, typically from a web service (and you'll never do this with a method like -loadDataFromURL:, because that blocks the main UI queue).

You'll follow the same basic pattern every time: build a string of the resource you're trying to grab, build an NSURL, then an NSURLRequest, then finally start an NSURLConnection, setting yourself as the delegate, and have it fire off. Then you implement the same basic delegate callbacks for the class. Crap, you forgot to have an NSMutableData to hold the bytes as they're streaming in to your callbacks, better add that too. Then, when it's done, you'll need to do something with this data, and clean up.

You've probably done this a dozen times, shoving these methods in controller classes where they don't entirely belong. The code is the same except for your error handling and your completion data handling, but you retype it (or worse: copy and paste it) again and again. No more.

I've written JBContainedURLConnection as a class to wrap up all this work for you, simplifying the API to only the mere essentials—which are exactly what you'll need 99% of the time—along with some extra goodies to make your code even simpler.

There are two main ways to use this class: you can either use it with its own simplified delegate protocol, or use it with a Block object completion handler. You choose at initialization, and all the rest of the work is done for you. All you need to do is wait for completion (or error).


The delegate protocol is simple: you are required to implement exactly two methods, one for failure and one for success. Both callbacks pass to you the address of the resource you were trying to load, and the userInfo context dictionary passed in at initialization. If the connection failed, you'll also get an error pointer. If the connection completed successfully, you'll instead get an NSData pointer containing the resource you loaded. It's terrifically simple.

JBContainedURLConnectionCompletionHandler (Blocks based)

The simplified delegate protocol is nice, but it's still a lot of overhead in your code, as you've got to conform to the protocol, then make sure you've got both methods implemented in your implementation file. The Blocks-based route is much simpler.

All you do is call the initializer with a completion handler option, and implement the Block object right in line. This Block object is invoked on success or failure: if the error pointer is non-nil, this means there was an error, and you should act accordingly; otherwise the connection loaded your resource successfully and you can use the data. This route essentially combines the two delegate callbacks into one Block object execution, drastically simplifying your code.
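The shape of that API can be sketched in Python. To be clear, this is my own illustration of the completion-handler pattern, not JBContainedURLConnection's actual interface: the function and parameter names are assumptions, and a stub `loader` stands in for the real network fetch.

```python
import threading

def load_resource(url, user_info, completion_handler, loader):
    """Toy sketch of the Blocks-based pattern: kick off an asynchronous
    load, then invoke one completion handler with either data or an
    error (never both). `loader` stands in for the network fetch."""
    def work():
        try:
            data = loader(url)
        except Exception as error:
            completion_handler(url, user_info, None, error)
        else:
            completion_handler(url, user_info, data, None)
    thread = threading.Thread(target=work)
    thread.start()
    return thread

results = {}
done = threading.Event()

def handler(url, user_info, data, error):
    # One callback handles both paths: check the error, then use the data.
    results["data"] = data
    results["error"] = error
    done.set()

load_resource("http://example.com/resource", {"tag": 42}, handler,
              loader=lambda url: b"payload")
done.wait()
# results["data"] == b"payload", results["error"] is None
```

Combining success and failure into a single callback means every call site handles both paths in one place, which is much of what makes the Blocks-based route tidier than the delegate one.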

Requirements and Future development

The class is pretty lightweight and simple, but it does have some requirements. It requires iOS 5 or later as a build environment, as it uses Automatic Reference Counting for memory management. Luckily, ARC is backward compatible down to iOS 4 as a deployment target, so with little modification this should run just fine on iOS 4 as well.

The class is very simple and suits 99% of my needs, replacing swaths of boilerplate code, simplifying my callbacks, and allowing me to perform asynchronous requests as elegantly as a synchronous approach, with none of the downsides. There may be some reasons to extend this class on your own, especially if you need to handle authentication, but it's still an excellent drop-in as is. If you do decide to extend the class, please fork it on GitHub and send me a pull request.

How to Push to a Remote Git Repository in Xcode 4

It isn't very easy to spot, as it's not in the Organizer.

After you've committed your changes in Xcode 4, select File -> Source Control -> Push from the menubar and then choose a remote repository (it's probably something like origin). This even works for GitHub repositories.
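Under the hood, this is an ordinary git push. A rough command-line equivalent, with a throwaway local repository standing in for the remote (all paths here are illustrative), looks like this:

```shell
# A bare repository stands in for the remote
remote="$(mktemp -d)"
git init -q --bare "$remote"

# A working repository with a single commit to push
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Example"
echo "hello" > readme.txt
git add readme.txt
git commit -q -m "First commit"

# The command-line equivalent of File -> Source Control -> Push
git remote add origin "$remote"
git push -q origin HEAD
```

With a real GitHub remote, the last two lines are all you need.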

(via @tapi)

Super Smash Land Released. What do you get when you take Super Smash Bros. and demake it with Game Boy-style graphics and effects? Super Smash Land is what you get. It's a free PC download. Worth it for the music and graphics alone.

Form and Format and Software Names

Here at Speed of Light, I like to pretend I'm a writer, and part of this job means I need to stick to a semi-stringent set of formatting rules for how my words are written, and how the copy is presented. While I'm not writing essays for an English teacher, and I'm not nearly as anal about my style as, say, The New York Times, I do try to adhere to the basics I've learned in years of school for grammatical and formatting style.

According to most style guides, the names of works of art such as books, films, songs, etc. must be emphasized in the text, usually by italics or “quotes” (the precise method varies among styles).

So, from now on I'm going to make an effort, when referring to pieces of software, to italicize their names, as I believe they are similar to the other forms of intellectual work described above. I encourage you to do the same whenever possible in your writing.

Rudiments of Computer Science: Learning to Write Code Well

In the internships I've done at larger companies, and in the upper year Computer Science courses I've taken, I've noticed a disturbing trend: nobody can write code well. On one hand, it's a matter of practice and experience, but on the other, more grave hand, it's the effect of shyness and laziness, and an unwillingness to learn beyond what is taught in school or can be found by copying and pasting code from the web. Like any skill, it takes time and effort to develop. And while I certainly wasn't the best coder in my earlier years of school, and while I still have a tremendous amount to learn, there are some rudiments I wish had been taught to me years ago.

The compiler is smarter than you. Make friends with it.

This is not to say the computer is supreme and you, for one, should welcome your pedantic overlords, but you should accept its nature and work in harmony with it. Too often have I seen students slapping their computer, calling it stupid and wrong. You need to accept that if the compiler is giving you errors, no matter how cryptic they may seem, it is correct and you are wrong. There is no shame in this, however, and petulance on your part will only slow you down.

So, when the compiler spews out an error message you don't understand, take this as an opportunity to learn something about how your language works. When you eventually track down the source of the error, you'll learn from the mistake, which will make you a better programmer, and you'll be far less likely to make the same mistake twice.

(Brief interlude about how to read compiler errors: If you're a particularly new programmer, you'll probably write all your code, save, and then try to compile, resulting in what seems like a bazillion compile errors. Firstly, don't do this. Try to compile as often as possible, every time you finish some discrete unit of code, like a method. Secondly, and most importantly, when you get a ton of compile errors, it may seem daunting at first, but it's usually trivial to fix: start fixing the first error, save, re-compile, and keep going. You'll find by fixing the top ones first, most of the other errors disappear. Finally, if your compiler has an option to treat “Warnings as Errors”, you should enable this. Warnings are usually a sign you're doing something which may not work. Programming fundamental: “could happen” should be treated as “will happen”, so you need to fix these warnings, too.)

Once you accept the compiler for what it is, your programming job becomes much simpler. Treat the compiler as a pedagogue and work with it. Remember, it's just a piece of software, it does not care how you feel.

Make effective use of Google and Documentation.

Taming the compiler is a great first step to writing better code, but eventually you'll master the syntax of your language and move into higher-level problems, where you'll need to use much more than just the basic syntax, instead using more features of the language and the libraries it provides. You need to learn this too. This means going beyond lecture slides and your textbook, and reaching up into the documentation provided by the language's vendor.

Most language installations come with some form of local documentation, in the form of Help documentation and usually Library references (I'll assume you're using an Object Oriented language, but the same applies to other language styles just as well). These documents provide references for all classes available to you as part of your language's standard libraries. Each class reference has written explanations of the class, its purpose, and detailed descriptions of how to use each of its methods. If you're lucky, you'll also be provided with programming tutorials for certain problem domains (like “Collections Programming Guide”) and Sample Code for your perusal.

While so much documentation may at first seem overwhelming, in practice it's exactly what you need exactly when you need it. I don't recommend just opening up the help and reading every piece of documentation before you start, but I do recommend using a search engine and researching the classes you're using, so as to make the most of them, and to use them as they were designed. The Documentation is your best way to find out which methods are available for a class, which parameters they take, what they do, and what they return. If this documentation was provided with your language installation, chances are it's also available through Google.

Learning how to use your language libraries is going to make you a much better programmer.

Learn your environment.

I figured this one was a given, but I'm amazed time after time when programmers don't make the most of their compiler, code editor or IDE, and operating system. A startling number of programmers I know fear and loathe the command line, which paralyzes them from using some of the richer and helpful tools available to them.

(Brief interlude wherein I scold nerds for being afraid of their own computers: If you are a Computer Science student, or otherwise a beginner programmer, you are expected to be at a certain level of comfort and proficiency with computers. This means you should know how to use a command line interface. You don't have to be an expert, but you shouldn't avoid it, either. You should be familiar with a Unixy system. OS X is a great one, and Linux works really well too. Your faculty's computer labs probably run Linux, so you have no excuse. But it goes deeper than just learning certain environments. I've encountered a large number of Computer Science students who seem to have a general fear of using computers, to the extent where menus aren't read, programs aren't understood, and Google isn't used. They didn't even try. This is how my non-nerd family treats computers. You need to do better.)

Integrated Development Environments, like Eclipse and Xcode, are massive and overwhelming tools for the beginner. You'll probably start off writing your code in a basic text editor. I recommend moving to an IDE as soon as possible. They're big. They're scary. They're worth the pain. Once you become comfortable with your IDE, you'll be much more productive and learn more quickly, as the IDE will provide things like syntax highlighting, code completion, and integrated graphical debugging. These will help you find problems much more easily than if you were writing in Notepad.

IDEs are not the only way to write code, but they're an important one, and extremely useful. You won't regret learning how to use one.

Use source control.

Source control is the most useful tool for your programming after your environment itself (and if you're working in a good environment, source control is integrated). The sad part is that very few of my peers ever seem to use it, and schools rarely seem to teach it.

Source Control Management allows you to keep your code files in a managed way: as you edit your files, you make checkpoints, or commits, and if you later botch your code, you can revert back to a given checkpoint. It also allows you to create different copies of your code, so you can change two separate pieces in isolation, without having them break each other.

Let's say you've got a project of files, and you're about to work on your next big task. Before you do any more work, you make sure you commit your changes. You can then create a separate branch of code to work on the next big task. As you're working on this task, you realize there was a bug in your previous task, so you can switch branches, fix the previous bug, and then resume working on your current task. When it's done, you merge the branches together, and now your code has both the old fix and the newly completed task. Branches give your code an extra safety net so you can easily separate tasks.
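That workflow maps onto git commands roughly like this (file and branch names are illustrative):

```shell
# A throwaway repository to walk through the branching workflow described above
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Example"
main="$(git symbolic-ref --short HEAD)"   # whatever your default branch is called

echo "version 1" > app.txt
git add app.txt
git commit -q -m "Commit before starting the next big task"

# Branch off to work on the next big task
git checkout -q -b next-big-task
echo "new feature" >> app.txt
git commit -q -am "Start the big task"

# A bug turns up in the previous work: switch back and fix it
git checkout -q "$main"
echo "the fix" > fix.txt
git add fix.txt
git commit -q -m "Fix bug in previous task"

# Resume the task, finish it, then merge both lines of work together
git checkout -q next-big-task
echo "finishing touches" >> app.txt
git commit -q -am "Finish the big task"
git checkout -q "$main"
git merge -q next-big-task -m "Merge completed task"
```

After the merge, the default branch contains both the bug fix and the completed task.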

SCM gives you other benefits, like having a backup of your code (as you can just go back to previous commits in your code if need be), and it allows you to keep your code on other machines. This also means you can collaborate with other coders, too. If you share a repository with other coders, each coder can push and pull commits from branches and repositories. This is great if you're working on group projects.

The real importance of Source Control, even for the solo programmer, is it gives you a safety net around your code. If you mess it up, you can always revert it and fix it.

There are many kinds of SCM tools available, but I'd recommend going with git. It's a little scary to use (but hopefully you're no longer afraid of using the command line, and you're not afraid to seek out answers, either), but it's also one of the most powerful. It's very popular, so there's a plethora of documentation and tutorials available on the web. There's also a little website called GitHub which hosts open source projects contained in git repositories. If you're looking for examples on how to code (or how not to code), learning to read open source software will take you a long way. And if you decide you'd like to practice writing code, contributing to open source software is an excellent exercise.

Learn how to debug.

Once you learn how to write code, read documentation and seek help, use your environment, and keep your code safe with source control, you'll need to learn how to properly debug your code as well.

Bugs are subtle mistakes in your code which can't be found by the compiler. You'll encounter them constantly, though hopefully not the same ones more than once.

Working with my peers, I'm again surprised at how few programmers know how to make the most of the debugging tools at their disposal. The extent of most debugging seems to end at print statements. And while printing and logging information to a console is helpful, if it's your only debugging tool, you're going to quickly find yourself bogged down.

Logging statements gunk up your code. They only reveal to you the state of whichever values you explicitly ask to be logged. They don't allow for conditional output. To really debug your app, you should learn how to use your IDE's graphical debugger and Breakpoints (of course, you could do this all from the command line, but I find graphical debuggers much nicer for visualizing the state of your app). You set breakpoints in your code, typically by clicking somewhere in your editor's gutter, and then Build + Debug your app.

When execution of your program code reaches the breakpoint's line, execution pauses and the debugger takes over. From here, you can now see the values of your objects and variables, which is incredibly helpful. You can also control the flow of your code, by telling the debugger to advance one line, or to jump in or out of a method. These primary functions are going to help you tremendously when fixing problems.

There are other helpful debugging features of breakpoints, like setting conditions for a breakpoint. You can make it so the debugger only kicks in if (x > 4) for example, which is helpful when a bug only happens under some condition.

Write an app.

Finally, and this one is simple: Practice! The best way to become a better programmer is to practice. Doing well on assignments is one thing, but it's not nearly as helpful as building your own app from start to finish. Or joining an Open Source project, finding your way around its codebase, and contributing to it.

These are all things I wish I'd known when I started programming. They are things which I've learned the hard way on my own, through painful mistakes. They are things I wish I could teach to all programming students, and they are things hopefully others can learn from.

Subscribing to "Speed of Light"

There has always been a way to subscribe to this website using its Atom feed, which works just like an RSS feed. You'll see every article as it's published, along with the full text. I hope you'll still visit the website from time to time, as there are other goodies the feed doesn't carry.

I realize RSS is not for everybody, so recently I added an additional way to subscribe, by following the @ospeedoflight account on Twitter. It's not automated, but you should see links to all the content I post here.

Either method of subscribing will help keep you up to date on everything I post here, and hopefully make it easier for you as well.

'Trucks and Motorcycles'. A great piece by Michael Mulvey on PCs as trucks, and Windows 8 as a truck with a moped bolted on the side.

Now I don't disagree with Gruber's core argument, again I disagree on the use of 'compromise'. If Apple's goal is to create the best tablet experience in the world, compromises can't be made, because compromising implies negotiating down from some ideal vision. If desktop-level applications aren't needed or appropriate for a tablet, then not supporting them is not compromising.

Giving a motorcycle two wheels instead of four doesn't mean you're compromising. What you're doing is giving a motorcycle the thing that makes it great.

The One Where I, Jason Brennan, Offer A Hearty Critique Of A Sample Of Block Object Programming Code Written By Ash Furrow. Earlier today, Ash published an article about Object Oriented Programming dogmatism surrounding naming conventions for Class names, and he included some of his code as an argument. I'm not going to discuss his argument but rather critique his coding style.

I'll start with the main nugget of his code, his asynchronous loading method, and then fix it up to how I think it should appear. In the end, I'll show what a cleaner invocation of that method would look like, and why I think it's better.

We'll start with the selector:


At first glance, it looks simple enough for what it's doing. I can infer it takes a priority, which is probably a typedef of an NSInteger, as enum values. The next two parameters are Block object types, and the final parameter is the identifier of the model object requested. For anyone who has used Block objects inline with a method invocation like this, you know how hairy they can look. In this case, we'll have two separate inline Block objects, and finally an identifier lost after them. We can do much better:
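Pieced together from the prose below and the invocation at the end of the post (again, the exact names are reconstructions), my revision reads roughly:

```objc
- (void)fetchModelDetailsWithIdentifier:(NSString *)identifier
                               priority:(ModelRequestPriority)priority
                               callback:(ModelFetchCallback)callback;
```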


From a purely linguistic perspective, the selector name has been simplified to contain a single with (think “Give me a model with this identifier, priority, callback”). This is a minor tweak which just melds better with my personal writing style.

The more important change is the reordering and reduction of the parameters. The most important parameter to the method is the identifier of the model we're trying to fetch. It belongs first. Previously, it was last, and was lost at the end of an invocation after a mess of nested Block objects. The priority parameter remains untouched.

The callback and errorBlock arguments have been merged into a single parameter. Previously, the two Block types looked like this:

typedef void (^ModelFetchCallback)(Model *model);
typedef void (^ErrorBlock)(NSString *userFacingMessage);

As a single type they look like this:

typedef void (^ModelFetchCallback)(Model *model, NSString *userFacingMessage);

Instead of having two, there's just one. Upon execution of the Block object, the consuming API would then check for error (i.e. if (nil != userFacingMessage)), instead of having to write two separate Blocks inline. There is a case for doing it Ash's original way: if you've got to handle errors all over the place, it might make sense to have that in its own Block type, but I find it makes more sense to handle the error as just a secondary case to any success. My way, you'll have one Block type for every kind of asynchronous load, where the other way will have one Block type for every kind of asynchronous load, plus one for the error type.

Looking at the method body, I would leave it mostly the way it is, with a few minor tweaks.

dispatch_queue_t queue = (priority == ModelRequestHighPriority ? highPriorityQueue : lowPriorityQueue);

From my admittedly limited knowledge of the situation, it appears this priority might just as well be a simple BOOL flag, unless there is the possibility of a medium priority (if there is, it's not being used here). If it is a BOOL, I'd call the argument highPriority.

For the rest of the body, all that needs updating is making it correspond to the updated selector signature, particularly with executing the passed in Block object:

if (!model) {
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        fetchCallback(nil, NSLocalizedString(@"Couldn't contact the server :(", @"Error message when no model is returned from API"));
    });
} else {
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        fetchCallback(model, nil);
    });
}

After these tweaks, the method invocation becomes much simpler. The identifier of the object we're fetching is now up-front and no longer buried. Our priority stays the same, and our callback for success and failure are more logically grouped together as one.

[sharedInstance fetchModelDetailsWithIdentifier:identifier
                                       callback:^(Model *model, NSString *userFacingMessage) {
    if (nil != userFacingMessage) {
        NSLog(@"Couldn't load images :( \n %@", userFacingMessage);
        [SVProgressHUD dismissWithError:userFacingMessage];
        return; // bail so we don't need to nest the success case
    }

    // update UI
}];
The thing about coding styles is the same about clothing styles: there are rudiments which work universally, but everyone has their own distinct taste. I think these suggestions help better group the fundamentals of using Block objects, but of course they're open to interpretation and improvement according to one's own taste.

Mike Ash on Objective-C Automatic Reference Counting. Mike Ash:

ARC occupies a middle ground between garbage collection and manual memory management. Like garbage collection, ARC frees the programmer from writing retain/release/autorelease calls. Unlike garbage collection, however, ARC does not deal with retain cycles. Two objects with strong references to each other will never be collected under ARC, even if nothing else refers to them. Because of this, while ARC frees the programmer from dealing with most memory management issues, the programmer still has to avoid or manually break cycles of strong references in the object graph.

Every iOS and Mac Cocoa programmer should read this. I fully expect Apple will require applications be compiled with ARC within a year.

How to Fix "mkmf.rb can't find header files for ruby" on Mac OS X Lion

This morning I found myself with the following infuriating error while trying to install the rmagick ruby gem:

Building native extensions.  This could take a while...
ERROR:  Error installing rmagick:
	ERROR: Failed to build gem native extension.

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb

mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/ruby.h

In fact, I was getting this error for every gem I tried to install which needed to be built “natively”.

Here's how to fix it:

  1. First, make sure you have ImageMagick installed. The easiest way to do this is with Homebrew: brew install imagemagick. (This step is only necessary if you're trying to install the rmagick gem; if you're installing another gem, you can skip it.)

  2. Then, make sure you've got Xcode 4.1 (or higher) installed from the Mac App Store. It's not enough to simply download it; you need to run the installer found in /Applications/

  3. Then do a sudo gem install rmagick (or whichever gem you were trying).

If you still get the error (as did I) then you probably have your system messed up a bit. I think what caused it for me was having both Xcode 4.1 and Xcode 4.2 (beta) installed. To fix this do the following:

  1. Uninstall Xcode 4.2 with sudo $DEV_TOOLS_4_2/Library/uninstall-devtools --all, where $DEV_TOOLS_4_2 is wherever you installed the Xcode 4.2 beta.

  2. Then do the same for Xcode 4.1: sudo /Developer/Library/uninstall-devtools --all. You might want to restart your system now.

  3. Re-install Xcode 4.1 from the App Store installer.

  4. Install the gem again.

That fixed it for me. I haven't tried installing Xcode 4.2 beta again yet, but I'm guessing this was just some messiness caused by having two Xcode installs. When Xcode 4.2 goes GM (hopefully soon) this should no longer be a problem.

President Obama on Steve Jobs. President Obama:

Steve was among the greatest of American innovators - brave enough to think differently, bold enough to believe he could change the world, and talented enough to do it.

Walt Mossberg on Steve Jobs. It's hard to pull out just one quote from this lovely story, but here goes.

For our fifth D conference, both Steve and his longtime rival, the brilliant Bill Gates, surprisingly agreed to a joint appearance, their first extended onstage joint interview ever. But it almost got derailed.

Earlier in the day, before Gates arrived, I did a solo onstage interview with Jobs, and asked him what it was like to be a major Windows developer, since Apple’s iTunes program was by then installed on hundreds of millions of Windows PCs. He quipped: “It’s like giving a glass of ice water to someone in Hell.” When Gates later arrived and heard about the comment, he was, naturally, enraged, because my partner Kara Swisher and I had assured both men that we hoped to keep the joint session on a high plane.

In a pre-interview meeting, Gates said to Jobs “so I guess I’m the representative from Hell.” Jobs merely handed Gates a cold bottle of water he was carrying. The tension was broken, and the interview was a triumph, with both men acting like statesmen. When it was over, the audience rose in a standing ovation, some of them in tears.


Steve Jobs died today.

Originally I had written that as “Steve Jobs passed today”, but I know, as he knew, there is nothing left to pass into. The idea of Steve will live on, as will the impact he has made on everybody's lives.

I'm in a weird mood about his death (which I realize is kind of selfish). Death affects us all differently, but we're united by the mere act of being affected by death in the first place. At first thought, it's a sad thing. Even though I didn't know him, it's sad. But on second thought, it's beautiful in a way.

It's sad because he was so young, so full of life, and it was so swiftly stripped from him. But it's beautiful because now we are given a focal point for his life. Today is a culmination of everything great his life has brought the world. It's a shame there will be no more to it, but through his actions something greater resonates. This is the way to live on after death.

Steve Knew What Mattered Most. Steve Jobs was a human.

“Steve made choices,” Dr. Ornish said. “I once asked him if he was glad that he had kids, and he said, ‘It’s 10,000 times better than anything I’ve ever done.’ ”

How to make Calendars work with iCloud, iCal, and Mac OS X 10.7.2

I had some troubles after the update to Mac OS X 10.7.2 with syncing iCloud Calendars with iCal. The main problem was my iCloud calendars weren't showing up in iCal. Here's how to fix it.

  1. Open iCal, go to iCal -> Preferences -> Accounts, and turn off any other accounts which might be syncing with this Mac (in my case, I was synching with Gmail). This may not be necessary, but I don't want a bunch of different calendars messing things up.

  2. Quit iCal.

  3. Open System Preferences -> Mail, Contacts, and Calendars.

  4. Select your iCloud account.

  5. Uncheck the “Calendars” option for your iCloud account, and read the alert it shows you. Confirm it.

  6. Then re-check the “Calendars” option for your iCloud account. This will throw up a different alert. Read it and confirm again.

  7. Now open iCal again, and you should have your iCloud calendars, and only those calendars.

Keeping iMessage in sync between iPhone and iPad. In summary, make your “Receive At” be your Apple ID email address, on all your devices, and make your “Caller ID” match that email address. When you want someone to iMessage you, try and get them to use that email address. Then it should work.

I have provided my Apple ID that I used above to all my contacts who use iOS5 devices and I have asked them to use this to iMessage me. This is the only way that an iMessage convo initiated by one of your contacts to you will stay in sync across all your iOS5 devices. If they use your cell number, then the iMessage convo will only show up on your iPhone. Even though the Apple ID you used to activate iMessage in the Apple ecosystem links your devices, a reply from your iPhone to an iMessage sent only to your cell phone will not “push” the reply to your non-iPhone device. Long story short, give all your iOS5 contacts the e-mail address you entered above in the “Receive At” setting and all your convos will stay in sync.

You can iMessage me at my email address.

On "Shit Work". A great post from Zach Holman about mind-numbing, minutia-calibre work thrust on users, and why nobody uses the features involved:

Take a look at Twitter Lists. The idea behind Twitter Lists was that users would carefully cultivate lists on Twitter of different accounts they’re following (or not following). These could be divided into lists like Family, Friends, Coworkers, People I Find Mildly Attractive, People To Murder, People I Find Mildly Attractive And Want To Murder, and so on.

The problem is that, anecdotally, no one seems to use Lists. Twitter is filled with users who have carefully made a few lists, and then promptly forgot about them after they realized their clients don’t make it as easy to read List tweets as it is to read tweets from people you follow.

This is why Google+ is too much work.

This is why Facebook is too much work.

This is why arranging files into folders is too much work.

This is why moving mail into labels in GMail is too much work.

This is why keeping your inbox empty is too much work.

This is why nobody does those things.

Changing the World. This week, my friend Ash Furrow posted the article to which I'm linking, talking about changing the world:

I wrote a Solar System simulation as a class project and later put it up on the App Store. Now, there are elementary schools in Australia using it to teach kids about our Solar System. […]

Maybe only in small ways, I feel like I'm changing the world, but that feeling is why I get up in the morning. I feel really lucky that my profession lets me do the kind of really cool things I get to do.

Ash refers to a talk I gave at UNB in the Fall of 2009, where I ended with a plea to my fellow Computer Science students:

If you're just in this for the money, get out, because you're going to ruin it for everyone else. We're here to change the world.

The idea is that you should be trying to make a contribution to the world and make it a better place. Money should be a means to this end.

Changing the world can sound like an onerous task, but it shouldn't be interpreted as such. Not everyone is going to lead a revolution or enlighten the masses, but if you can at least alleviate a niche, then you've made the world a better place. You've changed the world, no matter how small that world may be.

There are times in my life when I question what I am doing, when I question the impact of the things I make or the words I write and I doubt if any of them will ever change the world. But judging from Ash's piece, I feel as though I have.

Mobile Metamorphosis: The Fall 2009 UNB Talk. In 2009, I gave a talk to a handful of people at UNB's Computer Science department, focusing on my iPhone application “Keener”, and mobile software in general. Since I mentioned it previously, I thought it deserved a link of its own.

Looking back, I'm proud of the slides and I remember the talk pretty well. The first two acts are about my application “Keener”, while the second two acts focus on mobile development and how I thought it would enable our generation to change the world in new ways. And it looks like I was right.

Finally, I want to single out these two slides, which have since become part of my Maxims of Software Development:

  1. User’s focus is sacred.
  2. User’s intention is sacred.

"Pub Rules" by Brent Simmons. Brent Simmons has been on a roll lately about producing a more readable Web, and here he's providing guidelines for the ideal readable website. These agree perfectly with what I had in mind when I designed Speed of Light.

I could pull out some examples, but then I'd ruin Brent's post. Read the whole thing, because I used every single one of these principles back when I designed this site. I'm glad Brent thinks likewise.

"Don't Be A Free User". maciej on paying for services you'd like to see stick around:

These projects are all very different, but the dynamic is the same. Someone builds a cool, free product, it gets popular, and that popularity attracts a buyer. The new owner shuts the product down and the founders issue a glowing press release about how excited they are about synergies going forward. They are never heard from again.

I've been thinking about this a lot lately.

"Ten Things Everyone Should Know About Time". Incredible and thought-provoking points on what is commonly believed to be our current understanding of time.

3. Everyone experiences time differently. This is true at the level of both physics and biology. Within physics, we used to have Sir Isaac Newton’s view of time, which was universal and shared by everyone. But then Einstein came along and explained that how much time elapses for a person depends on how they travel through space (especially near the speed of light) as well as the gravitational field (especially if it’s near a black hole). From a biological or psychological perspective, the time measured by atomic clocks isn’t as important as the time measured by our internal rhythms and the accumulation of memories. That happens differently depending on who we are and what we are experiencing; there’s a real sense in which time moves more quickly when we’re older.

(via kottke)

Brent Simmons "On the Tab Labels in the New Twitter App for iPhone". I'm glad I'm not the only one who loathes the vapid loaded marketing words:

Connect and Discover are the ones I like least, since they sound as if they weren’t decided upon by designers but by a murder of marketing executives perched around a big table. Both are too-abstract Latin words with the blood sucked out of them.

I would have gone with Mentions and Find. Those words may not encompass everything you can do on those tabs, but they’re close enough, and they mean something — while connect and discover mean almost nothing. Those require translation into the concrete. (Why Find instead of Search? Because Find implies success, while Search is an action that may fail.)

Best way to measure a mass of marketing executives, if you ask me.

"Why apps are not the future". Dave Winer:

Visualize each of the apps they want you to use on your iPad or iPhone as a silo. A tall vertical building. It might feel very large on the inside, but nothing goes in or out that isn't well-controlled by the people who created the app. That sucks!

The great thing about the web is linking. I don't care how ugly it looks and how pretty your app is, if I can't link in and out of your world, it's not even close to a replacement for the web. It would be as silly as saying that you don't need oceans because you have a bathtub. How nice your bathtub is. Try building a continent around it if you want to get my point.

Why is it that for apps to “win”, the web has to “lose”? I see no problem with both apps and the web coexisting. They solve different problems in different ways. They're even quite compatible (disable the internet on your phone and see how many of your apps are still usable).

It's all software and it's all here to stay.

On iOS and Android

Android is like a supermarket: limitless choices and combinations, but you've got to cook it yourself.

iOS is like a restaurant: most of the choices are made for you, but the meal is effortless.

"Redesigning Mac App Updates". Lennart Ziburski on redesigning the update workflow of a Mac application:

When you open up an app that has an update available, you just get a tiny notification at the top right corner instead of a big window you need to dismiss. You can act on this notification whenever you want – and it doesn’t get in the way when you are in a hurry.

When you click on this notification, a popover appears with additional information like the changes that were made in the update or the version number.

I'm still torn about updating software, but this makes things lovelier.

"The E-Reader, as we know it, is doomed". Matt Alexander, guest posting on The Loop about the pending demise of today's e-readers:

The most obvious answer lies in the e-ink display used in all e-readers. For me, I see the e-ink display as an impressive technological leap. Just a few years ago, the only interface comparable to paper was paper itself. Now we have a paper-like technology for displaying words and images. Having said that, while it’s a great leap and all, it is clear that the technology is inherently stunted.

Especially if the iPad doubles its pixel density.

Distance: a new Quarterly Pocket Book about Design. From Nick Disabato:

Distance is a pocket book with three essays on design every three months.

We believe that the best ideas come from people who work in the trenches every day, who design because they can’t help it. Practitioners are able to write serious criticism that can change our field for the better.

Distance is our attempt at bringing good people together to write long, kickass essays. We want to preserve and share great ideas. We want to inspire the crap out of you.

This is incredibly exciting and I can't wait to see how it turns out. You can support it on Kickstarter.

Learning to Code and Inventing the World. Daniel Jalkut on Code Year (emphasis mine):

What it boils down to is that programming is both incredibly simple and impossibly hard, like so many important things in life.

There was a time when nobody knew how to write literary prose. The geniuses who invented it shared their special tool with a few friends, and they relished in their private, elite communications.

I don't think this was really the point of Daniel's article, but the emphasized line stuck out at me. The implication being, there was also a time when nobody knew how to write programs.

The implication being, somebody had to make it all up. We went from not having programming to having programming. All of the techniques and languages and patterns and apps and APIs never existed even 60 years ago. It was all invented. And likewise, the things my contemporaries and I are making today might get looked upon the same way down the road.

We're blazing our own trails here.

The Kind of Motivation I Need for School. In case school is burning you out, read the linked page. A taste:

I suffered through half a semester of differential equations before my pride let me go to R. for help. And sure enough, he took my textbook for a night to review the material (he couldn't remember it all from third grade), and then he walked me through my difficulties and coached me. I ended up pulling a B+ at the end of a semester and avoiding that train wreck. The thing is, nothing he taught me involved raw brainpower. The more I learned the more I realized that the bulk of his intelligence and his performance just came from study and practice, and that he had amassed a large artillery of intellectual and mathematical tools that he had learned and trained to call upon. He showed me some of those tools, but what I really ended up learning was how to go about finding, building, and refining my own set of cognitive tools. I admired R., and I looked up to him, and while I doubt I will ever compete with his genius, I recognize that it's because of a relative lack of my conviction and an excess of his, not some accident of genetics.

Mike Ash on Objective-C Blocks vs C++ Lambdas. This came up at work today and this is a nice overview of both.

Blocks are perhaps the most significant new language feature introduced by Apple in years, and I've written a lot about them before. The new C++ standard, C++0x, introduces lambdas, a similar feature. Today, I want to discuss the two features and how they are alike and how they differ, a topic suggested by David Dunham.

Speed of Light Housekeeping January 2012

I don't really like to do “Year-End” review articles or anything like that, but I've made a few little tweaks to the site and I thought now would be as good a time as any to review them, along with the status of the website. You are, after all, my dear readers and if anyone should care to know, I expect it would be you.

First of all, 2011 proved to be a great year for the website. I relaunched it in February with a great new design, and I've had nothing but positive feedback ever since. I tweaked the style of the site slightly in the Fall, and have made it fully responsive to different browser sizes. The site should look great no matter how big or small your browser.

Since February's relaunch, the site has had just shy of 30 000 page views! I can't express how pleased that makes me. The site also now has over 60 feed subscribers in Google Reader alone (I can't track feed subscribers outside of Reader, unfortunately, but if you subscribe this way, drop me a line). It's really nice to have people who like what you do. I feel very encouraged by each and every one of you.

Finally, today I've made a few little tweaks to the site, which again, you probably won't notice. I've added a visible date to all the permalink posts so you can always tell when an article or linked item was posted (I've actually always included the <time> in the markup, but now it's also visible to the reader). Additionally, I've changed the style of my articles slightly so they can be more easily told apart from link posts.

Thanks again for reading!


Website Comments. The latest craze in weblogs seems to be disabling comments, and now even my dear friend Ash is following suit. This is a great thing for the web, although I’m a little dismayed it took a trend for this to be on people’s minds. The Ashes ponder:

I was discussing with my wife how, though Matt Gemmell only recently disabled comments, my friend and fellow tech blogger Jason Brennan has never had comments enabled in the first place. I insinuated to her that Jason was just lazy, since he wrote his own blogging platform, and she replied with “well, yeah, you wouldn’t either, would you?”

That is part of it. Writing a website without comments is significantly simpler than writing one with. But it is also the case that a website without comments is what I've wanted since Day One. They were never on the table. This wasn’t even an inspiration from The Great One (or The Greatest One).

My goal has always been to foster and participate in a community of websites linking back and forth. A web, if you will. It’s always been the best way to get a real discussion going. Although sometimes I feel like I’m the only one talking.

"Understanding JavaScript OOP". A hugely comprehensive guide explaining JavaScript's object model. I haven't read it all, but I'm linking to it for future reference. Looks really handy.
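If you want the one-minute version before diving in: JavaScript's object model is built on prototype chains rather than classes. A minimal sketch (the Animal/Dog names are mine, not the guide's):

```javascript
// Constructor function: `new` creates an object whose internal
// prototype points at Animal.prototype.
function Animal(name) {
  this.name = name;
}

// Methods live on the prototype and are shared by every instance.
Animal.prototype.speak = function () {
  return this.name + " makes a sound";
};

// Inheritance: Dog.prototype is an object whose own prototype is
// Animal.prototype, forming a lookup chain.
function Dog(name) {
  Animal.call(this, name); // run the "superclass" constructor
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

// Overriding: found on Dog.prototype before Animal.prototype.
Dog.prototype.speak = function () {
  return this.name + " barks";
};

var rex = new Dog("Rex");
console.log(rex.speak());            // "Rex barks"
console.log(rex instanceof Animal);  // true
```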

Pro Tip for Office Documents on the Mac

If you're like me, your professors don't know how to distribute PDFs, only Word Documents, which are a pain in the ass on the Mac. By default, they'll open in Pages, which always pops up a “Review conversion warnings” sheet (ten points to anyone who knows how to disable that!).

Instead, you can just make the document files open in Preview.app:

  1. Right click on the file in the Finder and pick “Get Info”.
  2. Click the “Open With” drop-down and pick “Preview”.
  3. Click the “Change All” button below.
  4. You'll have to repeat this with both *.doc and *.docx files.

Now your Word documents open in Preview. Nice.

Reading Time. Ash Furrow contemplates the consequences of the abundance of reading material we find ourselves swimming in on the internet:

The Internet has made me really bad at reading. There's always such an abundance of things I could read that I need to selectively filter what I choose to (because my time, like everyone's, is finite and valuable). The unfortunate reality is that I find myself reading more articles, but only skimming them.

Those of us who spend grand periods of time on the internet have this common problem. We've got things like Hacker News and reddit, and Twitter, and RSS feeds. We've got a constant stream of content. And there just isn't enough time for it all.

So what do we do? The best way I've found to solve it is just a complete dedication to listening to the right voices and ignoring everything else. It takes practice. It takes delegating the filtering to those you trust most. That's part of the reason why I link to things on this website. I want to help readers like me to find interesting things. I've already read them and I want you to read them too.

I've had a half-finished article sitting in my drafts for nearly a year now on this topic. Maybe I'll dust it off soon.

Notch on Coding Skill. Minecraft creator Markus “Notch” Persson on his own coding skill:

I used to think I was an awesome programmer. One of the best. After I made a game in the first programming lesson in school, I got told to don’t bother showing up for the rest. I was the one who taught all my friends what big O notation is and how it’s useful, or why hashmaps can have an effective constant speed if used right.

When someone told me I was a bad programmer, I got upset. My identity was based on being The Best Programmer, and being accused of not being one was a huge insult. Of COURSE I wrote bad code sometimes, but that was just sloppyness [sic] or part of some grand scheme, or some other weak excuse.

Not that I've ever really considered myself an excellent programmer, but it's always a good idea to take stock of your skill. No matter how great you are, you can always be better. Balance, as usual, is the key.

Controlling iPad Games with a NES Controller. It's pretty simple and elegant how it works. Basically any iCade-compatible game can work.

iCade sends key strokes when a button is pressed and once again when it’s released. The keys are documented for developers. I launched iMAME again with the USB keyboard plugged in to confirm and as expected the keys worked to control the games.

The next part was pretty straight forward. Make a USB keyboard with an Arduino (see previous post on how this was done) but instead of an actual keyboard matrix, use a game pad. I chose to use my trusty original NES game pad since they’re so easy to interface with.

Discerning Good From Great. A great piece by Shawn Blanc about how to decide which ideas to pursue. I've been thinking about this lately, and this is the part which really spoke to me:

They say good is the enemy of great, and I agree. Some ideas, as good as they are, should be left alone so that when a great idea comes along there is a place for it. Discerning the difference between a good idea and a great one takes practice and the support of trusted friends and advisors.

Designing an App Means Knowing How It Will Work

Ash Furrow recently wrote an article about “How to Design iOS Apps”, wherein he discusses the difference between using Apple's stock UI components or building your own. At least, on the surface, that's what the article seems to be about. But after actually reading the article, I'm left with the feeling it was more about “How to make your iOS Apps Look”. Consider the following:

[Adhering to Apple's Look-and-Feel] is the best option for most people. Apple provides amazing libraries of user interface elements that can be combined to make great apps. This doesn't mean that you can't be innovative and are forced to make cookie-cutter apps; only that you should focus on what the user is familiar with and expects.

This option has many, many advantages. The user will feel at home and know instantly how to use your app. If you're a developer, you're probably bad at understanding the user's expectations, so using UIKit for everything means Apple has done all the user interface design for you (hooray!). Using UIKit is always way easier, so you can spend you're time doing things that make your app functional and unique.

(Emphasis mine. Keep that part in mind.) The design of an application is really a focus on how it's meant to be used, and if you, a developer, are bad at understanding the user's expectations with regard to how the application will be used, you aren't doing your job right.

Shifting gears, Ash moves on to the other option of “Roll Your Own”.

This alternative option is best suited for development teams with graphic designer talent. (No, developers who have a pirated version of photoshop and get confused about how the Pen tool works do not count as graphic designers.) This kind of app typically involves making something completely new and innovative or copying existing, but non-standard, interfaces.

Ignoring the fact that knowing “Photoshop” does not actually make you a graphic designer any more than knowing “hammer” makes you a carpenter, the visual design of an application is not the same as how the application is actually meant to be interacted with. Great graphic design is supposed to communicate how the application is to be used, but it does not define the usability itself. But then:

It should involve designers and developers working together to create an innovative and consistent user experience. Using open-source projects should be a means to an end; if you start with some neat menu demo project from github as the basis for your app, it's probably going to suck. Instead, start with an idea of what you want and leverage existing solutions to achieve this goal.

It should involve those same developers who are “probably bad at understanding the user's expectations”? And you shouldn't use code you find on GitHub, but you should post it there so you'll become internet famous?

The rest of the article deals with the implementation of the design by you, the programmer, and is for the most part accurate, but still there are a few bits which detract from it.

A Lightning Round of minor inaccuracies:

Wherever possible, avoid using transparent .png's to achieve some effect. The system can hardware accelerate your app's graphics a lot easier if they're using a CAGradientLayer and not some UIImageView instance.

Actually, according to some of the WWDC videos from 2011, in many benchmarks, drawing with UIImageView ended up being the fastest and most efficient way to draw graphics to the screen. More so than using a CALayer itself.

Under no circumstances should you subclass UINavigationController. Ever.

This is in the class reference for UINavigationController and has been since iPhone OS 2.0. If developers can't read the docs, then all hope is lost.

Developing a usable interface is one of the most important problems we developers face, and it involves so much more than just our coding know-how. It involves input from developers and graphic designers and users too. The design of an application encompasses everything about how it is used, and it can't just be slathered on in the last steps. It is not a coat of paint.

And even though Apple provides the excellent UIKit for us to make use of, there's no way one framework could provide the interface paradigms for all uses to come. User interaction design is not a solved problem (it never will be); it's a constantly evolving target, and we app makers are here to always be advancing the state of the art.

Everything is a Remix. This came up in conversation tonight, but the gist of it is that no piece of our culture is entirely original, but that doesn't preclude it from being new.

This is an excellent video series demonstrating those principles in our media.

Apple, America and a Squeezed Middle Class. The New York Times on the trend of large electronics companies and how outsourcing of manufacturing is eroding America's middle class.

Jobs and Obama:

But as Steven P. Jobs of Apple spoke, President Obama interrupted with an inquiry of his own: what would it take to make iPhones in the United States?

Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.

Why can’t that work come home? Mr. Obama asked.

Mr. Jobs’s reply was unambiguous. “Those jobs aren’t coming back,” he said, according to another dinner guest.

There seems to be a general sense of entitlement, that these sorts of jobs are owed to the American population, but I don't think they really are. This erosion is emblematic of the erosion of our international borders in a sense. The whole world is up for grabs and the role of national boundaries is diminishing.

“We sell iPhones in over a hundred countries,” a current Apple executive said. “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”

Technology companies, like Apple, are getting this.

Gum: A Better Command-Line Interface for git. Git is a powerful tool, but its command-line interface is downright hostile. Gum is a design for a better one.

The command git reset is a good example. I use git reset to undo things. But, its called “reset” and has a bunch of strange parameters which have to do with making writes to the git index. In order to undo changes properly, the reset command requires the user to have a clear understanding of the guts of the git index. This should not be the case for a common command, like undo.

Why not replace things like:

  git reset HEAD -- file

with:

  git unstage

Don't miss the hurd of greybeards defending git's current interface on Hacker News.
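Incidentally, you don't have to wait for Gum to get that particular command: git's built-in alias mechanism can fake it today. A sketch, using git's documented alias support:

```shell
# Define an "unstage" alias (this writes to ~/.gitconfig).
git config --global alias.unstage 'reset HEAD --'

# Demo in a throwaway repository:
dir=$(mktemp -d) && cd "$dir"
git init -q
git -c user.email=you@example.com -c user.name=You commit -q --allow-empty -m init

echo hello > file.txt
git add file.txt
git status --porcelain    # "A  file.txt" -- staged

git unstage file.txt
git status --porcelain    # "?? file.txt" -- back to untracked, nothing staged
```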

Sequel to the "Vader Kid" VW Ad From Last Year: The Dog Strikes Back. Remember this VW ad I linked to earlier in the year? Well there's a semi-related sequel to it. Be sure to watch it all the way through.

(Yes, I'm linking to a Mashable page instead of directly to YouTube because the new YouTube design makes every browser stall even if you don't have Flash and I don't want you to go through that. For some reason, the excellent YouTube5 extension no longer seems to work on YouTube itself, only embeds).

Too Young to Fail. An interesting look at some funded youngsters trying to change things.

Deming, only 17, had just been chosen by Silicon Valley billionaire Peter Thiel for a high-profile experiment: Put $100,000 apiece in the hands of 24 entrepreneurial teenagers and give them free rein to pursue innovative ideas.

The condition? Deming had to leave her studies and classmates, and vow to stay out of college during the two-year fellowship.

Thiel, who is PayPal's co-founder and holder of two Stanford University degrees, says higher education today is in a “crazy bubble” that, like a bad mortgage, saddles students with tuition debt often for little in return. A vocal libertarian, Thiel, 44, takes the view that a college degree can be harmful to innovators because of the conservative, career-driven mindset it imparts.

“Youth have just as much intelligence and talent as older people,” says James O'Neill, head of the Thiel Foundation and managing director at Thiel's investment fund, Clarium Capital. “They also haven't been beaten down into submission by operating within an institution for a long time.”

Interface Language: Tapping, Clicking and Buttons

One thing I've noticed in recent years is a handful of terms being used to describe interaction with a button in a graphical user interface.

Historically, we've used the term “Click”, as in “the user clicks the button”. This has made sense, because the traditional method of interacting with a button on screen has been with a mouse, and the main interaction of a mouse is to press it until its own button clicks, which also emits a clicking sound. As trackpads in notebook computers have become more prevalent, this clicking metaphor has broken down slightly.

Continuing the erosion of the button-click metaphor, direct manipulation touch screen devices (like iPhones) remove any physical mechanism entirely, leaving only a slab of glass between the user and the interface. Instead of clicking a mouse, the user taps their finger directly on the device and no sound is emitted. These are often referred to as “Taps”.

So now we have two competing terms for essentially the same action: Clicking and Tapping. Unfortunately, neither of these terms is fully applicable 100 percent of the time, and both suffer from the same problem: they describe more of the mechanism of the interaction than the interaction itself. When I say I'm clicking a button with a mouse, what I'm really saying is I'm clicking a mouse which invokes the button.

Instead, I've been using the term “Press” whenever I have to discuss an interaction with a button in a user interface. Not only does “Press” apply equally well whether a mouse, trackpad, or touch screen is used, it also relates better to what I'm doing with the interface. Pressing a button is so much more direct than clicking a mouse, which invokes the button.

The language we use to describe our interfaces can be subtle, but it's important to be precise. It can mean the difference between an indirect interaction with hardware and a direct interaction with fewer mental barriers.

Nick Bradbury: Digital Ownership and the Path to Privacy. Nick Bradbury:

Those questions reflect our inability to value non-physical things. We as customers look at digital goods as less worthy of monetary value, and companies look at customer data as less worthy of privacy. In both cases, we de-value things that we can't touch.

Reflect on this next time you consider a $0.99 piece of software “too expensive”.

Beck recording with Dwight Yoakam. Two artists I totally love are working together. This could be better than peanut butter and chocolate.

Beck is reportedly working on a comeback album by country singer Dwight Yoakam. Yoakam, who sold more than 25m records in the 80s and 90s, is plotting a major-label return with the help of a couple of indie rockers.

Motorola's Upgrade Plans for Android 4.0. Dante D'Orazio on The Verge:

Unfortunately, it looks like there's going to be quite a wait ahead, as most of the company's phones still have a long way to go before they get updated: only the Xoom Family Edition, Atrix 2, Atrix 4G, Photon 4G, Xoom 2, Xoom 2 Media Edition, and the international variants of the RAZR, Xyboard 8.2 and Xyboard 10.1 have estimated release dates, and all of the updates are slated for Q3 of this year (other than the Xoom Family Edition and the RAZR, which are given a Q2 estimate).

This is pretty common in the Android world, and it makes it seem like there's something inherently wrong with Android that makes supporting new software versions a nightmare for device manufacturers.

But I'd be willing to bet there's nothing wrong with Android at all. I'd be willing to bet the device manufacturers do a sloppy job because they don't care. The general attitude for Android devices seems to be “Ship and Dump”, because they'd much rather their customers just buy a new device.

I don't care what software platform you pick, that's a crappy way to make products.

Where Have Apple's Headlines Gone?. Ken Segall on the lack of copy in Apple's billboard ads:

Driving around LA with colleagues recently, we were greeted by iPad billboards just about everywhere we went. All shared the same clever headline: “iPad 2.”

That got my merry band wondering: when was the last time an Apple billboard or poster actually had a headline. (At least a smart headline in the Apple tradition.)

You Are Not Ruthless Enough. Fruit-company employee Chris Parker on having discipline while writing your code:

Every time you throw in something that seems to work, find out why it works. If you don’t understand why it works then the code is just magic and you’re letting yourself off the hook. That code will become enshrined in your version control system for later generations (read: you, three months from now) to discover, wonder at, puzzle over, and leave it because no one understands what it does.

Being ruthless to your objects means having clear bright lines separating responsibilities in your code. Where does that data download activity belong? It’s definitely not a view. It’s not the model but it gets things for the model. It’s sort of a controller, mediating between the network object on the far side and the model on the near side, but it’s not part of a controller. It’s its own object and it deserves its own interface. Put it in its own box. Keep it there. Don’t let it out except through the clean interfaces you write. Don’t let anything else in either.

As a frameworks developer I tend to look at this problem from the frameworks perspective. Part of being ruthless to my objects is writing real API every time I encounter one of these problems. Objects become testable in isolation. Complex behaviors arise from combining simpler objects in controlled ways through API. The key letter in the acronym is the I - the interface. It’s the clear bright line between the classes. It’s how you keep objects in their respective boxes.

Terrific advice. Don't let yourself slip. Always be designing as you're writing.

This One's For Your Mother. Episode Nine of Canada's favourite podcast, dashdashForce, featuring guest appearances by yours truly and Paddy O'Brien, is available now for download.

In this episode we discuss Mountain Lion, Great Apes, and your mother. And iOS, Mac, and data-driven development.

TV Is Broken. Patrick and Beatrix Rhone on everything that is wrong about TV:

I turn to that, Beatrix approves, and we watch. Then, a few minutes later, a commercial comes on. The volume difference is jarring to say the least. I would safely guess it is fifty percent louder than the show. I hurriedly reach for the remote and turn it down…

“Why did you turn the movie off, Daddy?”, Beatrix worriedly asks, as if she has done something wrong and is being punished by having her entertainment interrupted. She thinks that’s what I was doing by rushing for the remote.

“I didn’t turn it off, honey. This is just a commercial. I was turning the volume down because it was so loud. Shrek will come back on in a few minutes” I say.

“Did it break?”, she asks. It does sometimes happen at home that Flash or Silverlight implode, interrupt her show, and I have to fix it.

“No. It’s just a commercial.”

“What’s a commercial?”, she asks.

It's not just children who don't get this. It's people trying to think rationally. TV, as it exists today, is so immensely antiquated. It feels like I'm turning on the 80s every time I'm around one of these noisy boxes.

Ditching Google: GMail to FastMail. Ash is trying to wrest himself from Google's grasp, and he's starting with Gmail:

I'm beginning a switch from GMail to my own hosted IMAP service at FastMail.fm. This represents a larger shift in my online presence that began over a year ago. It started with a move away from Blogger to my own hosted WordPress blog, and is representative of my desire to have more control over my data. “Control” isn't the right word - I am becoming more selective over who I choose to allow access to my data.

For anyone else considering a Google Divorce, be sure to also check out Gabe's Journey on MacDrifter. He's posted articles describing the replacements he's found for just about every major Google Service.

On the Implausibility of the Death Star's Trash Compactor. Joshua Tyree:

  1. Ignoring the question of how Princess Leia could possibly know where the trash compactor is, or that the vent she blasts open leads to a good hiding place for the rescue crew, why are there vents leading down there at all? Would not vents leading into any garbage-disposal system allow the fetid smell of rotting garbage, spores, molds, etc., to seep up into the rest of the Death Star? Would not it have been more prudent for the designers of the Death Star to opt for a closed system, like a septic tank?

I think I officially have more articles about Star Wars than I do about Apple.

Young Women Often Trendsetters in Vocal Patterns. Douglas Quenqua:

Whether it be uptalk (pronouncing statements as if they were questions? Like this?), creating slang words like “bitchin’ ” and “ridic,” or the incessant use of “like” as a conversation filler, vocal trends associated with young women are often seen as markers of immaturity or even stupidity.


But linguists — many of whom once promoted theories consistent with that attitude — now say such thinking is outmoded. Girls and women in their teens and 20s deserve credit for pioneering vocal trends and popular slang, they say, adding that young women use these embellishments in much more sophisticated ways than people tend to realize.

kottke.org Redesign, 2012 Version. Beautiful.

Simplicity. kottke.org has always been relatively spare, but this time around I left in only what was necessary. Posts have a title, a publish date, text, and some sharing buttons (more on those in a bit). Tags got pushed to the individual archive page and posts are uncredited (just like the Economist!). In the sidebar that appears on every page, there are three navigation links (home, about, and archives), other ways to follow the site (Twitter, Facebook, etc.), and an ad and job board posting, to pay the bills. There isn't even really a title on the page…that's what the <title> is for, right? Gone also is the blue border, which I liked but was always a bit of a pain in the ass.

GitHub Intrusion Reveals Larger Rails Issue. In the last 24 hours there's been a bit of a shitstorm surrounding a trivially executed intrusion on GitHub which is really the result of some [bad] defaults for Rails projects. Chris Acky:

Lets cut to the chase [ironic snip]

  • Every GitHub Repository could be accessed by anyone as if they had full administrator privileges.

  • This means that anyone could commit to master.

  • This means that anyone could reopen and close issues in issue tracker.

  • Even the entire history of a project could be wiped out. Gone forever.

And here's how it works:

This was possible because of the way that rails handles mass assignment of attributes. You can read a summary here

The simple way of explaining it, is that if developers don't protect against mass assignment, it means that a malicious user can set any value in your models. There are a few solutions that are being thrown around such as using whitelists/blacklists to prevent accessible attributes, but this solution would not be ideal.

All the attacker has to do is add an extra user_id form value in a POST, which can be done using the Web Inspector (!!) with the ID of another user. Suddenly, they now have access to that object.
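The Rails specifics aside, the shape of the hole is easy to sketch. Here is a minimal, framework-agnostic simulation in Python (every name in it is hypothetical, not GitHub's or Rails' actual code): an update routine that copies every submitted field onto the model lets the client overwrite fields it should never touch, while a whitelist of accessible attributes closes the hole.

```python
# Sketch of the mass-assignment hole. All names (PublicKey, params, the
# ALLOWED whitelist) are invented for this illustration.

class PublicKey:
    def __init__(self, user_id, title="", key=""):
        self.user_id = user_id   # record owner; must NOT be client-settable
        self.title = title
        self.key = key

def update_unsafe(model, params):
    """Unfiltered mass assignment: every form field lands on the model."""
    for name, value in params.items():
        setattr(model, name, value)

ALLOWED = {"title", "key"}   # analogous to a Rails attr_accessible whitelist

def update_safe(model, params):
    """Only whitelisted attributes are assigned; user_id is ignored."""
    for name, value in params.items():
        if name in ALLOWED:
            setattr(model, name, value)

# The attack: one extra form field injected via the Web Inspector.
victim = PublicKey(user_id=42)
evil_params = {"title": "pwned", "key": "AAAA", "user_id": 1337}

update_unsafe(victim, evil_params)
print(victim.user_id)   # 1337: the attacker now owns this record

victim = PublicKey(user_id=42)
update_safe(victim, evil_params)
print(victim.user_id)   # 42: ownership preserved
```

The whitelist version is the "not ideal" mitigation the quote mentions: it works, but it only works if every developer remembers to apply it to every model, which is exactly why defaults matter.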

The “attacker” in question, homakov, brought this issue to the attention of the Rails team, who'd “already heard of this issue” and dismissed it.

Ignoring how homakov abused this exploit for the sake of raising awareness about it, I think the real issue here is how the Rails team dismissed it. They are literally sending the following message:

Rails is not in charge, it is your responsibility to secure your application. It is your responsibility to avoid XSS, to ensure that the user is editing a resource that belongs to him, etc.

Of course it's ultimately the developer's responsibility to secure their application, but Rails has always sent the message of “Let the framework handle it for you”. And more importantly, how is it we're living in the year 2012 and “Secure by default” is still not true? Why aren't our web applications (and more importantly, their frameworks) locked down as much as possible from the start?

On Linked Lists. Stephen Hackett on Daring Fireball style Linked Lists:

Currently, everything on 512 Pixels would be considered a “post” on a DF-style blog. That is, that the headline goes to the permalink for the post itself, not to the linked content.

The most common criticism of this set up is that it seems like I’m double-dipping for page views. (Granted, I’ve gotten a whole two emails about this in 4 years.)

I don’t see this as a viable argument. If you want to go straight to another website that I’ve linked to, doing so is as simple as clicking the link in your RSS reader of choice. No one ever has to view my actual site to get to content I’m linking to. Since I offer all of my content as full-length posts in RSS, no one is getting left out.

The “Linked List” style is the exact same style my website has used since the beginning: most things I post here are links to interesting things I find on the web, and the title for each post is actually a direct link to the page in question, instead of a link to my post about the link. This way, the links are always in a consistent place and my readers never have to hunt around in the body text. Simple.

Stephen's website follows a more classic style of having the linked article referenced in his post, with his headline linking to his post about the link. It's a subtle difference, and he recently polled his readers about switching styles. With the classic style, the website link is somewhere in the body text, but it's in a different place every time. For the record, I don't think Stephen is double dipping.

Even though I prefer the Linked List style, and actually find myself cursing at 512 Pixels for his choice of link style from time to time, I'm glad Stephen is sticking to his guns. It's his website and it's his content and he's sticking with what he believes is best. I think it's admirable to do so in spite of what a survey might tell you is best.

Jeff Bezos Discovers Apollo 11 Rockets at the Bottom of the Ocean. Bezos:

Millions of people were inspired by the Apollo Program. I was five years old when I watched Apollo 11 unfold on television, and without any doubt it was a big contributor to my passions for science, engineering, and exploration. A year or so ago, I started to wonder, with the right team of undersea pros, could we find and potentially recover the F-1 engines that started mankind's mission to the moon?

It's amazing and fantastic when the rich use their wealth for discovery and curiosity.

How one man escaped from a North Korean prison camp. Blaine Harden, in a horrific story of a man born in a North Korean prison camp and his escape:

In Camp 14, a prison for the political enemies of North Korea, assemblies of more than two inmates were forbidden, except for executions. Everyone had to attend them.

The South Korean government estimates there are about 154,000 prisoners in North Korea's labour camps, while the US state department puts the number as high as 200,000. The biggest is 31 miles long and 25 miles wide, an area larger than the city of Los Angeles. Numbers 15 and 18 have re-education zones where detainees receive remedial instruction in the teachings of Kim Jong-il and Kim Il-sung, and are sometimes released. The remaining camps are “complete control districts” where “irredeemables” are worked to death.

This story is not for the faint of heart.

It's sometimes embarrassing to be a human, to know this sort of thing still exists in our world, that left unchecked, this is what humans do to each other. I think we're very much still in the Dark Ages.

We Can't Prove When Super Mario Bros. Came Out. Great journalism by Frank Cifaldi:

Nintendo has an internal launch date for both the NES and Super Mario Bros.: October 18, 1985. For most that would be the end of it: we have an official source stating an exact date, end of story. But I want to know where that date came from, and what it actually means. Besides, Nintendo has been wrong about its own history before.

The Elements of Programmatic Style

Coding style seems to be a perennial topic among computer program writers, often leading to fierce debates over one style or another. These style preferences can differ immensely, and it seems everybody has their own take on what's good style.

Over the past few years, I've developed my own, internalized Style Guide for iOS and Mac Cocoa development, and recently I decided to formalize these rules into an excruciatingly detailed guide. I present The Elements of Programmatic Style. The guide details all of my Objective-C style and idioms, from naming conventions to whitespace to project settings and everything in between.

As you read through it, you may very well disagree with some of the guidelines I've laid out. In fact, I hope this is the case. This guide is for me and the people I work with professionally, and it may not apply entirely to you. But it's a starting point, and a great document to reference when sparking discussion among my contemporaries. Most of this document is even up for debate; as the language evolves, so too will this document. Objective-C is currently in a mutant state.

The one aspect of the document not up for change, and this is the root of much debate, is the guidelines for whitespace. My code keeps blank lines between logical chunks of code, and extra ones (two minimum) between method or function definitions. Whitespace is free, but more importantly, it greatly affects the readability of code. When I see two blank lines, I know this indicates a logical change in the code.

It goes beyond just grouping chunks together or pure aesthetics. Vertical rhythm is essential for legibility (you'll notice this page even pays particular attention to the spacing of text lines and blank lines). This principle has been in use for centuries, all in the name of readability. Our code deserves to be at least as readable as any other text. If you're interested in the subject, I'd highly recommend both this book and this adaptation of it for the web (which appears down for the moment…hmm).

Having a style guide, whether internalized or explicitly documented, is important. Whatever your opinion on whitespace or the placement of a brace, we can all agree consistency trumps all. If you're considering writing your own style document, feel free to use mine as a starting point. Just be sure to tell me how yours differs!

Total Remake. “Hollywood is officially out of ideas”

The difference between 7″ and 7.85″ is everything. Odi Kosmatos on the rumoured littler iPad:

So a 7″ tablet wouldn’t be an “iPad mini”, it would actually be more like an “iPod maxi”. But that’s getting ahead of ourselves. A 7″ or 7.85″ tablet still has to have a clear purpose. To be able to do something far better than the smaller devices (iPod touch and iPhone) and far better than larger devices (the iPads).

The answer is: reading, and long “consumption” sessions. Reading is something that is not comfortably done on an iPad, compared to a dedicated e-reader like a Kindle. That’s because it is relatively very heavy (over 600 grams), and too large. The new iPad is even heavier. You can’t hold these up comfortably. They are also rather expensive, starting at $400, compared to a dedicated e-reader. That’s why my friends with iPads that like reading also bought and use a Kindle. Reading is also cumbersome on an iPod and iPhone, because they are just too cramped and tiny. For comfortable reading, you need a device that’s not more than 400 grams (determined by reading reviews of various e-readers), with a long lasting battery, a screen more tailored to reading than other iOS devices, and it has to be more affordable than an iPad.

At that kind of size and resolution, it's going to be really crisp, but it will ultimately offer no additional “screen real estate”. iPhone 4-sized retina displays are already about as detailed as you could want them. The most likely scenario is that the iPod touch, whose sales numbers are dwindling, goes away and this thing takes its place.

I still don't buy it. All these rumours still feel like they presuppose this is an actual problem. I enjoy reading on my iOS devices, but I imagine that's much more of an edge case than would warrant another dedicated device. You know the first thing people will do with it is install Fat Angry Birds on it.

Arsenic in Our Chicken?. Let’s hope you’re not reading this column while munching on a chicken sandwich.

That’s because my topic today is a pair of new scientific studies suggesting that poultry on factory farms are routinely fed caffeine, active ingredients of Tylenol and Benadryl, banned antibiotics and even arsenic.

Objective C and Clang on Windows. If you want to start learning Objective-C on a Windows computer, you’ve come to the right place. This tutorial will show you how to install a compiler and the necessary frameworks to start hacking Objective-C on Windows today.

Using clang gives you goodies like ARC. I haven't tried this myself, so I can't comment on what the GNUstep libraries are like (that is, how much they've kept up to date with modern Cocoa), but if you've ever wanted to try Objective-C on Windows, here's how you can get started.

(See also the followup post on compiling with Block objects, too.)

How Twitter accidentally fostered the universal presence. Great editorial by @zpower on the Verge:

Last year I wrote an editorial, The universal status indicator, in which I bemoaned the internet's inability to rally around a standard for communicating presence and contact information. It got extraordinarily positive reaction — there's a real need here. And it turns out that Twitter is uniquely positioned to strike: it already has the universally-understood ID format under its belt. People have heard of it; you're not asking for the moon by starting at square one and requiring people to sign up for yet another service that won't be of any benefit without massive buy-in, the classic chicken-and-egg problem for online startups. And unlike every other service on the market — Facebook and IM services included — Twitter has tight integration with every mobile platform that matters. This is deeply critical; the hooks are already there in iOS, Android, and even Windows Phone.

CSS Media Queries and Asset Downloading. Absolutely essential reading if you're doing any kind of progressive-enhancement on the Web today. Tim Kadlec:

A little while back, I mentioned I was doing some research for the book about how images are downloaded when media queries are involved. To help with that, I wrote up some automated tests where Javascript could determine whether or not the image was requested and the results could be collected by Browserscope for review. I posted some initial findings, but I think I’ve got enough data now to be able to go into a bit more detail.

TED Video: Tim Berners-Lee on Linked Data and the Next Web. If you don't know who Tim Berners-Lee is, shame on you.

For his next project, he's building a web for open, linked data that could do for numbers what the Web did for words, pictures, video: unlock our data and reframe the way we use it together.

It's a brilliant video. It's interesting to see how his thoughts seem to jump around like links on a page.

More on Tim Berners-Lee's Linked Data. The idea which sparked the previously linked TED Talk, in HTML:

The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data.

The 4-inch iPhone. Dan Provost:

I’m with Marco on this one, although I do think a bump in screen size would be nice. I don’t think increasing the height of the screen, while preserving the width, is the right way to go about it though. Rather, I think Apple should keep the 3:2 aspect ratio and increase the physical size until it reaches the 300dpi retina boiling point, maintaining the 960x640 pixel count.

Yes, but why? Why would Apple make a bigger screen? Apple can do lots of things, but they won't unless they have a good reason. I've yet to hear a good reason for this.

The Verge's Paul Miller is Leaving the Internet for a Year to Improve His Writing. Paul Miller:

Now I want to see the internet at a distance. By separating myself from the constant connectivity, I can see which aspects are truly valuable, which are distractions for me, and which parts are corrupting my very soul. What I worry is that I'm so “adept” at the internet that I've found ways to fill every crevice of my life with it, and I'm pretty sure the internet has invaded some places where it doesn't belong.

The extremely well-produced video in the article is also worth a viewing.

Time away from the internet is a very good thing.

(See also the fantastic album linked in the post)

Our Chance

We exist as parts in a grand system: the Universe, a continuum which has been in motion for billions of years. We exist as but the briefest of instants in the grand cosmic scale of this continuum.

Though all of us are unique, we share so much from our close progenitors. We are all the same. We are skin and bone and blood. We bleed and cry and laugh, though some of us have forgotten how.

In Carl Sagan's stirring, enchanting words,

It will not be we who reach Alpha Centauri, and the other nearby stars. It will be a species very like us — but with more of our strengths and fewer of our weaknesses.

I'm envious of those palimpsestic humans who come after me, of those who will experience the benefits of the future. They will be better than us, but they also depend on us. Though we exist so briefly in this continuum, we are the ones making the world in which they will live.

Bertrand Russell's 10 Commandments of Teaching. A great find by Maria Popova. Russell:

The Ten Commandments that, as a teacher, I should wish to promulgate, might be set forth as follows:

  1. Do not feel absolutely certain of anything.

  2. Do not think it worth while to proceed by concealing evidence, for the evidence is sure to come to light.

  3. Never try to discourage thinking for you are sure to succeed.

And these are just the first three!

Paul Graham's Frighteningly Ambitious Startup Ideas. If you're the kind of person who takes an interest in topics Paul Graham writes about, chances are you've already read this. But if it's been sitting in your to-read box for too long due to its length, consider this a nod that it's worth your time. Two related bits really stuck out to me:

This is one of those ideas that's like an irresistible force meeting an immovable object. On one hand, entrenched protocols are impossible to replace. On the other, it seems unlikely that people in 100 years will still be living in the same email hell we do now. And if email is going to get replaced eventually, why not now?


One of my tricks for generating startup ideas is to imagine the ways in which we'll seem backward to future generations. And I'm pretty sure that to people 50 or 100 years in the future, it will seem barbaric that people in our era waited till they had symptoms to be diagnosed with conditions like heart disease and cancer.

These are both really interesting. Try to think of how different today is from 100 years ago, then project that same difference forward to 100 years from now.

Light Table IDE Concept. Former Visual Studio developer Chris Granger:

Despite the dramatic shift toward simplification in software interfaces, the world of development tools continues to shrink our workspace with feature after feature in every release. Even with all of these things at our disposal, we're stuck in a world of files and forced organization - why are we still looking all over the place for the things we need when we're coding? Why is everything just static text?

Even if this idea doesn't seem feasible, just the thought of real innovation in this space is exciting.

Apple, Failure, and Perfect Cookies. James Montgomerie:

I think this highlights two things that many other organisations would do well to learn. First, what you have is what it is, it’s not the effort that was put into it. If it’s not worth keeping, it’s not worth keeping. Second, if you want the best results, you need to give good people the room to start over without feeling like they are failing.

Apple, Taste, and Steve. Max Temkin:

The same is true of the company he made. Steve’s beliefs are in Apple’s bone marrow, and Apple is a company with radically human values. Steve Jobs’ Apple is a progressive, egalitarian company that believes in making technology both available to and usable by everyone. This isn’t just a business strategy for Apple, it’s a philosophy. Jobs went around saying things like, “We don’t get a chance to do that many things, and every one should be really excellent. Because this is our life. Life is brief, and then you die, you know? And we’ve all chosen to do this with our lives. So it better be damn good. It better be worth it.”

The Manual. Excellent, new print periodical:

Three beautiful, illustrated hardbound books a year, each holding six articles and six personal lessons that use the maturing of the discipline of web design as a starting point for deeper explorations of our work and who we are as designers.

Publishing is not dead, and neither is print. But I think we're experiencing a changing of the guard.

Our Quest for Fire

When I go away or have a guest for a weekend, I tend to almost entirely unplug from the internet. I stay off Twitter, I don't check emails, my feed reader stays closed. It's refreshing and recharging, and I highly recommend taking this approach.

When I come back to my normal routine, I'm often faced with an overload of backlogged content. I have a compulsion to read all the news items in my feeds, to check every email and everything on Twitter. I need to be caught up. To me, this is like a reversed form of Caterina Fake's Fear of Missing Out:

FOMO -Fear of Missing Out- is a great motivator of human behavior, and I think a crucial key to understanding social software, and why it works the way it does. Many people have studied the game mechanics that keep people collecting things (points, trophies, check-ins, mayorships, kudos). Others have studied how the neurochemistry that keeps us checking Facebook every five minutes is similar to the neurochemistry fueling addiction. Social media has made us even more aware of the things we are missing out on. You're home alone, but watching your friends status updates tell of a great party happening somewhere. You are aware of more parties than ever before. And, like gym memberships, adding Bergman movies to your Netflix queue and piling up unread copies of the New Yorker, watching these feeds gives you a sense that you're participating, not missing out, even when you are.

This Fear isn't about missing out on things which are happening in the real world because I'm out experiencing things in the real world! But instead, it's a fear of missing out on what's been happening in all the other parts of the world where I wasn't. We're so connected to each other these days that if I don't know about new software, or drama between tech writers or tech podcasts or whatever, I'm left with a feeling like I've missed a bunch.

But I haven't really missed anything.

There's an ever-increasing torrent of information we flood ourselves with every day: thousands of tweets and unread counts to be rectified. If you read through them and actually evaluate what they say, you'll find you're not really missing anything important at all. Any of the “big news” gets rehashed over and over again, so even without paying constant attention, you still find your way to it. And everything else didn't really matter very much in the first place.

I've been ruminating over this in my head for a while now. Why do I have a compulsion and anxiety to read it all? Why do I have to know? Why does it feel so important when I know it isn't?

We humans are amongst the curiousest animals on this planet, in both senses of the word. Our curiosity has propelled us to become the animals we are today. Our ancestors quested for and mastered fire to propel them forward. Today, we are on a Quest for Knowledge, one that burns deeply within us all. Like fire, those who hold knowledge have great advantages over those without. And like fire, knowledge quickly spreads. But like fire, too much knowledge can both burn us and burn us out.

Paul Miller: Peeking. Another entry in the fascinating saga of Paul Miller's internet expatriation:

It's 2012 now, and surfing is so embedded in our culture that we don't even speak of it: it's what the internet is for, and it's the method by which we communicate, learn, and experience. That's not to say Negroponte didn't identify a potential problem with the internet: a troubling signal-to-noise ratio. In fact, it's in large part thanks to this signal-to-noise ratio that I'm currently attempting an internet-free existence. But to say we're using the internet entirely wrong, and have yet to make it a “productive use of one's time,” is a little laughable. Yes, a culture is shaped by its tools, and the internet was shaping me a little too much, but for many people the internet remains a servant, not a master — a window, not a door.

The ChucK Audio Programming Language. An interesting strongly-*timed* audio programming language:

ChucK is a new (and developing) audio programming language for real-time synthesis, composition, performance, and now, analysis - fully supported on MacOS X, Windows, and Linux. ChucK presents a new time-based, concurrent programming model that's highly precise and expressive (we call this strongly-timed), as well as dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OSC, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive control.

PL101: Create Your Own Programming Language. Interesting new online course taught by Nathan Whitehead, with weekly lessons slowly building a compiler with the PEG.js parser generator.

In this class you will learn how to use the principles of programming language design to implement your own working programming language in JavaScript. You'll be able to show off the finished product to your friends and prospective employers on a simple demo webpage.

On Buying Coda 2 and Diet Coda

If you're a web developer you should probably buy yourself a copy of Coda 2, which comes out today, May 24, 2012. If you're even thinking about purchasing it, you should do so today, because it's 50% off its regular price today only: you'll get it for $50 instead of the regular $100.

As the app is available both from the App Store and directly from Panic themselves, this presents a bit of a dilemma: Should I buy Coda 2 from the App Store or from Panic? you might ask yourself. Here's my opinion:

Buy Coda 2 directly from Panic. The only pro to buying from the App Store is you get some iCloud syncing of your Sites (that is, your website configuration in Coda, not the files) and your Clips (similar to Xcode Snippets). This syncing isn't really anything that couldn't also be done with Dropbox if Panic later so decides. So you won't miss out on much.

But the benefits of buying directly from Panic far outweigh that one pro: you support Panic, an independent developer (remember: App Store developers only get 70% of the revenue), and more importantly, you're less vulnerable to Apple's whims with regards to sandboxing and what they'll allow in the Store. If Apple decides tomorrow that apps can no longer access the Terminal, your App Store copy of Coda 2 won't see much in the way of updates. You also might have to jump through more hoops to browse or edit your own files. And who knows what else Apple might start excluding from their Sandbox.

There is of course the possibility Apple will add new features to OS X which are only available to App Store apps, but I can't imagine anything too groundbreaking which Coda will really benefit from.

As such, I'd highly recommend you just buy it straight from Panic.

WWDC 2012 Wish List

This year's WWDC is fast approaching and while my full predictions list will come in a future article, I wanted to take a moment to list some items I'm really hoping we'll see come June 11.


As a user, the main feature I really want out of iOS 6 is Safari Extensions. I've become a big fan of Safari Extensions on the Mac, and those such as AdBlock, NoMoreiTunes, Beautipedia and Shelfish are all sorely missed when browsing on my iPad and iPhone.

Since Safari extensions are written with HTML, CSS, and JavaScript, I would not at all be surprised to see them added to iOS. Theoretically, the same extension would work unchanged between desktop and mobile platforms.

Bonus points would be given if my extensions could be kept in harmony across all my devices (but let's not get too hasty).


The biggest and most important feature I want as a developer is a major overhaul of the text system on iOS. UITextView is the main text class on iOS; it's what you use in any application involving scrollable, editable text. It gets the job done, but it's extremely basic. On its own, it can't be used for rich text in any capacity, so if you want to style your text, you've either got to use some embedded WebView hackery or roll your own text view class entirely from scratch.

Neither approach is impossible, but it's a substantial amount of work and is really in need of enhancement from Apple. Such an upgrade would make it much easier to develop apps for blogging, code editing, word processing, email, etc.

On the Mac, we get NSTextView which has all these capabilities and more. But Cocoa on the Mac is still deeply rooted in its NeXT heritage which means the assumptions it makes are much more valid for the underpowered hardware of yesteryear. By combining newer technologies of iOS like Core Animation and Core Text (also available on the desktop), we could have a really powerful and efficient text system.

In addition to adding features for styling text, I think it's also time to add some new standard features which nerds have enjoyed for ages in our text editors.

For example, the " (straight quote) character is ambiguous and degrades the typographic quality of whatever document I'm working on. 99% of the time, it would look better if it were automatically transformed into the proper “ or ” glyph (if you're using an app which doesn't do this for you automatically, you can type them with Opt+[ and Opt+Shift+[ respectively on both OS X and iOS physical keyboards, or by holding down the " key on the iOS on-screen keyboard).

If I enter a “ character, you can be sure I'm going to want a matching ” character later on to end my quotation. Lots of word processors and programming text editors will add the closing quote mark automatically, and then place the cursor in between. If, as is habitual for many typists not used to this nicety, I type the end-quote anyway, the editor should skip over it and move my cursor past it. The same goes for other character pairs like parentheses (), braces {}, and brackets [].

Implementing these features in an app is neither extremely difficult nor trivial (for example, what happens when pasted text contains one of these characters?), but these implementations should be standardized by Apple, and these features should have been added ages ago.
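To make the behavior concrete, here is a toy model of these niceties in a few lines of Python (illustrative only; a real text system operates on attributed strings and selection ranges, not plain strings, and every name here is invented): a typed straight quote becomes the proper curly glyph, an opening quote auto-inserts its closing partner, and typing a closer that is already under the cursor just skips over it.

```python
# Toy model of smart quotes, auto-closing, and skip-over.

CLOSERS = {")", "}", "]", "”"}

def type_char(text, cursor, ch):
    """Return (new_text, new_cursor) after 'typing' ch at cursor."""
    # Smart quotes: a straight " becomes “ at a word boundary, else ”.
    if ch == '"':
        opening = cursor == 0 or text[cursor - 1].isspace()
        ch = "“" if opening else "”"
    # Skip-over: typing a closer that's already there just moves the cursor.
    if ch in CLOSERS and cursor < len(text) and text[cursor] == ch:
        return text, cursor + 1
    # Auto-close: an opening quote inserts the pair; the cursor lands inside.
    if ch == "“":
        return text[:cursor] + "“”" + text[cursor:], cursor + 1
    return text[:cursor] + ch + text[cursor:], cursor + 1

# Simulated session: a typist using only straight quotes types "hi" and
# still ends up with curly quotes, cursor past the closing glyph.
text, cursor = "", 0
for ch in '"hi"':
    text, cursor = type_char(text, cursor, ch)
print(text, cursor)   # “hi” 4
```

Pasting is exactly where this sketch falls over, since pasted text bypasses the keystroke handler entirely; that's the kind of edge case a standardized Apple implementation would have to resolve once, for everyone.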

Bonus Round

Somewhere in between a user feature and a developer feature, I'd really love to see improved physical keyboard support in iOS 6. I've been doing a substantial amount of typing on the iPad with a Bluetooth keyboard lately, and I'm really starting to agree with Andy Ihnatko about some of the deficiencies of this practice. An assortment of sore spots:

  • There is no “Command+Tab” equivalent for switching apps using the keyboard alone. Instead I've got to touch the device. This makes it more of a pain if I'm writing an article in a text editor and trying to keep all my links in order from Safari (which, incidentally, is exactly what I'm doing right now).
  • Similarly, there's no keyboard shortcut for going to the Homescreen, either.
  • When I press and hold down on a key, there's a very slow key repeat and there's no way to configure this. This is reallllllllllly annoying (and so was that).
  • As an app developer, there's no way for me to enhance my application to support keyboard shortcuts. This could also go under an enhancement to the Text system as described earlier. Aside from standard shortcuts like “Command+C” for Copy and the like, as a developer I can't add my own shortcuts for, say, “Command+N” for New Document.

There's been lots of talk lately about iOS 6 and any remaining “low-hanging fruit”. I think most of the low-hanging fruit has already been dutifully snatched up, but I think there are lots of little pieces of iOS in desperate need of ironing out. Apple is trying to make iOS their platform of the future, and it seems like these issues will have to be addressed in order to accomplish that.

The Tools and The Professional

On the most recent episode of the –force podcast, software developer Ash Furrow discusses professions and how tools do or don't make you a professional in a field.

Anyone can program a computer, you can program a computer with flow-charts if you wanted to. The software exists that will let you make programs with flow-charts. It's stupid.

You don't get to call yourself a software developer, though, if you can't think about not how to do something, but why you're doing it that way.

I disagree; I don't see how that implication follows. The tool I'm using does not necessarily mean I can't be a software developer in any true sense of the term (or more generally, my tool does not necessarily stop me from being a professional). If a tool abstracts more of the complicated bits away from me, it's perfectly possible for me to be just as good a software developer, and in fact, I think in some ways it could make me an even better one.

I'll use an example which hits closer to home for software developers to illustrate: memory management.

The original way for a developer to manage memory is to do so manually. This means the developer must have an intimate working knowledge of the computer environment, and must understand how to control the memory of the system.

A newer way to manage memory is to have the system manage it for the developer. This relieves the developer of having to worry about doing it, and thus makes things easier. Provided the automatic memory management system is implemented properly, the developer doesn't need to understand the system in the same way as with traditional memory management.

It does not mean the developer can be completely careless about computer memory; it only means they don't have to care about, and thus justify, the details the system now handles for them.

So if the details of writing a computer program are abstracted away from me while using a flow-chart based development tool, this does not necessarily mean I'm not a real developer. I don't know about all those details because I don't have to. The same could be said for using a high-level programming language like C over writing straight assembly instructions. As a C programmer, I don't have to know about registers; all I have to know is how to store values in variables. It doesn't preclude me from being a software developer any more than using a flow-chart development environment over C would.
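To make the contrast concrete, here's a minimal C sketch of what manual memory management demands of the developer (the function and its name are mine, purely for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Manual memory management: the caller receives ownership of the
   returned buffer and must free() it exactly once. Forgetting to free
   leaks the memory; freeing twice corrupts the heap. A managed
   environment (a garbage collector, or Objective-C's ARC) inserts the
   equivalent of the release step for you. */
char *copy_string(const char *s) {
    char *buffer = malloc(strlen(s) + 1); /* explicitly request memory */
    if (buffer == NULL)
        return NULL;                      /* and handle failure yourself */
    strcpy(buffer, s);
    return buffer;                        /* caller is now responsible */
}
```

Under manual management, every call site has to remember to free the result; under an automatic scheme that bookkeeping disappears, which is exactly the relief described above.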

The Revolution Will Not Be Televised. Cindy Au:

People argue all the time that quality entertainment can’t be made for less than some number ending with no less than 5 zeros. Well people, you are wrong. Just look at The Silent City. Filmed on a shoestring budget, using natural locations instead of building massive sets — the result is pretty incredible, and it’s making me wonder more and more if the barriers to creating quality entertainment aren’t really so much about money, as cable companies, networks, and the media would argue, but who controls the money and who decides how things get made.

I keep thinking in a hundred years when our descendants look back on this time, they'll think it's so ludicrous how we felt only big companies could do substantial things.

Fan-Commissioned Art. Andy Baio:

As Kickstarter's exploded in popularity, I've started to see signs that there are others like me – a movement of fans as producers, commissioning work from their favorite artists instead of waiting for the artists to come to them.

To me, it feels like the next logical step in the evolution of fan funding. Already, fans are expecting to witness the creative process with behind-the-scenes progress updates and feedback forums. Now, they may actually help decide what gets made. If I'm right, the implications for working artists is potentially huge, providing an unexpected source of revenue, as well as potential creative headaches.

On the Surface

On Monday, Microsoft’s Steve Ballmer took to the stage to unveil Surface, a new tablet PC they’ve designed and built in-house and which they expect to set the stage for Windows 8 tablets to come. It’s an interesting move for Microsoft for a number of reasons and I think it signifies the start of something new from the company, having learned many lessons from both its successful Xbox and unsuccessful Zune line.


There are three interesting things about the name ‘Surface’:

  1. They’ve actually already used the name Surface on an earlier product, the larger table-sized devices mainly sold to casinos and restaurants, which always felt more like proof-of-concept devices than full-fledged products in their own right. Though the older Surface has since been renamed PixelSense, I think it’s safe to assume they’ve learned some lessons in touch control over the years (it should be noted, though, that the table-sized Surface devices used an entirely different mechanism for detecting touches, though there’s no reason interface paradigms couldn’t be shared between the two).

  2. The product name is completely devoid of the term “Windows”, a term Steve Ballmer appears to have a major crush on. Even their touch-based phone products can’t escape the Windows moniker, resulting in the mouthful “Windows Phone 7”. Though the Surface will run Windows 8, they aren’t calling it Windows 8 Surface.

  3. I think it’s a great name and, at least initially, it sounds way better than iPad (do you remember how ridiculous ‘iPad’ sounded when you first heard it? It still sounds silly today, but we’ve become used to it). Surface is clean and light (contrast with ‘Slate’) and speaks exactly to how we interact with these devices.


The Surface will ship in two different models, one with an ARM CPU and one with an Intel CPU. It’s not clear to me why they’re offering both; it feels like a design decision they’ve failed to make. If consumers can’t figure out why to buy one over the other, they won’t buy either.

The Intel model will presumably run faster, but that also means it will run hotter and have less battery life. I would rather have a slower CPU and get to use it more than a fast, but dead, CPU.

Both models share a very similar and sleek industrial design which looks similar-but-not-identical to the iPad and I think they’ve done well to differentiate themselves here.

One thing which really stands out is that both devices have heat vents along their perimeter, and heat vents mean fans, and fans mean noise. This is an unfortunate tradeoff which means the device is probably going to be uncomfortable to hold (more on that in a second), and it’s probably not going to have great battery life if the CPU is so hot (and thus power-hungry) as to require fans.

The device also comes with a built-in Kickstand (their term, not mine) for propping it up for typing or watching video. I think Kickstand is a bad name as it makes me think of the flimsy metal dingus which never quite manages to keep my bicycle from falling over. Though the mechanism looks a bit more sturdy, it still introduces a hinge to the device, another wrinkle in the simplicity. Will the hinge get loose and flap around? What happens if it breaks off?

Apple solves the propping-up problem with their magnetic Smart Covers, which clip to the side of the device and fold into a triangle to support it from behind. The cover is attached when you need it and quickly detached when you don’t.

The difference in propping mechanisms, and the inclusion of hot CPUs and thus heat vents, leads me to believe the Surface is a device Microsoft does not intend its users to hold for great lengths of time, which I feel is a fundamental flaw of the device.

Apple encourages its users to touch and hold its devices. Holding the device in your hands gives it anthropomorphic qualities; we cradle our devices, pick them up and bring them with us. But when a device is just sitting on a desk, whether a normal notebook computer or a Surface propped up with a Kickstand, the device remains part of the Other—it’s just an appliance and we don’t have an emotional attachment to it. This is part of why the iPad has been so successful: it has leapt into our hands, and even though it’s still just a computer, it feels much less like an appliance and more like a part of our lives.

The Beginning

On the surface, Microsoft’s new device seems like a way for them to show other PC manufacturers how Windows 8 devices should look and function. But as I’ve looked a little deeper, it really seems to me like this is less Microsoft showing manufacturers and more showing up manufacturers.

It’s no secret the PC market has entered a big period of flux in the last year. HP is probably leaving the PC market while other PC manufacturers are struggling as well. They’ve been squeezed for margins ever since PCs became commoditized and it’s probably starting to sting more and more with each passing quarter, especially as Apple is usually the only one seeing any growth lately.

If all the PC manufacturers leave the personal computer industry, where does that leave Microsoft and its cash-cows Windows and Office? I think the Surface is evidence Microsoft has been watching this trend and has realized if it wants to keep its foundation of Windows it must give itself a platform, a surface to stand on if you will, in order to keep its other businesses alive.

Microsoft Surface is the start of that hardware product platform. It doesn’t have a price. It doesn’t have a launch date. It’s going to need a lot to make an impact. But it’s a start.

On Concepts and Realities. Chris Granger on his Light Table IDE:

One size does not fit all.

I mean this in virtually every sense. The things I showed in my concept video weren't meant to fit all scenarios - in that specific case they happen to fit Clojure very well. One advantage of focusing on principles over features though, is that what Light Table does and how it functions doesn't have to be the same for every language.

I think this is a really important concept, principles over features. The same fundamental goals can apply no matter how they end up as features. And guess which the users care more about?

Douglas Engelbart's Chorded Keyboard as a Multi-touch Interface. Awesome implementation in the browser for iPads:

A chorded keyboard works by using combinations of finger presses to signal a keypress (for example, pressing both the first and second finger down simultaneously might send an “A”, while pressing the first and third finger down might send a “B”). With 5 fingers, there are 32 possible binary combinations. Leaving out the rest state (all off), and a drag state (all on), we have 30 useful mappings. With 26 letters, that even leaves a few for high level text commands (such as space, delete, and enter).
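The decoding described in the quote can be sketched as a simple bitmask lookup. Here's a minimal C version; the chord-to-letter mapping is purely illustrative (the real chord table is a design decision), only the rest/drag exclusions come from the quote:

```c
/* Decode a 5-finger chord into a keypress. Each finger is one bit of a
   5-bit value, giving 32 combinations. Chord 0 (all fingers up) is the
   rest state and chord 31 (all fingers down) is the drag state, leaving
   30 usable codes. The mapping below (chord 1 -> 'A', chord 2 -> 'B',
   and so on) is illustrative only. */
enum { CHORD_REST = 0, CHORD_DRAG = 31 };

char decode_chord(unsigned int chord) {
    if (chord == CHORD_REST || chord == CHORD_DRAG)
        return '\0';                    /* no keypress generated */
    if (chord <= 26)
        return (char)('A' + chord - 1); /* chords 1..26 map to letters */
    return ' ';                         /* chords 27..30: space, delete, enter, ... */
}
```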

Space: A Multi-User Developer Environment in the Browser. Absolutely stunning application of technology here, you've really got to check out the video. They've got a more detailed post about it elsewhere on their site. It's tools like these that really make me feel our current dev tools are totally antiquated.

Bonus points: they like outer space, too:

Support Space! The real kind. You know what we're talking about. The big, dark, unexplored, wilderness that is the universe. NASA's future at the moment is uncertain and could use your help. Be sure to let them know just how important manned space flight is to you.

Bret Victor's "Inventing On Principle" Talk from CUSEC 2012. If you are a software developer you really ought to spend the 45 or so minutes to watch this video. It continues the trend of “Amazing new tools which make our current tools seem bumbling and archaic”.

There are two really important things about the video (which I believe was part of the inspiration for Light Table):

  1. Bret's principle for inventing is about giving tools to creators so that they can see the results of their creation as quickly as possible. He demos some fantastic tools he's built on this principle and they're astonishing. But as you watch the video, try not to be distracted by the tools. Those tools are only for illustration of the second, more important part of his talk.

  2. The more important part is finding your own principles and inventing around those. I won't spoil the video, but a quote I absolutely loved was something like this:

    Social activists fight [for a principle] by organizing but you [as a technologist] can fight by inventing.

Twitter Is Working on a Way to Retrieve Your Old Tweets. Jenna Wortham:

“We’re working on a tool to let users export all of their tweets,” Mr. Costolo said in a meeting with reporters and editors at The New York Times on Monday. “You’ll be able to download a file of them.”

Generation Sell. William Deresiewicz writes about my generation:

That kind of thinking is precisely what I’m talking about, what lies behind the bland, inoffensive, smile-and-a-shoeshine personality — the stay-positive, other-directed, I’ll-be-whoever-you-want-me-to-be personality — that everybody has today. Yes, we’re vicious, anonymously, on the comment threads of public Web sites, but when we speak in our own names, on Facebook and so forth, we’re strenuously cheerful, conciliatory, well-groomed. (In fact, one of the reasons we’re so vicious, I’m convinced, is to relieve the psychic pressure of all that affability.) They say that people in Hollywood are always nice to everyone they meet, in that famously fake Hollywood way, because they’re never certain whom they might be dealing with — it could be somebody who’s more important than they realize, or at least, somebody who might become important down the road.

Well, we’re all in showbiz now, walking on eggshells, relentlessly tending our customer base. We’re all selling something today, because even if we aren’t literally selling something (though thanks to the Internet as well as the entrepreneurial ideal, more and more of us are), we’re always selling ourselves. We use social media to create a product — to create a brand — and the product is us. We treat ourselves like little businesses, something to be managed and promoted.

The self today is an entrepreneurial self, a self that’s packaged to be sold.

It's a fantastic article and the above quote highlights the fundamental disdain I've had for “social” websites recently.

Backup Your Data

After reading Mat Honan's terrifying tale about being hacked (hard) and losing all of his data, it again reminded me how few people back up despite how trivial it is. Mat lost:

  1. His entire iPhone restore data.
  2. His entire iPad restore data.
  3. Access to all his Gmail email.

And probably much more. This included photos which I'm assuming were priceless. And he didn't have any recent backups. This should be a warning to everyone currently running without a backup.

Thankfully, this is a pretty easy and cheap thing to remedy. Here's what I'd recommend:

Buy an external hard drive and use Time Machine, which is included with every Mac. It will back up your computer continuously, so long as you've got the drive connected. This gives you a local backup of all your data and, aside from the initial (and small) cost of an external hard drive, won't incur any ongoing costs. It also means when your computer goes belly up (and that's a when, not an if) you'll have a copy of your data close by. Unfortunately, this does not protect you if your home is robbed or if your computers are damaged in some kind of natural disaster. And more importantly, most people rarely connect their external backup drives anyway. If you don't back up regularly, you might as well not be doing it at all.

So additionally, I'd highly recommend signing up for an internet-based service like Crashplan or Backblaze. I've been using Crashplan for close to a year now and it offers me unlimited backup storage for $5 per month. That's a pretty awesome price for the peace of mind a third-party backup brings. These services continuously back up your computer to the internet, where your data is encrypted and copied many times over. This means if something happens to your computers, you can always restore, no matter how bad the damage. You'll never lose your data.

I've been happy with Crashplan so far, but Backblaze looks just as good (if not better). Evaluate them both and pick one.

Github Fixes Notifications. Github:

Today, we’re releasing a new version of our notifications system and changing the way you watch repositories on GitHub. You’ll find the new notifications indicator next to the GitHub logo that lights up blue when you have unread notifications.

Possibly the most annoying problem with Github is now totally resolved.

Advertising and Mixed Motives. Guess the authors of this paper:

Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

Bret Victor's Scrubbing Calculator. Takes Soulver's idea to a whole new level:

This page presents an idea for exploring practical algebraic problems without using symbolic variables. I call this tool a “scrubbing calculator”, because you solve problems by interactively scrubbing over numbers until you're happy with the results.

I could spend a lot of time just re-implementing things Bret is doing and I wouldn't feel like a single second is wasted.

Shopify for iPhone. I'm thrilled to announce the application I've been working on at Shopify has finally shipped. It's a completely new application and it should make Shopify shop owners very, very happy.

TED is the Anti-News

When I was younger, I used to watch CNN all Summer long. There was always something new to hear about the world and I got to learn about what was happening as it happened. Knowing about what's happening in the world is part of our new quest for Fire after all.

But as I grew up, I started to grow apart from television news as it seemed to change. September 11th happened and channels like CNN introduced the news ticker, which constantly streamed textual headlines while you were listening to the headlines (yo dawg). News stations like CNN literally started playing a beating war-drum sound between segments of broadcast, as America entered a war in Afghanistan and invaded Iraq. It was no longer exciting or fascinating, it was just scary and sad.

Around that time is when I stopped watching the news and when asked if I'd heard about such and such an event, I'd just say “I don't watch the News, because it's really just Bad News.” Had I heard about a hurricane or a tsunami or a mineshaft collapse? Had I heard about economic downturn and corporate scandals? In time, I eventually heard about some of these events, but not knowing didn't negatively affect me. In fact, I've felt better for not knowing.

It's strange to live in a world where hearing about wars or stock prices or even copyright lawsuits is more newsworthy or interesting than learning about scientific or artistic discoveries. People originally started sitting down to watch the evening news to get a sense of what was happening in their greater community, but that's not what's shown on the news these days.

This is why I feel TED fills that role better than what's on the news. TED is a set of conferences for the world's greatest thinkers and doers, scientists, politicians, engineers, teachers, artists. All the greatest minds giving 20-minute or shorter presentations on new ideas and discoveries. TED videos are every bit as professional and entertaining as television news, and are just as informative. But the subject matter is positive and fosters inspiration, instead of fear, in the community.

Every night when I get home from work and want to catch up on what's new in the world, I visit TED and watch a video or five. I learn and become inspired to do good in the world. I could probably link to just about any video hosted on the site, but I've narrowed it down to a few favourites for your perusal.

  1. How fiction can change reality
  2. The real-life culture of bonobos
  3. The linguistic genius of babies
  4. The birth of a word
  5. Erin McKean redefines the dictionary
  6. Jeff Hawkins on how brain science will change computing
  7. What we learn before we're born
  8. Chip Kidd: Designing books is no laughing matter. OK, it is.
  9. Tim Berners-Lee on the next Web
  10. Ayah Bdeir: Building blocks that blink, beep and teach


A Suggestion to Improve Objective-C's new Autoboxing support

In the 2012 edition of the Clang compiler, Objective-C gained support for some new language features like NSArray and NSDictionary literals, @[] and @{} respectively. It also gained support for turning numbers into NSNumbers by wrapping the number literal (or expression) with @(), or in some cases simply prefixing it with @.

So now to make an NSArray of NSNumbers we can do the following:

NSArray *numbers = @[@1, @2, @(1+2), @YES];

That's a nice improvement, but it's still completely ridiculous to type. Instead, I suggest allowing the following:

NSArray *numbers = @[1, 2, 1+2, YES];

Drop the boxing syntax inside the array literal. The compiler already knows it's an Objective-C array because of the @[], so why not just automatically wrap up any scalars as objects for us?

Etymology of “Orange”. I was wondering about this the other day, and I figured it would have been the other way around:

The word orange is both a noun and an adjective in the English language. In both cases, it refers primarily to the orange fruit and the colour orange, but has many other derivative meanings.

The word derives from a Dravidian language, and it passed through numerous other languages including Sanskrit and Old French before reaching the English language. The earliest uses of the word in English refer to the fruit, and the colour was later named after the fruit. Before the English-speaking world was exposed to the fruit, the colour was referred to as “yellow-red” (*geoluhread* in Old English) or “red-yellow”.

Measure Results, Not Hours, to Improve Work Efficiency. Robert C Pozen:

By applying an industrial-age mind-set to 21st-century professionals, many organizations are undermining incentives for workers to be efficient. If employees need to stay late in order to curry favor with the boss, what motivation do they have to get work done during normal business hours? After all, they can put in the requisite “face time” whether they are surfing the Internet or analyzing customer data. It’s no surprise, then, that so many professionals find it easy to procrastinate and hard to stay on a task.

There is an obvious solution here: Instead of counting the hours you work, judge your success by the results you produce. Did you clear a backlog of customer orders? Did you come up with a new idea to solve a tricky problem? Did you write a first draft of an article that is due next week? Clearly, these accomplishments — not the hours that you log — are what ultimately drive your organization’s success.

If I were starting a company, one thing I'd be sure to do is to rethink the 40 hour work week. 40 hours just seems like something we all accept as the way, but nobody seems to stop and wonder if it's for the best. Maybe it is, but I'd be willing to bet a workplace could be made much better with fewer hours.

Thinking Out Loud: Clients For Twitter And For GitHub Notifications. Giles Bowkett raises a good point about “building windows onto a stream of data you want to watch”:

In every form of Internet communication, the number of messages you give a shit about is always much smaller than the total number of messages you receive. Software design for messaging clients virtually never acknowledges this fundamental and consistent reality.

The cult of inbox zero is a ship of fools. It is the information-age equivalent of a slave religion, where you glorify the most obedient slave to an insane master. You should not get a high five and a merit badge every time you get to a state where you can calmly and intelligently choose what to do next; being able to calmly and intelligently choose what to do next should be your default state.

People really need to design messaging systems around the obvious reality that give-a-shit is a precious and rare treasure. For some insane reason, this is not what we do; most software is designed with the utterly bizarre assumption that all incoming communication receives a standard, uniform, and equal subdivision of give-a-shit.

Software developers are inherently lazy. We build programs to automate things. But sometimes our laziness hurts everyone, when we build the laziest possible solution, too.

I'm with Giles, when it comes to displaying information, it's a much better idea to do so intelligently. Otherwise we're just building whiz-bang-ier lists of stuff.

More from Giles Bowkett and his Hacker Newspaper. Lists are kind of a cop-out, too:

This might fall on deaf ears, but if there's anything Web geeks could use more of, it's typography. Imagine how much nicer it would be to read log files if your log file reader automatically reformatted your log files to emphasize the information you were searching for, not in a garish, clumsy way but in an elegant, readable way. Imagine how much nicer Google searches would be if they exploited the sophisticated structure of newspaper-style layouts. A lot of what people on the Web are doing with typography reinvents a smooth, polished, and very round wheel. Since it's existing knowledge which loads of people have failed to notice, you can get a technological edge on your competition using technology invented in the 1800s. We might be done with the printing press for day-to-day news, but disregarding the craft of typography is a ridiculous mistake.

By typography he doesn't mean “Hey what font is that?” but instead means “How is typography, both face and size choice, used to provide visual structure?”

All Over the Maps. Wouldn't it be nice if Apple hired this guy?

By contrast, David Imus worked alone on his map seven days a week for two full years. Nearly 6,000 hours in total. It would be prohibitively expensive just to outsource that much work. But Imus, a 35-year veteran of cartography who's designed every kind of map for every kind of client, did it all by himself. He used a computer (not a pencil and paper), but absolutely nothing was left to computer-assisted happenstance. Imus spent eons tweaking label positions. Slaving over font types, kerning, letter thicknesses. Scrutinizing levels of blackness. It's the kind of personal cartographic touch you might only find these days on the hand-illustrated ski-trail maps available at posh mountain resorts.

Then there are the lovely watercolor maps based on OpenStreetMap:

Well, now, this is gorgeous. Stamen Design overlaid watercolor textures on OpenStreetMap map tiles to show you what it would look like if your favorite watercolorist designed Google Maps.

And finally,

This is a zoomable, dragable version of the Viele Map of Manhattan, a map drawn in 1865 of the original boundaries and waterways of Manhattan. It is still in use today by developers, civil engineers, and architects.

Cocoaheads NYC Tonight at 6:30 PM. I'll be giving a talk tonight at the New York City Cocoaheads. If you're in town, stop on by and have a chat! My presentation tonight:

A debugging tool that works over the network to do real-time inspection and manipulation of iOS apps (that is, without need to set break points). It does this with a REPL console with some handy tools for quickly making changes and testing them.

See you there!


I like giving presentations and talks at groups like Cocoaheads, and I’ve given a few over the years. Most of the talks show off some new piece of code I’ve been working on and most of the time, the project is open sourced.

After the presentation, I like to host the slides somewhere and I also usually link to the code on Github, but it’s hard to keep everything organized.

This has led me to create my Presentations repository on Github. Here you’ll find all the presentations I’ve given, in both PDF and Keynote format. Sometimes I’ll even include an extra readme file with more information or links. You can clone the repository and always be kept up to date with the slides, if you’re interested.

But it gets better. Instead of just linking to any relevant code, I’ll also be including submodule links to the code repositories for each presentation. So by simply keeping your clone up to date, you’ll also have a way to get all the goodies I release, too, without having to look anywhere else.

The repo is a little bleak right now, but I’ll be adding new presentations and source code links as I make new presentations, and I’ll add some older ones when I get the chance.

Human After All

It’s no secret I’m not a very big fan of Android. I don’t like its visual design direction and I don’t like that it’s fiddly. I found developing for the platform to be an exercise in patience and hostility. I’d much rather be developing for and using iOS. But if you know me at all, you already know all this.

The company where I work has a mobile development team of around fifteen, including me: mostly iOS developers, plus a couple of serious Android developers. As is common on the internet, and so too at my workplace, the Android users are often teased and made to feel belittled for their choices. Though they brush it off most of the time, I can sense some frustration in the way they react.

And truth be told, this isn’t limited to just my workplace. I hear tons of putdowns towards people who choose to use and develop for different platforms, and I’m guilty of doing it, too. Though putting down others who are different is nothing novel in human culture, I’d like to think we computer science professionals are better than this. I want to hold myself to a higher standard.

But you know what the really remarkable thing is about those Android developers at my work, the ones who are teased and mocked for using a supposedly inferior operating system? They don’t spend their time mocking the others back; instead they spend it improving their platform of choice. They’re making great apps and they’re moving the state of the art forward. Whether or not Android is a “good” platform doesn’t matter to them: they want to make things better.

So instead of wasting your time teasing someone for using a different kind of phone than you, think how much better your time could be spent if you instead invested it in making the world even just a little bit better.

November is “National Novel Writing Month”. Also known as NaNoWriMo, it’s a chance and a personal pledge to write a 50,000-word novel in 30 days.

Do not edit as you go. Editing is for December and beyond. Think of November as an experiment in pure output. Even if it’s hard at first, leave ugly prose and poorly written passages on the page to be cleaned up later. Your inner editor will be very grumpy about this, but your inner editor is a nitpicky jerk who foolishly believes that it is possible to write a brilliant first draft if you write it slowly enough. It isn’t. Every book you’ve ever loved started out as a beautifully flawed first draft. In November, embrace imperfection and see where it takes you.

I’ve always wanted to do this, but I’ve also always made excuses why I couldn’t do it (“I’m in university,” he said, “I have a part-time job,” he said, “The stray cat I adopted just had four kittens and they’re adorable but literally four handfuls and it’s stressing me out beyond belief,” he said…OK that last one is legitimate).

This year, I’ve signed up. If you sign up too, get in touch!

Andy Ihnatko's Advanced Viewing of “Wreck-It-Ralph”. It’s amazing to think of Disney and Pixar studios as technology-driven companies, but that’s what they really are. Mostly all powered by Macs these days, too:

Disney Animation is a major Mac house. Every presentation and every demo that incidentally included a screenshot of any kind of process had a MacOS menubar in it.

An in-house Mac developer gave a detailed walkthrough of “Raconteur,” Disney’s custom-made app for building, presenting, and managing digital storyboards. It was a great demo. It made me try to imagine those segments that Walt Disney used to present on the old “Walt Disney’s Wonderful World” show, in which he explained the animation process.

“We use CALayer objects for the thumbnails, and when the animator plays through the scene with dialogue, we’re using the IKSlideshow class…”

Also the film sounds amazing.

Infinite Mario AI. This is from a few years ago, but I never got around to linking to it. There’s a great set of videos of a Super Mario World-like game being controlled by AI. Here’s an excerpt from an interview with its creator, Robin Baumgarten:

The main part of the heuristic is a simple time-constraint: try to get to the right of the screen as fast as possible. Therefore, the path-cost function is the spent time since the start of planning, and the heuristic is “given maximum acceleration, how long does it take to reach the goal”. As the distance to the goal isn’t known, an arbitrary (large) constant is used. In that sense, I’d say that the heuristic is not quite admissible (i.e. does not overestimate the cost to reach the goal), but because the overestimation is the same for the entire search space, this isn’t a problem. To avoid running into enemies or falling into gaps, a high penalty is added to the heuristic so that it does not get chosen as the next node by the search algorithm.
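The quoted heuristic can be sketched in a few lines. This is a toy illustration in Python, not Baumgarten’s actual code (his planner searched over simulated game states with position, velocity, and enemy motion); here the “level” is just a one-dimensional strip of cells, the planner can step one or two cells per tick, and the hazard penalty is folded into the step cost rather than the heuristic, which steers the search the same way.

```python
import heapq

MAX_SPEED = 2          # cells per tick at full acceleration
HAZARD_PENALTY = 1000  # makes enemy/gap cells effectively last-resort

def plan(level, start, goal):
    """A*-style search: level is a list of cells, 'X' marks a hazard.
    Path cost is elapsed time; h() is time-to-goal at max speed."""
    def h(x):
        return (goal - x) / MAX_SPEED

    frontier = [(h(start), 0, start, [start])]  # (f, elapsed, x, path)
    best = {}
    while frontier:
        f, t, x, path = heapq.heappop(frontier)
        if x >= goal:
            return path
        if x in best and best[x] <= t:
            continue
        best[x] = t
        for step in (1, 2):  # walk or run
            nx = min(x + step, goal)
            cost = 1 + (HAZARD_PENALTY if level[nx] == 'X' else 0)
            heapq.heappush(frontier, (t + cost + h(nx), t + cost, nx, path + [nx]))
    return None

print(plan(list(".....X...."), 0, 9))  # fastest route that hops over the hazard at cell 5
```

Because moving at full speed never costs less time than the heuristic predicts, the estimate never misleads the search, and the huge hazard penalty keeps those nodes at the bottom of the priority queue, exactly as the interview describes.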

It’s stuff like this that got me into Computer Science and it’s stuff like this that keeps me here.

On The Presence of Markdown. Ash Furrow on John Gruber’s stance on Markdown:

What I find interesting about all this is that, until now, I didn’t really have a problem with Gruber having complete control over Markdown. I mean, it’d be unlikely that he would make a change that would adversely effect my ability to access (and write) things like this blog, right?

It’s now apparent that Gruber is willing to let Markdown stagnate as the Internet moves on.

Markdown will work just as well tomorrow as it does today. That John Gruber doesn’t want to be beholden to a committee’s idea of what a Markdown spec ought to look like doesn’t mean Markdown is going to break any time soon.

Markdown serves the agenda of John Gruber, and if he decides it’s in his interest to not update it, then he likely won’t. That doesn’t stop another software developer from building a successor.

It works great today and it will work great for many todays to come. But if ever a today should arrive when Markdown is insufficient, then surely some software developer should be capable of updating their content to fit. Writing in plain HTML in indignation will only make your job needlessly more difficult. Use what works while it works until it doesn’t. Then use your development skills to solve the problem if it ever arrives.

The Weaknesses of Objective-C. I could go on at length, but Rob Rix of Black Pixel does a sublime job:

Objective-C is a compromise by design, and it is utterly unembarrassed by this. It is, I think, a good compromise, finding a sweet spot where one has very convenient access to low-overhead constructs for performance (C and C++ can be linked in and even intermingled with ObjC) while still having a nice dynamic messaging system supporting flexible late-bound polymorphism.

It’s also a compromise from the ’80s. (Relatively) recent advances in functional programming (among other spheres) sometimes make me wonder if we could strike a better one today.

He details many of the language’s shortcomings, all of which I fully agree with. It would be really awesome to rebuild the same semantics and syntax of the language without all the C baggage.

iPad mini Retina Display Math. Thomas Verschoren musing about a future Retina iPad mini if it used the same 326dpi display as the Retina iPhones:

The downside: the iPad mini with a 326dpi display would have a far better retina quality than the iPad at 264dpi. Selling the retina iPad as retina wouldn’t really be justifiable, I think.

I don’t really think this would be a deal-breaker. It looks like iPad mini is going to become the de facto iPad, and so if Apple can sell more of those than the bigger iPad, I’m guessing they will. Why not make your most popular iPad also the best iPad?

Random Shopper. Awesome project by Darius Kazemi:

How about I build something that buys me things completely at random? Something that just… fills my life with crap? How would these purchases make me feel? Would they actually be any less meaningful than the crap I buy myself on a regular basis anyway?

So I built Amazon Random Shopper. Every time I run it, I give it a set budget, say $50. It grabs a random word from the Wordnik API, then runs an Amazon search based on that word. It then looks for every paperback book, CD, and DVD in the results list, and buys the first thing that’s under budget. If it found a CD for $10, then the new budget is $40, and it does another random word search and starts all over, continuing until it runs out of money, or it searches a set number of times.

Modern Cocoa Text and a Shell Written for Cocoa

I'm currently in the process of preparing an excellent internal tool developed at Shopify for open source. It's an iOS debugging tool, and those who have seen it have loved it. Soon, we'll be sharing it with the world.

In preparation, today I open sourced two projects developed independently of the tool, but which are used to support it.

The first is Modern Cocoa Text, a Cocoa class for either OS X or iOS which enhances text entry. Just about every app has some form of text entry, but very little has changed in how that works over the past 30 years. We still type out every single character. Shouldn't the computer be smarter and help us out a little bit? A few months ago, I wrote a wishlist for what I hoped to see added to iOS:

If I enter the “ character, you can be sure I'm going to want an ending ” character later on, to end my quotation. Lots of word processors and programming text editors will add the closing quote mark automatically, and then place the cursor in between. If, as is habitual for many typists not used to this nicety, I type an end-quote anyway, skip it and move my cursor for me. The same goes for other character pairs like parentheses (), braces {}, and brackets [].

So Modern Cocoa Text is me putting my code where my mouth is. It's simple code that really makes text entry a lot smarter. If you're a developer, I'd recommend you add it to your app. It's not perfect yet, but pull requests are welcomed.
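The behavior described above is easy to model. Here's a toy sketch in Python (the actual project is Cocoa code; nothing below is from it) of the two rules: typing an opener inserts the closing half automatically with the cursor placed between them, and typing a closer that's already sitting under the cursor skips over it instead of duplicating it.

```python
# PAIRS maps each opener to its closer; straight quotes pair with themselves.
PAIRS = {'"': '"', '(': ')', '{': '}', '[': ']'}
CLOSERS = set(PAIRS.values())

def type_char(text, cursor, ch):
    """Return (new_text, new_cursor) after typing ch at cursor."""
    # Rule 2: skip over a matching closer instead of inserting a duplicate.
    if ch in CLOSERS and cursor < len(text) and text[cursor] == ch:
        return text, cursor + 1
    # Rule 1: insert an opener together with its closer, cursor in between.
    if ch in PAIRS:
        return text[:cursor] + ch + PAIRS[ch] + text[cursor:], cursor + 1
    # Everything else is a plain insertion.
    return text[:cursor] + ch + text[cursor:], cursor + 1

# Typing the four keystrokes ( h i ) produces "(hi)" with no doubled paren.
text, cursor = "", 0
for ch in '(hi)':
    text, cursor = type_char(text, cursor, ch)
print(text, cursor)  # -> (hi) 4
```

The real class has to hook this into the platform text system and handle edge cases like pasting and deleting, but the core decision per keystroke is about this small.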

The second project is JBShellView, which leverages the above text functionality to provide a generic and reusable shell interface for Cocoa apps. It's based on NSTextView and draws a little inspiration (at least in getting started) from F-Script's Cocoa shell (although this one wasn't written in 1999).

The demo application for the shell lets you search using DuckDuckGo, just to show you how the shell works when handling network tasks. It's super easy to use in new projects, and can be subclassed, too.

Don’t Kill Time

On my first birthday, the 365 days which made up the preceding year counted for 100% of my life. When I turned two years old, that same period counted for just 50% of my life. When I turned ten, the previous year was a mere 10% of my life, and so on. The years go by faster and faster because they represent a smaller and smaller portion of my entire life.

I’d love to live until I’m one hundred years old, a cool seventy-six years from now, and I know the intervening trips around the Sun will seem faster and faster still.

When I get to that age and look back on the difference, I can’t imagine wishing that I’d had “less time” than what I’d actually experienced. Of course, no matter how full a life I live, I’m certain I’ll always wish for more time.

So it’s with that in mind that recent trends of “killing time” have felt so strange to me. There are moments in my life when I get bored, like waiting in a line or riding the bus, and in those times, I sometimes feel the urge to divert myself with my phone or computer. It could be something new like Twitter. It could be something timeless like listening to music. It could even be reading an article or book. But whenever I’ve felt an urge for one, I’ve felt a stronger urge to avoid them.

Any of these activities is a good way to pass those in-between moments, those crumbs of a day, and get me through to a bigger, meatier morsel of time. They’re a way to kill time, but why would I want to kill time? Time is precious and limited and can never truly be gotten back. I’ll become a wrinkly old sod before I know it; I’d rather not accelerate that plan and miss any of the life on the way. Those crumbs may be tiny, but they can be filling when put together.

I want to emphasize I have nothing against Twitter, listening to music, podcasts, or reading. All are excellent tools which serve their own purpose of entertainment, enlightenment, or information. All are important, but turning to them for the sole purpose of killing time seems perverse to me.

So instead of skipping like a stone over the watery surface of life every time I have a spare moment, I want to instead plunge into the water and soak it all in. I want to have that time, however boring it may seem, to spend my way, with my thoughts.

We all want more time, so why do we try to kill it?

Undercity. While we’re on the topic of videos, here’s an incredible 30 minute documentary on Vimeo chronicling the adventures of Steve Duncan as he and the director explore the forgotten — and breathtaking — underground systems of New York City:

Steve and I just completed another underground expedition with Norwegian explorer Erling Kagge. It was featured in a three page article on the front page of the NY Times metro section and was written by Alan Feuer. We were also covered by NPR’s Jacki Lyden whose report will be aired on 1/2/11 and posted on NPR’s site.

The video is fascinating and kind of gives you the feeling of being in one of those dreams where you’re inside somebody else’s house, and you know you shouldn’t be, so you’re struggling not to get caught. There’s probably a word for that feeling, but I can’t find a suitable one. This documentary, however, embodies it.

The World Is Ideas

I was talking with a friend the other day about the technical world we live in, where it’s been improved, and where progress has been slow. We talked about moments when we each had seen the state of something in the world and realized it could be better, and we’ve both had an “I can’t be the only one who’s thought that” moment.

But it’s occurred to me that’s possible. It’s possible I might have been the only one to have ever thought about something. There may be over seven billion humans on this planet, each with their own fully functioning mind, each full of ideas and curiosity, but I (like each of these individuals) have my own view of the world. Every single person sees the world through their own lens. Each person has their own thoughts and memories, their own culture which has been carved out of the world by them every moment of their life.

My culture is very much unique to me, as are my thoughts. The events of my life affect the way I think and give me a different perspective on the world. I could, for example, learn of a brand new niche technology that only applies to iOS Engineers, and through my unique experience, I might have a specific idea based on that new technology. I might very well be the only one to have thought of it.

It seems unlikely, because often these ideas seem so trivial to us, but the reason why they seem so trivial is due to our individual and precise culture. They are obvious to me because I might be the only one with the precise perspective required to see them. That concept, the reason why these ideas are obvious, is obvious to me.

There’s a reason why we feel we can’t be the only one to have had these ideas, and it’s the same reason why when faced with them, we’re often unlikely to act, and that’s because despite our finely chiseled personal culture, we also exist in a greater continuum of culture. We live in a rainforest ecosystem of culture. It’s hot and humid air in our lungs and cool soil between our toes. It’s a network of vines and ideas. It’s danger and it’s shelter all at the same time. The greater culture is a support system in which we all participate, and if we rely on it too much, then we don’t let our own ideas sprout up and grow above the canopy.

Maria Popova shares a similar view on the subject in an interview by The Great Discontent:

Don’t let other people’s ideas of success and good or meaningful work filter your perception of what you want to do. Listen to your heart and mind’s purpose; keep listening to that and even when the “shoulds” get really loud, try to stay in touch with what you hear within yourself.

It’s not so much that other people are being nefarious, but naturally everybody wants to believe their idea of “success” is “correct”. But the important things I’ve come to realize are:

  1. Every single thing I see around me was conceived, created, and made by somebody no smarter or dumber than me. Everything we see came from some human’s mind.
  2. There are no instructions; there are no rules. Everybody is living life and making it up as they go along.

Wired recently published an article about executives mimicking Steve Jobs, and this unrelated tangent stuck out to me (emphasis mine):

Ironically, in Jobs’ remarkable story of self-creation we can see why the rest of us are so hungry for a role model to light our own paths. Whether it was in the early days, when he manipulated Steve Wozniak into building products for him to sell, or later in his career, when he was struggling to shape NeXT from scratch, or even after returning to Apple, when he created entirely new products, Jobs had no one to tell him how to realize his vision. He made high-stakes decisions on his own, with little to rely on besides his well-honed intuition. And on a smaller scale, isn’t that true of us all? In life, as in business, there really aren’t any concrete answers or clear guides. We can’t help but see a biography like Steve Jobs as a rare road map to the uncharted world we awake to every morning.

We’re all flying by the seats of our pants. Every one of your ideas is just as important as every one of my ideas. There’s no secret guidebook some of us are privy to. Next time you wonder if you’re the only person to think of something, consider the possibility that you are. And consider how you might improve the world if you do.

Retina MacBook Pro Review

Simply put: the 15 inch Retina MacBook Pro is the best computing device currently available for those serious about using computers. It isn’t perfect, but it’s by far the best choice.

The screen is resplendent, with bright colours that pop. And of course, as a Retina display, everything is sharp. When the iPad with Retina display came out earlier this year, I felt it looked great, but it didn’t feel stunning at first. In the meantime, I’ve come to feel mostly the same way about it. But the new MacBook Pro is on another level.

Perhaps it’s because I’m coming from a 2010 15 inch MacBook Pro, where the screen brightness wasn’t as good as an iOS device’s, so this new Retina model is just leaps and bounds better and more pleasurable by comparison. Whatever the case, the display is simply gorgeous.

When you hear about the increased clarity and contrast ratios, they sound impressive, but like nothing you couldn’t live without. But once you start using it as your fulltime computer, the effect is sublime. Sharp text. Everywhere. Sharp text for browsing, for reading and writing letters and articles. Sharp text for reading and writing code. It makes for an indescribable difference.

It makes the whole interface of OS X feel brand new again. It’s an interface many of us have used for the better part of a decade, and even though it’s evolved in the meantime, it’s still always felt to me like it’s the older brother of iOS, with more of its flaws and fewer of its strengths. But now that same interface becomes much clearer, and I feel like I’m now seeing it, for the first time, the way it’s always been meant to be seen.

All of the applications I use on a daily basis were updated by the time I started using this machine in late September 2012 (the machine itself, the first Mac with a Retina display, came out in June of this year), including Safari, Chrome, Xcode, Chocolat, and Byword. Because most applications on the Mac are heavily textual, most apps didn’t even need much of a change to be fully Retina-ready.

The Web is the only place where I ever really notice there is non-Retina-ready content still around, but aside from image-heavy websites, it’s not really a big problem. The simple fact is, textual content looks so gorgeous in either browser that a few blurry images here and there don’t really detract from the experience. Sure, a non-Retina image appears worse on a Retina display than a non-Retina image appears on a non-Retina display, but I think for the most part that’s a perception issue. Most of the websites I care about support it fully, and those which don’t aren’t really that big of a deal anyway.

The Rest of the Story

The display is the paramount feature of this computing device, but it’s not the only thing that makes it better than all the rest.

The device feels like a more idealized MacBook Pro. The SuperDrive has been subtracted from the device, allowing it to become much slimmer. The spinning-rust hard drive has been replaced with a solid state drive, making the computer not only much peppier, but also a great deal quieter. The fans very seldom kick in, and when they do, they’re mostly inaudible. Combined with the removal of the device’s signature “breathing sleep LED”, with the lid closed it’s pretty hard to tell when the device actually enters sleep mode — there’s no change in sound, no indicator light, and no physical movement from inside its belly.

I’ve never used a MacBook Air as my fulltime device, but I can say as an owner of previous generations of MacBook Pros this one is way slimmer and lighter. I’ve found the 13 inch models (either the Air or the Pro) to have far too small a screen for my tastes. I can’t say whether MacBook Air owners would feel this computer to be too bulky, but once you use it for a while, the screen will likely change your mind.

The computer runs much cooler than any MacBook Pro I’ve ever used. In the past, there was no way I could use the device on my lap as it was far too hot for my legs. This device has no such issues.

The Battery

Battery life on this machine has reached “I don’t care” status. It lasts a long time, probably a solid 6 hours of completely regular unplugged usage. I have no idea how long it actually lasts, just that you can use it unplugged and not have to think about it any more. And it charges up really quickly too. I’m used to the Retina iPad taking four or more hours to fully charge up, but the MacBook Pro seems to charge much faster.

If you use it heavily every day, you’ll still need to charge it once per day, but if you use it lightly, you can probably get a good week or so out of it without needing to recharge. It’s how computers should work.

The Bumps

There are of course a few bumps in the road for this first Retina Mac. Scrolling large, image-intensive web pages causes performance hiccups. Safari on Mountain Lion, which came out in July of this year, still has tons of graphical glitches (I’m told this isn’t a problem relegated to Retina devices). And on occasion the device will just totally crash. It’s probably a kernel panic, but without the usual kernel panic screen; it’ll just freeze and loop whatever audio is playing. The device then has to be restarted.

These problems, however, are infrequent. I could look at the problems and list all the reasons why this device might annoy you, but I don’t think they really matter compared to the joys it brings in other ways. Bugs are inevitable, but suffering because of them is a choice.

The Result

This is the best computing device you can buy right now. It’s gorgeous and makes reading and writing, watching movies, listening to music, or whatever else your job or personal life entails, much, much better. It’s a thoroughly enjoyable computer. It’s expensive, but worth it.

On Bret Victor's Magic Ink

There’s an incredible amount of astonishing, illuminating, and important information in Bret Victor’s 2006 Magic Ink paper. Over the course of some 26,000 words, Bret paints for us the current anachronistic state of software design (or lack thereof), why interaction is considered hazardous, and how little thought is put into how a person interacts with software.

He then offers a different perspective, a way of viewing the majority of software as a graphic design problem, and how that might be implemented.

Throughout the paper, he discusses and builds a conceptual framework for how true software design could be performed, and what a system built around this might be like.

Merlin had it easy—raising Stonehenge was a mere engineering challenge. He slung some weighty stones, to be sure, but their placement had only to please a subterranean audience whose interest in the matter was rapidly decomposing. The dead are notoriously unpicky.

Today’s software magicians carry a burden heavier than 13-foot monoliths—communication with the living. They often approach this challenge like Geppetto’s fairy—attempting to instill the spark of life into a mechanical contraption, to create a Real Boy. Instead, their vivified creations often resemble those of Frankenstein—helpless, unhelpful, maddeningly stupid, and prone to accidental destruction.

This is a software crisis, and it isn’t news. For decades, the usability pundits have devoted vim and vitriol to a crusade against frustrating interfaces. Reasoning that the cure for unfriendly software is to make software friendlier, they have rallied under the banner of “interaction design,” spreading the gospel of friendly, usable interactivity to all who would listen.

Yet, software has remained frustrating, and as the importance of software to society has grown, so too has the crisis. The crusade marches on, with believers rarely questioning the sacred premise—that software must be interactive in the first place. That software is meant to be “used.”

I suggest that the root of the software crisis is an identity crisis—an unclear understanding of what the medium actually is, and what it’s for. Perhaps the spark of life is misdirected magic.

Bret argues the majority of the time, we’re better off not interacting with software, because interaction is mechanical, and instead what we want is to solve an information problem. Engineers have defaulted to the mechanical because we:

  1. Are accustomed to it and can do so deftly, and
  2. Are solving entirely wrong problems. Our software isn’t answering what the user is asking.

Bret elaborates:

Much current software fulfilling these needs presents mechanical metaphors and objects to manipulate, but this is deceiving. People using this software do not care about these artificial objects; they care about seeing information and understanding choices—manipulating a model in their heads.

For example, consider calendar or datebook software. Many current designs center around manipulating a database of “appointments,” but is this really what a calendar is for? To me, it is about combining, correlating, and visualizing a vast collection of information. I want to understand what I have planned for tonight, what my friends have planned, what’s going on downtown, what’s showing when at the movie theater, how late the pizza place is open, and which days they are closed. I want to see my pattern of working late before milestones, and how that extrapolates to future milestones. I want to see how all of this information interrelates, make connections, and ultimately make a decision about what to do when. Entering a dentist appointment is just a tedious minor detail, and would even be unnecessary if the software could figure it out from my dentist’s confirmation email. My goal in using calendar software is to ask and answer questions about what to do when, compare my options, and come to a decision.

There are countless other paragraphs I’d love to pull from the paper, but you’d really be better suited to read it yourself. Though the paper is lengthy, it’s extremely well-written. Bret explores concepts fully without belabouring his argument.

It took me about five lengthy sessions to digest the essay (which is par for the course for a Victor essay) but it was worth every second. And the paper reads exceptionally well in Instapaper on an iPad.

The ideas presented in this paper are challenging, both to an Engineer living in 2012 who’s been trained with paradigms from 1984, and to an Engineer who sees how daunting some of these challenges might be to implement. But this Engineer sees that as an important challenge, as a way to move the state of the art forever forward into a realm of new possibilities. Bret’s epilogue, a quote from Richard Hamming sums it up well:

In the early days, I was solving one problem after another after another; a fair number were successful and there were a few failures. I went home one Friday after finishing a problem, and curiously enough I wasn’t happy; I was depressed. I could see life being a long sequence of one problem after another after another. After quite a while of thinking I decided, “No, I should be in the mass production of a variable product. I should be concerned with all of next year’s problems, not just the one in front of my face.” By changing the question I still got the same kind of results or better, but I changed things and did important work. I attacked the major problem—How do I conquer machines and do all of next year’s problems when I don’t know what they are going to be? How do I prepare for it? How do I do this one so I’ll be on top of it? How do I obey Newton’s rule? He said, “If I have seen further than others, it is because I’ve stood on the shoulders of giants.” These days we stand on each other’s feet!

You should do your job in such a fashion that others can build on top of it, so they will indeed say, “Yes, I’ve stood on so and so’s shoulders and I saw further.” The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.

iTunes 11 and Colors. Panic:

iTunes 11 is a radical departure from previous versions and nothing illustrates this more than the new album display mode. The headlining feature of this display is the new view style that visually matches the track listing to the album’s cover art. The result is an attractive display of textual information that seamlessly integrates with the album’s artwork.

After using iTunes for a day I wondered just how hard it would be to mimic this functionality — use a source image to create a themed image/text display.

We use a similar technique for some of our apps at Shopify and it’s a great effect. This is one of those useful bits of source code sorcery that really ought to be used in a lot of places.
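Panic published their Objective-C source for this, so the real details live there. Purely as an illustration of the general idea, here's a rough Python sketch (none of it is Panic's code, and the flat pixel format and luminance weights are my assumptions): treat the most common color along the artwork's edge as the background, then pick the candidate text color that contrasts most with it.

```python
from collections import Counter

def luminance(rgb):
    """Perceived brightness of an (r, g, b) triple (Rec. 601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def theme(pixels, width):
    """pixels: flat row-major list of (r, g, b) tuples; width: image width.
    Returns a (background, text) color pair."""
    left_edge = pixels[::width]  # first pixel of every row
    background = Counter(left_edge).most_common(1)[0][0]
    bg_lum = luminance(background)
    # Of the remaining colors, pick the one whose brightness differs
    # most from the background, so text stays legible against it.
    candidates = [p for p in pixels if p != background]
    text = max(candidates, key=lambda p: abs(luminance(p) - bg_lum))
    return background, text

# A tiny 2x2 "album cover": dark left edge, one bright pixel.
img = [(10, 10, 10), (240, 240, 240), (10, 10, 10), (30, 30, 30)]
bg, fg = theme(img, 2)
print(bg, fg)  # -> (10, 10, 10) (240, 240, 240)
```

Panic's real implementation does more work, like filtering near-duplicate candidates and enforcing a minimum contrast, but the shape of the trick is about this simple.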

Kurt Vonnegut's Semicolon Rule. Neven Mrgan:

This line by Kurt Vonnegut, one of my favorite writers, often gets quoted by would-be writers and literary types:

“Here is a lesson in creative writing. First rule: Do not use semicolons. They are transvestite hermaphrodites representing absolutely nothing. All they do is show you’ve been to college.”

This sort of exaggeratedly arbitrary, nose-thumbingly subjective opinion is exactly what we love in lovable writers, but it is also the exact sort of thing we should develop in ourselves, not mimic (or worse, throw as a “rule” at others when they fail to comply).

This quote is from the wonderful and hilarious “A Man Without a Country”, and the point of the edict is that it’s in jest. One of the central themes of his work is questioning authority, in particular by thinking for yourself.

So it’s not about this particular rule, but that a proscription of any sort should be considered and, most importantly, challenged.

A Better Summary of Bret Victor's Magic Ink. Steven Chung:

Written by Bret Victor during his time at Apple, Victor begins his argument stating that most software nowadays is information software, which helps people learn - contrary to manipulation software (helping one to create, such as Photoshop) or communication software (the combination of information and manipulation software, like email). From personal finance software to buying a movie online, we use software mostly to learn - compare options, answer questions and discover something. Sometimes we learn to make a decision, but acting on this decision is relatively minor. For example, most of Amazon is built to help users understand and compare options to decide what to buy. To ease the transaction, the actual buying behavior is a brief part of the site - small when compared to the rest. Victor gives many examples. When abstracted, we generally use software to learn. In other words, most software is information software.

However, when we create software, most software designers feel they are designing a machine - focusing on what software does. What can it do? How will users do it? What screens are there? The focus is functionality, but the core of information software is presentation. Victor suggests that information software should start as and primarily be a graphic design project - focusing on what and how the information is presented. What information is relevant? What decision is the user making? How can the information be presented to direct the user's eyes to the answer? Designs should begin with how the software looks because we use software to learn and we learn by seeing.

If you create software, you should really read Magic Ink.

Project Godus. Peter Molyneux's new god-game:

As a god your destiny is to spread your influence throughout the world. But you are only as powerful as your following. Using your ability to sculpt the land, you can create a habitable environment that will allow a population of believers to flourish.

The prototype video is kind of depressing. In the game, you basically just re-enact the shitty parts of civilization over and over again?

Killing in the name of god is not my idea of fun.

An Interesting Idea for Flickr

There’s a good chance that when I decide to upload an album of digital photos, I’ve got several candidates for the same shot. I usually compare them back and forth until I can determine which one I think is best, but I’m not a professional photographer and I have little concept of how “interesting” any of the candidates might be.

But Flickr does.

Flickr is known for having its secret “Interestingness” algorithm which it uses to determine how interesting a photo is. It applies this algorithm to photos and if a certain threshold is exceeded for a shot, that photo is highlighted in Flickr’s Explore page.

Although their precise definition of an “interesting” photo is secret, we can glean a little bit of what Flickr thinks is interesting from their patent application for the feature (emphasis mine):

Media objects, such as images or soundtracks, may be ranked according to a new class of metrics known as “interestingness.” These rankings may be based at least in part on the quantity of user-entered metadata concerning the media object, the number of users who have assigned metadata to the media object, access patterns related to the media object, and/or a lapse of time related to the media object.

This leads me to believe the algorithm looks at both metadata inherent in the photo, like EXIF data (probably things like exposure, depth of field, etc.), and social metadata, like how many times an image has been Favourited, added to a group, or commented on, and uses these in a recommendation system like Amazon’s “Collaborative Filtering”.

Even if that’s not precisely how Flickr does it, there’s a good chance such a mix would work well anyway. So with this algorithm, why don’t they help users decide which photos to upload?

Flickr’s algorithm would work really well here. I’d upload reduced versions of all candidate photos for my album, and Flickr would analyze them, returning to me the set it’s determined are the most interesting. If Flickr can’t determine which photos are candidates for the same shot (although it should be able to determine this based on contents and time stamps) then it could let me group them together and it can pick the best of that group.

This way, I upload the photos Flickr thinks are the best and put my best foot forward. This isn’t about me getting more “Favs” or comments on my photos, it’s just about making use of an excellent algorithm to help me, a non-expert, determine which of my photos are probably the best, because I’m not so great at doing it myself. Over time, this might also reveal patterns in how to take better shots (Flickr could also help educate users if that were their concern).

If Flickr has an algorithm to do this, why do I still have to do it manually?

I felt this applied best to Flickr because they’re in the mid-range of photography websites. Facebook is a dumping ground for all your photos, and 500px is generally used by professional photographers who already have a better sense of which photos work best, though they might stand to benefit from this method, too. Ultimately, I’m a Flickr user and they’ve already got the algorithm.
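Even without access to Flickr’s real algorithm, the workflow is easy to sketch. Below is a minimal, purely illustrative Python version (entirely my own; the scoring function is a made-up stand-in for Flickr’s secret one, and the grouping heuristic just uses timestamps): it clusters shots taken within a few seconds of each other as candidates for the same photo, then keeps the highest-scoring shot from each cluster.

```python
from datetime import timedelta

def group_candidates(photos, window=timedelta(seconds=30)):
    """Group photos taken within `window` of each other.

    `photos` is a list of (timestamp, photo_id) tuples; shots taken in
    rapid succession are assumed to be candidates for the same shot.
    """
    groups = []
    for taken, photo_id in sorted(photos):
        if groups and taken - groups[-1][-1][0] <= window:
            groups[-1].append((taken, photo_id))
        else:
            groups.append([(taken, photo_id)])
    return groups

def interestingness(photo_id, favourites, comments):
    # Stand-in for Flickr's secret algorithm: a naive mix of the
    # social metadata their patent application mentions.
    return favourites.get(photo_id, 0) * 2 + comments.get(photo_id, 0)

def pick_best(photos, favourites, comments):
    """Return the most 'interesting' photo id from each candidate group."""
    return [
        max(group, key=lambda p: interestingness(p[1], favourites, comments))[1]
        for group in group_candidates(photos)
    ]
```

The real service would obviously score on far richer signals, but the shape of the feature — cluster, score, surface the winners — is all there.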

Why Do We Have Regular Doctor and Dentist Checkups But Not Mental Health Checkups? [Reddit Thread]. An interesting Ask Reddit thread from last week:

I find it strange that we never hear of healthy people going for psychiatric evaluations on a yearly basis. I mean people can have mental problems and not know it, as with the rest of their body.

It’s an idea I’d never thought of before but I think it would be a good thing.

Also, according to the comments in the thread (so I’m not treating it as fact), there’s apparently a big stigma around mental health in the United States, which is unfortunate (if true). I believe mental health is as essential to a person’s well-being as their physical health. There’s no reason to be ashamed of either.

Tenth Grade Tech Trends. Josh Miller asks his tenth grade sister about tech startups:

A few months ago, my fifteen-year-old sister told me that Snapchat was going to be the next Instagram. Many months before that she told me that Instagram was being used by her peers as much as Facebook. Both times I snickered.

Learning from past mistakes, I took some time over the holiday break to ask my sister many, many questions about how her and her friends are using technology. Below I’ve shared some of the more interesting observations about Instagram, Facebook, Instant Messaging, Snapchat, Tumblr, Twitter, and FaceTime. I hope you’ll find them as informative, surprising, and humbling as I did.

The most revealing bit is about Twitter:

She had almost nothing to say about Twitter because she didn’t know anyone in high school that used it. “Nobody uses it. I know you love it but I don’t get it. I mean, I guess a few kids use it but they’re all the ones who won’t shut up in class, who always think they have something important to say.” (Note: that was me in high school, unfortunately.)

For me, Twitter is predominantly a link discovery service — admittedly, that is a simplified view, but it’s helpful for these purposes — so I followed-up on her Twitter comments by asking where she discovers links. “What do you mean?” She couldn’t even understand what I was asking. I rephrased the question: “What links do you read? What sites do they come from? What blogs?”

I don’t read links. I don’t read blogs. I don’t know. You mean like funny videos on Facebook? Sometimes people post funny links there. But I’m not really interested in anything yet, like you are.

Josh concludes that this is a potential market for Twitter, but I see it more as a sign that most of “the real world” hasn’t entirely caught up to the internet people yet. I don’t mean that in the sense that we’re ahead, but that our use of Twitter is more forward-thinking because we’re more used to thinking globally.

Think about how Twitter is used differently from the other services she talked about. All the other services were about communicating with people in her immediate geographical area. Facebook, Instagram, and Tumblr were almost all used with people she knew personally, whereas Josh’s description of Twitter is more about people on a global scale. I feel like I know many of the people I follow on Twitter, but the fact remains I’ve never met most of them in person.

I believe that with more advances in communication technologies and services, this will eventually become normal for non-technical people. But in the meantime, we live in a world where socialization is, for the most part, geographically bound.

Why We Won't Have Tablet-Native Journalism. Felix Salmon:

Subcompact publishing helps in terms of making great writing immersive: there are no distractions, just text (and maybe the occasional link or illustration) on a white background. Once you get lost in the story, the medium becomes invisible, just like all great storytellers should. It’s taking journalism and doing to it much the same thing that Readability does, or Apple’s “Reader” button in Safari. But when all you have is text, the journalism itself isn’t really tablet-native: it doesn’t shape itself to the contours of the medium in the way that radio journalism does to radio, or TV journalism does to TV, or tabloid-magazine journalism does to tabloid magazines. You’re basically left with a high-tech means of reading the kind of thing which could have been written centuries ago.

When journalism made the jump onto television, it was obvious that visual content was a good use of the technology to tell the story better. So far, we don’t have many examples of interaction enhancing digital journalism. That’s not to say adding interaction can’t enhance the story, just that I feel we haven’t really figured it out yet.

Grafting interaction onto journalism just because the medium permits it is a disservice to the story. There’s no point in adding it if it doesn’t improve the story or the reader’s comprehension of it.

Sins of a Modern Objective C Developer Community

My friend Ash Furrow recently published an article entitled “Seven Deadly Sins of Modern Objective C” in which he lists grievances committed by programmers new and experienced alike who use outdated or incorrect methods of Objective C development. This article struck a chord with me, but not for good reasons.

The article begins with the bellicose proclamation:

If you code Objective-C, this is going to offend you and that’s good. If you aren’t offended, then you don’t care, and that’s bad.

I disagree with both statements and the conclusion. A list of common incorrect or outdated patterns in Objective C should make for an enlightening and educational read — it should not be looking to pick a fight. The original version of the article, which was painfully and profusely peppered with profanity, has since been revised with less reviling language, but the harangue remains otherwise intact.

The so-called “sins” are hardly egregious, and few of them relate directly to Objective C anyway. Properly ordered, they are:

  1. Giant .xib files. These are interface files and not part of the language.

    Nonetheless, many novices will use this technique, which uses too much memory and suffers from slow loading times when the xib is read in. Experienced Cocoa developers know this already, but new programmers are probably not aware that creating different views in the same file is hazardous. Explaining this in an “offensive” way, however, doesn’t help anyone.

  2. Not Using Dot Syntax. This sin has to do with a syntax introduced with declared properties in the Objective C language. He writes:

    Now that we’ve covered a sin common with newbies, let’s tackle one that’s common with Objective-C greybeards.

    Tossing out “greybeards” rarely accomplishes much more than grabbing the attention of a developer who likely knows a lot about the language, and it sure doesn’t encourage said programmer to pay much attention to any forthcoming arguments.

    Get with it, old timers! Dot syntax isn’t just The Way Of The Future, but it has other benefits, too, like not alienating all your peers and great compatibility with ARC (what’s that? You don’t use ARC? Jesus…).

    Again, this does nothing but inflame when instead the intent should be to inform. Just telling an experienced programmer this is “wrong” is a dogmatic solution. Any experienced programmer will immediately respond with that lovely three-letter word we need to hear more of: “Why?” The article provides no answer for that question.

    I agree with using dot-syntax for properties (which I’m assuming Ash is also advocating, as opposed to using dot-syntax for any old method), and when I explain this to other developers, I use Brent Simmons’ line of reasoning:

    Yes, I know it doesn’t matter to the compiled code, but I like having the conceptual difference, and the syntax reinforces that difference.

    And while you might not like dot notation — or you might love it and want to use it for things like count that are not properties — I ask you to remember that cool thing about Cocoa where we care about readability and common conventions. […]

    Say I’m searching a .m file to find out where a UIImageView gets its image set. Knowing that image is a property, I search on .image = to find out where it gets set.

    If I find nothing I start to freak out because it doesn’t make any sense. […] I know that image view displays an image, and I know the image is set somewhere in that file — and I can’t figure out where.

    And then, after wasting time and brain cells, I remember to search on setImage:. There it is.

    Brent says not only what he thinks you should do, but why he thinks you should do it too.

  3. Too Many Classes in .m Files. This one is pretty straightforward but I also feel like this isn’t something a beginner will do too often, as the standard Xcode behaviour is to generate two files (*sigh*) for every class. It’s possible that this is a common beginner problem but I haven’t witnessed it. Either way, Ash is right here. Jamming extra classes in an implementation file is not OK unless those classes are helper classes to the eponymous file class.

  4. Not Testing With Compiler Optimizations. Agreed, but I don’t feel like this is a common problem because in the general case, you develop under the assumption these optimizations are not introducing bugs. That’s the agreement you make with the compiler. But if you’re getting crashes in your released version, this is a great place to look for mistakes.

  5. Architecture-Dependent Primitive Types. The simple truth is that, stylistic implications aside, the modern Cocoa APIs use these types, so if you don’t use them you’re risking a loss of precision and guaranteeing yourself a loss of abstraction.

  6. Unnecessary C APIs. This sounds like more of an issue with Apple’s code (and I agree) than it is with other third-party code.

    Listen, guys, it’s the twenty first century. Don’t use C anymore unless you have to. It’s rude.

    But why? I would say avoid creating new C code in Objective C projects because function calls are less expressive of their meaning (see a previous post about the advantages of Objective C-style syntax). A message send gets compiled into an objc_msgSend()-like C call, anyway.

  7. Not Using Automated Tests. Finally, an accusatory rant on why you don’t test properly:

    Do you unit test your Objective-C? No, you probably don’t. Do you have automated UI acceptance tests for your UI? Nope. Do you have any kind of continuous integration set up? No, you don’t.

    I don’t understand what is wrong with the Objective-C community that it continuously eschews any form of automated testing. It’s a serious, systemic problem. I think I might know the cause.

    It’s neither useful nor encouraging to berate a group of programmers about any of these “sins”, but this one is more troubling than the others because Ash goes on to rationalize (likely correctly so) why many Cocoa developers don’t unit test: because our tools are crummy.

    I’ll agree the tools are crummy, and in part because of this our community does not tend to properly automate or unit test our code. But shaming isn’t the path to fostering better methodologies and habits. Shaming leads to dissent and applies a hierarchy over our community. It leads to superiority and inferiority complexes which further divide our fellow members and discourages open discussions for improvements.

I don’t mean to pick on Ash, because he’s a friend of mine and a damn intelligent (and charming) one, at that. But I do mean to pick on a trend of internet snickery, a lambasting of fellow developers with the surface intent of “criticism”, but which really comes out as harsh or just plain mean behaviour that doesn’t help anyone improve what they’re working on.

My point isn’t just that it’s rude to be inflammatory, or that being inflammatory is a cheap way to rack up page views, or that it reflects poorly on the author. It’s not helpful to beginners, because it makes them feel stupid and gives off an elitist vibe. It’s also not helpful for experienced developers, who are confident in their skills but could use some updating in their habits, because the inflammatory style is used instead of actual reasoning about why these sins should be avoided. Being cussed out about bad habits doesn’t convince anybody of anything.

We developers are notoriously bad at socializing and I believe it’s one of the things holding our craft much farther behind than it ought to be. We’re wonderfully intelligent people, but we’re nearly incapable of having open discussions to improve our craft. The idea of actual, constructive criticism seems farfetched to most developers, and it’s so much easier to shit on somebody else’s work. That’s not how progress is made, and I believe this is one of the reasons we’ve been collectively spinning our tires since 1984 (see also Gabe at Macdrifter, who shares similar thoughts on Daring Fireball-esque impersonations).

Next time you feel like mocking another developer for his or her lack of skills in a certain area, stop yourself. Talk to the person and explain to them why you think something they are doing is incorrect or how it could be done better. This way, you don’t come off as a jerk, and the other developer learns something new and improves. And I’d be surprised if you didn’t learn something from the experience, too.

One Thought for A New Year. Scott Stevenson:

Every single thing in my life that has made me truly happy — I only got there by trusting myself and ignoring everyone else, even when it seemed insane. I can’t tell you how many times this has paid off for me. It often pays off immediately, within an hour. If I just trust what feels right, everything seems to fall into place magically.

I absolutely agree with this advice.

How To Recover Deleted Dailybooth Photos

I helped someone solve this tonight. Since DailyBooth.com shut down at the end of 2012, you can’t get at any photos. But if you needed to recover some here’s how (this trick works as of January 2, 2013):

Visit http://m.dailybooth.com/USER_NAME (except put your user name in the URL) and you should be able to see all your photos and grab them. I don’t know how long this trick will last, but that’s how I did it.

Also, some of the photos seemed to not want to download at first, so you need to open them in Chrome, open the image in its own tab, and then use Save As to get the image to save. I don’t know why.

Modern Cocoa Memory Management

Objective-C in the Cocoa and Cocoa Touch environments has always had one particular source of plight for newcomers: memory management. In the olden days, we Cocoa programmers had Reference Counting, a form of manual memory management, and though the rules were simple, they were also hard to master and easy to screw up. After a brief and half-hearted stint at fully managed memory in the form of OS X’s Garbage Collector, Apple has now deprecated that technology (which they never could get running well enough on iOS).

These days, we have Automatic Reference Counting (ARC) on both platforms, which is somewhere in between. In this article, I will explain the fundamentals of Cocoa memory management, what you must know and what can be left to the tools.

Smells like Garbage Collection

At first glance, ARC seems an awful lot like Garbage Collection: it is automatic, after all. But that resemblance is superficial; ARC is in fact quite different from GC. ARC is a compile-time technology, which means there is no collector running alongside your app’s process, and that saves on performance. But it also means ARC isn’t as capable as a full garbage collector.

The hint is in the name. If you look closely, it’s Automatic Reference Counting, not Automatic Memory Management.

In essence, this means ARC doesn’t relieve you, the programmer, from knowing the memory management rules; it only relieves you from writing memory management code. That doesn’t mean ARC is hard, it just means you still have to pay a little bit of attention instead of letting the system do all the work (as it ought to do). ARC is a compromise.

Ownership

The key and fundamental principle of Cocoa’s memory management rules is and always has been about Ownership. Learn this principle and learn it well and ARC and Manual Reference Counting will make absolute and perfect sense.

Instead of a system process exploring the runtime’s object graph, Cocoa’s reference counting system relies on a compile time Ownership model to determine the lifetime of objects at runtime. It can be expressed in three simple axioms:

  1. An object will exist in memory so long (but no longer) as at least one object maintains ownership of it.
  2. To keep an object, you must take ownership of it. If you are done with an object, you must relinquish ownership of it.
  3. More than one object may share ownership of a given child object.

Ownership is acquired in one of the following ways:

  1. By allocating an object in memory using any method whose name begins with alloc, new, copy, or mutableCopy.
  2. By requesting ownership of the object.

    In ARC, this is by assignment to a strong property or variable (object instance variables default to strong ownership under ARC, as they ought to).

    In MRC, this is by assignment to a retain property or by sending the object the retain message (object instance variables don’t do any of this for you under Manual Reference Counting, so if you wish to retain an object when setting it to an instance variable, be sure to send it retain).

If you do neither of these things, you don’t own the object and you should treat it as though it will go away after the scope in which it’s being used. That means you don’t have to do anything special to keep an object around for the duration of a method block, but unless you hang on to it explicitly, it will go away.

To relinquish ownership under ARC, all you have to do is remove the strong reference (by nil-ing out the property or variable). Under MRC, you do the same with a retain property, or send the object a release or autorelease message.

When and When Not

After mastering the concept of Ownership, the rest just falls naturally into place. To repeat from above: if you don’t claim ownership of an object, you can’t expect it to be around for any longer than its current scope. That’s the API contract you make with Cocoa’s memory management rules, whether ARC or MRC.

With that knowledge, you can thankfully let ARC take care of most of the rest (with one exception, as we’ll see later). Some examples of when you the programmer need to do work, and when you don’t:

- (void)someMethod {
    id localObject = [SomeClass new]; // creates ownership, but only for the method’s scope
    // ... do your stuff with localObject
    // When the method returns, ARC automatically relinquishes ownership of
    // localObject for us, because nothing else took ownership of it
}

- (void)setupState {
    // Here we assign the object to a strong property, thus taking ownership.
    // Even though the -new method also comes with ownership, that ownership is
    // local like the example above, so we get the intended behaviour of a
    // single owner. ARC figures it out for us.
    self.instanceProperty = [SomeClass new];
    // ... etc.
}

- (void)addToStateArray:(id)otherObject {
    [self.arrayProperty addObject:otherObject];
    // Here we don’t directly claim any ownership because our array does that for us.
}

Where ARC is weak compared to GC

Garbage Collection provides the programmer with the contract that it will take complete control over managing memory in the application process, whereas ARC only makes the claim of relieving the programmer of writing ownership machinery. This means, unlike GC, ARC does not fully manage every aspect of process memory. Most importantly for Cocoa developers, this means ARC cannot break ownership cycles.

A cycle occurs when a parent object claims ownership of a child object which either likewise claims ownership of the parent, or owns a descendant which claims ownership of the parent. It might look like the following, where -> means ownership:

A -> B -> C -> A

In such a scenario, keeping in mind the first and second axioms of Cocoa memory management, object A can never be deallocated because there is an ownership cycle. C owns A, but C can’t be deallocated because B owns C. And B can’t be deallocated because A owns B. And so on. To solve this, the programmer must take responsibility and use a weak reference.

A weak reference is just that: a reference to an object without claiming ownership of it. These also have the nice benefit of being automatically set to nil when the object at the other end disappears (I’m with Wolf Rentzsch on this one: if the runtime is capable of this, why didn’t they just go all the way and do real GC?). Here’s an example of solving an ownership cycle with weak, from the parent:

- (void)setChild:(Child *)child {
    // Assign the backing ivar directly; writing self.child = child inside
    // the setter itself would recurse forever.
    _child = child;

    // We assume the class Child has a property that looks like
    // @property (weak) id parent;
    _child.parent = self;
}

If the Child class had a property that wasn’t denoted as weak, then we would have an ownership cycle. But with weak, we can have a healthy object graph devoid of leaks or cycles.
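The same fix translates to any reference-counted environment. As an illustration of my own (not from the Cocoa examples above), here’s the parent/child relationship in Python using the standard library’s weakref module; note how the weak reference is automatically cleared when the parent goes away, just like a Cocoa weak property:

```python
import weakref

class Child:
    def __init__(self):
        self._parent = None  # will hold a weak reference, never ownership

    @property
    def parent(self):
        # Dereference the weak ref; yields None once the parent is gone
        return self._parent() if self._parent is not None else None

    @parent.setter
    def parent(self, value):
        self._parent = weakref.ref(value)

class Parent:
    def __init__(self):
        self.child = Child()      # strong reference: Parent owns Child
        self.child.parent = self  # weak back-reference: no ownership cycle

p = Parent()
c = p.child
assert c.parent is p     # the parent is reachable through the weak reference
del p                    # drop the only strong reference to the Parent
assert c.parent is None  # the weak reference was automatically cleared
```

(This relies on CPython’s immediate reference-counted deallocation; with a strong back-reference, the pair would instead linger as a cycle until a collector found it.)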

ARC is a compromise

Cocoa memory management has always been a source of consternation for newcomers. Even though ARC aims to ease that by taking more control over memory management, it’s not a full solution like Garbage Collection; to master it, you still must master the above concepts. But once you’ve internalized the principles of Cocoa’s memory management, ARC takes care of the rest.

The Relatable Fallibility of Roddenberry’s Humanity in STNG. Carson Brown on depictions of humanity in Star Trek: The Next Generation:

I’ve started watching Star Trek: The Next Generation from the beginning again. Instead of just enjoying it as a nostalgic trip through my childhood, I’ve been trying to actively watch what’s happening in the show. I’ve noticed a number of personality and behaviour patterns that totally clash with how I remember each character as a child. Bear with me as I review what the first half of STNG’s first season was like a second time around.

Discoveries Made on an Aimlessly Whimsical Walk of the Web

Last night, while catching up on some old articles in my RSS feeds, I read a quote by Buzz Anderson:

The programmer, who needs clarity, who must talk all day to a machine that demands declarations, hunkers down into a low-grade annoyance. It is here that the stereotype of the programmer, sitting in a dim room, growling from behind Coke cans, has its origins. The disorder of the desk, the floor; the yellow Post-it notes everywhere; the whiteboards covered with scrawl: all this is the outward manifestation of the messiness of human thought. The messiness cannot go into the program; it piles up around the programmer.

This is a quote from Ellen Ullman’s Close to the Machine, which was highlighted by Martin McClellan on a service I’d never before heard of called Readmill.

Readmill is kind of like Goodreads, except it looks much nicer, more modern and has an emphasis on sharing passages from the books you’re reading with your friends:

Readmill is a curious community of readers, highlighting and sharing the books they love.

We believe reading should be an open and easily shareable experience. We built Readmill to help fix the somewhat broken world of ebooks, and create the best reading experience imaginable. Readmill launched in December 2011 with a small dedicated team from all over Europe. We are based in Berlin.

From the book’s Readmill page, I stumbled upon Nicole Jones’s profile (“nicoleslaw” is such a great username) and then discovered her website, “Swell Content” where she writes about writing, content strategy, and making the world a better place.

From there, I discovered Born Hungry Magazine, a yummy-looking cooking and eating website which she founded and contributes to. It describes itself as:

an online magazine about why we cook and the curiosity that drives us. With every feature and recipe, we want to celebrate and encourage home cooks.

We believe everyone can make a delicious meal (or cocktail, as you do). We’re a bunch of inquisitives: roasting, pickling, tasting, and sharing. And we want to publish things in our slow, quiet way to inspire you to do the same.

She also posted, around the time I started writing this article, a link to her page on The Pastry Box Project, a website which shares daily thoughts from a roster of thirty writers, one per day for a whole year:

Each year, The Pastry Box Project gathers 30 people who are each influential in their field and asks them to share thoughts regarding what they do. Those thoughts are then published every day throughout the year at a rate of one per day, starting January 1st and ending December 31st. 2013’s topic is “Shaping The Web”.

It’s pretty neat, and here’s a more pensive description of the Project’s origins straight from the author, Alex Duloz:

The night following Ethan’s email coincided with a Madmen marathon. This show, probably one of the most subtle and well written ever aired to this day, often got me thinking about how interesting it would be to have direct access to the thoughts of 1960s ad executives, about their jobs, and what they were doing. Those people were simply defining a large portion of what their day and age was becoming (whether for good or bad, or worse) and I wanted to know if they were fully aware of the extent to which they were helping to shape the daily experience of millions of people, and, if so, how they felt about it. I had read some memoirs and some interviews, but those weren’t the raw material I was looking for, the right-now-in-the-heat kind of thinking.

Later, before falling asleep, thoughts of new projects, Madmen, and browsers being resized (I had spent a fair amount of the day testing the site) all mixed together.

And the Pastry Box Project took shape. Almost discreetly.

I realized I could gather the material I dreamed of while watching Madmen. I simply had to ask people to share their thoughts about their work, the industries they’re developing in, and themselves.

Sometimes I get the feeling like I’m missing the vast majority of the interesting content on the Web, and then I have days like today where that thought is confirmed. Here’s to discovery.

When Ideas Have Sex. Michael Shermer for Scientific American:

Sex evolved because the benefit of the diversity created through the intermixture of genomes outweighed the costs of engaging in it, and so we enjoy exchanging our genes with one another, and life is all the richer for it. Likewise ideas. “Exchange is to cultural evolution as sex is to biological evolution,” [zoologist Matt] Ridley writes, and “the more human beings diversified as consumers and specialized as producers, and the more they then exchanged, the better off they have been, are and will be. And the good news is that there is no inevitable end to this process. The more people are drawn into the global division of labour, the more people can specialize and exchange, the wealthier we will all be.”

I was discussing a similar idea tonight with a coworker. It’s one thing to bring brilliant people together, but it’s a great deal better if they are exchanging ideas. If companies had more regular discussions of ideas, both internally and with other companies, everyone could reap the benefits.

See also Matt Ridley’s excellent and convincing TED Talk on the same subject.

MVC is dead, it's time to MOVE on. Conrad Irwin:

I'm certainly not the first person to notice this, but the problem with MVC as given is that you end up stuffing too much code into your controllers, because you don't know where else to put it.

To fix this I've been using a new pattern: MOVE. Models, Operations, Views, and Events.

Interesting idea.
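To get a concrete feel for how the responsibilities split, here’s a tiny sketch in Python of how I read Irwin’s proposal (the class names and the event bus are my own invention, not code from his post): the model stores state, an operation mutates it, the model emits an event, and the view re-renders in response.

```python
class Events:
    """Minimal event bus: views subscribe, models emit."""
    def __init__(self):
        self._handlers = {}

    def on(self, name, handler):
        self._handlers.setdefault(name, []).append(handler)

    def emit(self, name, payload):
        for handler in self._handlers.get(name, []):
            handler(payload)

class UserModel:
    """Models hold state and emit events when it changes."""
    def __init__(self, events):
        self.events = events
        self.name = None

    def set_name(self, name):
        self.name = name
        self.events.emit("user.changed", self)

class RenameUser:
    """Operations encapsulate one unit of business logic."""
    def __init__(self, model, new_name):
        self.model = model
        self.new_name = new_name

    def run(self):
        self.model.set_name(self.new_name.strip().title())

class UserView:
    """Views render state and react to events instead of polling the model."""
    def __init__(self, events):
        self.rendered = ""
        events.on("user.changed", self.render)

    def render(self, user):
        self.rendered = f"Hello, {user.name}!"

events = Events()
user = UserModel(events)
view = UserView(events)
RenameUser(user, "  conrad irwin ").run()
# view.rendered is now "Hello, Conrad Irwin!"
```

The appeal over plain MVC is that the “controller” code has two obvious homes: business logic goes in an operation, and glue goes in event wiring.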

I'll Be Speaking at NSNorth 2013

What’s NSNorth? It’s a developer conference in Ottawa:

Over the past few years, Apple has revolutionized how people use technology. App developers have access to an exciting ecosystem that continues to grow at an enormous rate. More than ever, we as designers, developers, and business leaders have the tools available to change the world.

Our goal is to bring together experts in a variety of important topics for three days to broaden your horizons, make you think differently, network with fellow devs and designers all while having a great time.

If you’re an iOS or Mac developer, you really ought to buy a ticket and do so quickly. The conference runs April 19-21, 2013 and the lineup looks fantastic (if I do say so myself).

Versu, The Interactive Story Book. Fascinating new interactive storytelling platform for iPad:

Versu is an interactive storytelling platform that builds experiences around characters and social interaction. Each story sets out a premise and some possible outcomes. As a player, you get to select a character, guide their choices, watch other characters react to what you've chosen, and accomplish (or fail at) your chosen goals.

Watch the video and imagine the kinds of stories you'd create. Who says the book is dead? It looks more alive than ever.

A Conversation with Will Wright. A lengthy and classic interview from 2001 with Will Wright, one of the brilliant minds behind Sim City and The Sims.

CP: Yes, I do. But one of the things that interests me about the game is that you have these semi-autonomous characters. They’re not totally autonomous, and they’re not totally avatars either. They’re somewhere in between. Do you think that’s disorienting to the player, or do you think it’s what makes the game fun?

WW: I don’t think so. I mean it’s interesting. I’m just surprised that people can do that fluidly, they can so fluidly say “Oh, I’m this guy, and then I’m going to do x, y, and z.” And then they can pop out and “Now I’m that person. I’m doing this that and the other. What’s he doing?” And so now he’s a third person to me, even though he was me a moment ago. I think that’s something we use a lot in our imaginations when we’re modeling things. We’ll put ourselves in somebody else’s point of view very specifically for a very short period of time. “Well, let’s see, if I were that person, I would probably do x, y, and z.” And then I kind of jump out of their head and then I’m me, talking to them, relating to them.

On SimHealth, an example of a tool for sharing a mental model amongst many people, a powerful concept:

[WW:] We did a project actually several years ago called Sim Health for the Markle Foundation in New York. It was a simulation of the national healthcare system, but underneath the whole thing, the assumptions of the model were exposed. And you could change your assumptions, for example, as to how many nurses it takes to staff a hospital, or how many emergency room visits you would have given certain parameters, etc., etc. The idea was that people could kind of argue over policy but eventually that argument would come down to the assumptions of the model. And this was a tool for them to actually get into the assumptions of the model. When people disagree over what policy we should be following, the disagreement flows out of a disagreement about their model of the world. The idea was that if people could come to a shared understanding or at least agree toward the model of the world, then they would be much more in agreement about the policy we should take.

A humorous example of a shared model:

WW: In Go, both players have a model of what’s happening on the board, and over time those models get closer and closer and closer together until the final score. At that point you have a total shared model of, you know, “you beat me.” (Laughter.) Up until that point, though, there’s quite a large divergence in the mental models that players have. Especially if you ask them what the score is, or “How are you doing?” They’ll frequently say, “I’m doing pretty well, here,” or “He’s whipping me.” Or that backwards thing, “Oh, he’s whipping me,” when really you’re the one winning. And it really comes down to how each person is mentally overlaying their territories onto this board. In each player’s mind, there’s this idea that “Oh, I control this and they control that, and we’re fighting over this.” They each have a map in their head of what’s going on, and those maps are in disagreement. And it’s those areas of maximum disagreement where the battles are all fought. You play a piece there, and I think “Oh, that’s in my territory, I’m going to attack it cause you’re in my territory.” Whereas you’re thinking, “Oh, that’s my territory, you’re invading me.” And finally, the battle resolves that in our heads, and then it’s pretty clear that, “Okay, that’s your territory and that’s mine.” So the game is in fact this process of us bringing our different mental models into agreement. Through battle.

And finally, something intriguing I’m not sure they ever shipped:

WW: (Laughs.) Yes. I’m trying to basically chronicle the average model that the players have made in their heads. It’s like cultural anthropology. Already it's having a huge impact on what we do with our expansion packs and the next version of The Sims. We’re getting a sense of when people like to play the house building game vs. the relationship game, and what types of families they like to create, what objects they like the most. Eventually, in the not too distant future, we’re working towards having this be dynamic on a daily basis so the game in some sense can be self-tuning to each individual player based on what they’ve done in the game. That’s what I think is going to be really interesting slash kind of scary[sic]. Because I can see a really clear path to getting there. You look at what a million people have done the day before in a game, have all that information sent up to your server, do some heavy data analysis, and then every day send back to all these games each with its own new tuning set.

CP: So this would be The Sims Online where everything is going on at the server level as opposed to individual machines.

WW: No, this could be for just the next version of The Sims.

CP: As long as you have a way of collecting the data from the people.

WW: Right, and they could easily opt out if they want to turn it off. But for the most part they could still be playing a single player game, it’s just that every time they boot it up it goes to our server and asks for the new tuning set. And when they finish playing every day it sends back the results of what they did. So they’re still playing a single-player game, but it’s individually tuning itself to each player. You know based on your preferences, but also based on the parallel learning of a million other people. So you might discover things. Or somebody might actually initiate a sequence of actions on their computer in a very creative way and the computer might recognize that, send it up to the server, and say: “Wow, that was an interesting sequence, and that person likes doing comedy romances. Let’s try that on ten other people tomorrow. If those ten people respond well, let’s try it on a hundred the next day.” So it could be that the things aren’t just randomly discovered, but they’re also observed from what the players did specifically.

Introducing the Brand New “Speed of Light”

The first thing you’ll notice is that you probably won’t notice anything at all. The website looks and works identically to how it’s always worked. For you, the reader. But for me, the author, the website has undergone a massive overhaul and has been re-written from the ground up to support all kinds of new goodies. Gone are the days of the jury-rig known as Colophon 1, the old website software I’d written to power this website.

Instead, with Colophon 2 I’ve got a proper web interface. No more publishing articles over git. Colophon 2 has a proper REST API, including a built-in article editor (with autosave). I’m currently writing this article directly in the browser without worrying about it crashing. And I can start writing on my computer and finish up on my iPad. Or I can post links directly from my iPhone. This is something I could never do with the original version of Colophon.

Of course, I always could have gone with pre-existing publishing software, but none of the web publishing services I’ve looked at have really met my needs. And hell, it’s just fun to hack away on something outside of my typical domain. The app is in really great shape and I’m not entirely sure what to do with it. I could open source it, but I feel like it would be a waste of the good, hard work I’ve put into it. And then, of course, I could sell it, but I’m not sure there really needs to be yet-another-publishing-service. Suggestions are welcome.

On a more personal note

You may have noticed a distinct lack of articles here in the last month or so, and I’m happy to say it’s all been for a really good reason, and even happier to say the lapse should now be over.

I’ve just moved from Ottawa to New York City and started a new job as an iOS Developer at the New York Times. Working for Shopify was an incredible experience, and I was sad to leave, but I’m even more excited for this new stage in my life. I’m a Canadian living in the US, and I’m exploring a new city. It’s equal parts exhilarating and terrifying but I couldn’t be more excited for it.

A Proposal for a Deferred Network Request API on iOS

Earlier today, I tweeted about a feature I’d love to see on iOS:

As a subway passenger + iOS developer, it’d be lovely to have API for “Here’s a network request, please send when you can”

After a discussion with Sam Vermette of The Transit App (and fellow NSNorth speaker), I decided it would be a good idea to elaborate more about the idea here.

The basic premise of the idea is, I wish iOS provided an API for applications to submit to the OS a network request (like an HTTP GET or POST) which would be executed by the OS at the next available chance.

This is for times when the device is without internet access, like while riding the subway, but when the user still wants the action to happen. It would tell iOS “When we get network access again, I want you to do this request”, and obviously it should return the response to the application when it does (or on next launch).

An example of this would be using an article posting app on the subway. I might write a nice article on my phone while underground riding the subway, and I press “Post”. Because there is currently no internet access, the app hands off this network request to the OS to be executed at the next available time. I can then safely quit my article posting app and know that when I get off the subway and my device gets internet access, my article will finally be posted. When my app launches next time, it’ll get an NSData of the response of that network request.
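To make the proposal concrete, here’s a rough sketch of what such an API might look like, written in Swift for illustration. Everything here is invented: `DeferredRequestCenter` and its methods do not exist in iOS; they’re just one possible shape for the idea.

```swift
import Foundation

// Hypothetical API sketch: none of these types or methods exist in iOS.
final class DeferredRequestCenter {
    static let shared = DeferredRequestCenter()

    /// Hand a request to the OS to be performed at the next opportunity,
    /// even if the app has been quit by then. The identifier lets the
    /// app match the response on a later launch.
    func enqueue(_ request: URLRequest, identifier: String) {
        // The OS would persist the request and fire it once the
        // network becomes reachable.
    }

    /// On launch, collect responses to requests that completed while
    /// the app wasn't running.
    func pendingResponses() -> [(identifier: String, response: Data)] {
        return []
    }
}

// How the article-posting example might use it:
var post = URLRequest(url: URL(string: "https://example.com/articles")!)
post.httpMethod = "POST"
post.httpBody = "My subway-written article".data(using: .utf8)
DeferredRequestCenter.shared.enqueue(post, identifier: "article-draft")
```

The identifier-plus-`pendingResponses` design is just one way to handle the “return the response on next launch” requirement; the important part is that the OS, not the app, owns the queued request.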

If some of this idea sounds familiar, it’s because it would act as a nice complement to Marco Arment’s proposal for recurring network updates:

The addition of one more multitasking service would solve this issue for a lot of application types: a periodic network request. Here’s how I would do it:

  • The application gives the system an NSURLRequest and an ideal refresh interval, such as every 30 minutes, every few hours, or every day.
  • iOS executes that request, whenever it deems that it should, and saves the response to a local file.
  • Next time the application launches, iOS hands it an NSData of the most recent response.

The two would be welcome on iOS and complement each other very well. In short, I feel like the multitasking offerings on iOS are still lacking, and the OS often doesn’t reflect what people actually want to do with their devices. Such APIs would enable people to do more.


One thing I forgot to mention while writing this post was that a potential implementation of this could sort of be done today, but it would be a hack (thanks to Craig Stanford for reminding me about this).

You could do this by enabling “Significant Location Updates” in your application, and then trying to perform the network activity then. With these enabled, iOS will launch your app, even if it’s been quit, to tell you the device has moved to a new location (typically with a granularity of a neighbourhood or so). So when the device moves, you get a chance to execute code, and this could include network activity. Instapaper, among others, has a feature to do this, but again, it’s a hack.
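The hack might be wired up roughly like this (Swift, illustrative only). The Core Location calls are real API; `flushQueuedRequests` is a made-up stand-in for whatever app-specific upload logic you’d queue up:

```swift
import CoreLocation

// Sketch of the workaround: use significant-change location monitoring
// so iOS relaunches the app when the device moves, then retry any
// queued network requests at that moment.
final class DeferredSender: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        // A real app needs "Always" location authorization for this.
        manager.requestAlwaysAuthorization()
        manager.startMonitoringSignificantLocationChanges()
    }

    // Called when the device moves a significant distance. iOS will
    // launch the app in the background for this even if it was quit.
    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        flushQueuedRequests()
    }

    private func flushQueuedRequests() {
        // Attempt any pending uploads here; if the network is still
        // unreachable, leave them queued for the next wake-up.
    }
}
```

Note the cost of the hack: the app needs location permission it doesn’t otherwise want, which is exactly why a proper OS-level API would be preferable.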

A Grammar for Programming. Chris Granger of Light Table, speaking of a recent Bret Victor talk:

What Bret really did was create a new grammar for data visualization, a new set of nouns, actions, and rules that allow you to express graphical representations in terms of geometry.

A natural editor is an editor that allows you to work with the grammar in terms of the end product. It is an environment that allows for “Direct Manipulation” - you don’t edit symbolic representations that ultimately turn into the output, you manipulate the end product itself.

That second paragraph is really, really important. I believe it’s a fundamental issue truly holding back software development. We don’t work in symbols or artifacts, we work on instructions which become symbols or artifacts. That’s pretty uncommon in the world of creation, and I think it’s a massive deficit which must be overcome.

How ‘Minority Report’ Trapped Us In A World Of Bad Interfaces. Christian Brown:

In the movie, when Tom Cruise straps on his infogloves and starts rummaging through the dreams of the psychic precogs, classical music begins to play. He stands in front of a semicircular computer screen, the size of a wall, and uses his hands to fast forward and rewind, to zoom in and out and rotate the screen. Many of them are laughable—he places one hand in front of another to zoom in, like a vertical hand jive. He goes to shake someone’s hand and all his files are thrown down into the corner. It’s, frankly, absurd—especially if you haven’t seen it since 2002. THIS is the thing tech reviewers are always comparing a new interface to? Even so, there are recognizable gestures that anyone with an iPhone has used. The pinch-zoom, the rotation, and the swipe-to-dismiss are all used daily by smartphone users. And while Cruise’s begloved gesticulation is silly on its face, everyone else in the movie has to use a regular old multitouch computer monitor.

This is annoying on its own, but:

In 2006, a year before the iPhone’s debut, Jeff Han gave a TED Talk about multitouch gestures, demonstrating the use of them to manipulate photos and globes. Throughout, he described gestures as an “interfaceless” technology, a way to intuitively zoom in and out and rotate around images without a “magnifying glass tool.” This is, of course, nonsense. While touching something to get more info may be intuitive, every other gesture demonstrated is noteworthy for how NON-instinctive it is. Does pressing with one hand and dragging with another really intuitively represent rotation? Especially of a 3D object, like a globe?

The press gets so caught up in whiz-bang “innovation” that we’re left with magically-shitty interfaces which are even more confusing than a keyboard and mouse. Where gestures are used constantly without merit and nobody knows how to use anything.

For more perspective, read “A Brief Rant on the Future of Interaction Design” or listen to Guy English’s latest episode of the Debug podcast (another NSNorth co-speaker!).

Towards Learning Lessons from Web Applications

There is a constant debate between web developers and native application developers on which platform is “better”, where, as you might expect, the definition of “better” varies greatly depending on your perspective.

Native app developers believe their software is better because they have more integration with the host platform: they get access to the user’s computer and things like drag and drop, or a tighter integration with the user’s information, like Calendars or Contacts. These applications also benefit from better performance, as the programs typically run natively, as opposed to being interpreted by a web browser. Web applications will always be playing catch-up, according to some.

Web application developers believe their software is better because it can reach users on every platform and operating system. They don’t have to specify for only users of Macs or PCs or phones or tablets. Every user gets more or less the same experience. These applications also benefit from the nature of their environment: they actually exist running on controlled web servers instead of on the user’s local machine. The important consequence of this is software developers can rapidly change and improve the application without users having to take any action whatsoever. They simply visit the page and they’re always viewing the most recent version of the application.

I’ve been a native application developer for many years now and I’ve always preferred it for the aforementioned reasons, but lately I’ve been starting to see more of its flaws and fewer of its benefits. I’ve been looking at how human creativity works, and more importantly, what impedes it. And the common thread I’ve seen in all this research is a delay in seeing results of creation seriously impedes that creation.

That statement is true at all levels, from the way the code works all the way up to how a person uses the software. At the bottom, most modern web application software is written using dynamic languages, from Ruby or Python on the back end to JavaScript on the front end. The benefit of a dynamic programming environment is that changes can be made, and more importantly, reflected, at a much quicker pace than in more static programming languages. Anyone who’s made test changes in WebKit’s JavaScript console knows the benefits of having a REPL to play with application code. Changes can be tested while the code is running. Until recently, this wasn’t even possible in native iOS applications.

The more important benefit is however at a higher level. As a developer, there’s no impediment to getting new versions of my code out to users. I simply write the code, and when it’s been tested enough, I can deploy the fixes to my users. They don’t have to update anything, they just always get the most recent version of my application. Github illustrates this wonderfully, as they ship new code on the daily. “What version of Github are you using?” The current version.

Paul Graham wrote about this for his LISP-based web application from the nineties:

When one of the customer support people came to me with a report of a bug in the editor, I would load the code into the Lisp interpreter and log into the user’s account. If I was able to reproduce the bug I’d get an actual break loop, telling me exactly what was going wrong. Often I could fix the code and release a fix right away. And when I say right away, I mean while the user was still on the phone.

In the old days, computer programs were written on punch cards which were fed into the computer, tediously, for the machine to execute them. It wasn’t until hours later when the results of the program were printed back out to the programmer. There was a big delay between the programmer writing code and there being a solution to his or her problem. How barbaric.

These days, there’s a smaller delay between the programmer writing the code and seeing the result of the execution, but there’s still an immense delay, for native applications, between when the programmer writes the code and when the user sees the result. Our native applications are still shipped as though they’re printed onto some physical artifact, which must be moved through space — at the expense of time — to a customer. This was necessary for punch cards and it was necessary for floppies and CD-ROMs, but it’s no longer true in the age of the internet.

Shipping native applications, even in the best case, is almost always a slow process. There are long development cycles with tons of testing needed before the application can be shipped. And then, there’s a struggle to get users to update their applications to the latest version.

I think there are a few reasons why users don’t update their native applications:

  1. Because updates ship so infrequently, they usually involve many changes which break things.
  2. Because it’s tedious, mechanical, shit work they probably shouldn’t be doing. It should just be done for them.
  3. Because even if they wanted to, they often don’t know how.

I think #1 is the biggest culprit for those in the know. Experienced users have unfortunately experienced many poor upgrade experiences. But the experiences are so poor because the updates were so big and contained so many changes. And the updates were so big because users so infrequently update their software. It’s a vicious cycle and it needs to be broken.

The problem gets compounded when working with Apple’s App Stores, where even if developers wanted to ship on a regular basis, they have absolutely no power to do so. Instead, they’ve got to wait usually a week or more between shipping code and people being able to use it. Not only that, but while they’re waiting, they can’t ship any incremental changes lest they have to start the waiting period all over again. It really sucks.

I’m not entirely sold on web development as the one true way forward, but I do admit I admire many of the benefits such an environment provides. I want native development to learn its lessons. I want to ship software as frequently as I can. I want my users to feel like the users of Chrome or Chocolat, applications whose updates happen so frequently it’s basically invisible. If we could update native applications multiple times per week, it would become the norm. Update problems would be reduced because changes would be smaller and bugs would be easier to track down. And users would benefit most of all because they’d no longer be required to do anything — they’d just always get the best software.

Computer Science Education and Humans

Computer Science education is in need of vast improvement. We’re taught low-level details of how software works at an atomic level, but we ignore the human side of software. I’m not talking about user interfaces; I’m talking about ignoring the humans who make software and the humans who try to get things done with it.

Everybody believes their line of work is an essential part of the world — and they’re completely correct — but our current age is one built precariously on science and technology. Almost all of Western human culture is either derived from or delivered through some kind of digital orifice. This means there is an incredible need put on those who create and build the technology, and because there’s a lack of education, this also puts an incredible strain on those very same people.

In other aspects of culture creation, in trades such as carpentry or graphic design, the education includes learning the constraints of the craft (like the relationship between the wood and the saw, or how colours render differently between screen and print), and it also includes fundamental principles like aesthetics of form or typography — qualities of the trade which are the result of learning from human experience over the course of centuries.

Computer Science education focuses almost entirely on the former. Students are taught how the computer works, and, beginning at the theoretical ground, learn how software can be represented as fundamental processes (as described by Alan Turing) all the way up to how good object-oriented systems should be designed. We learn about data structures and algorithms and we learn why some are suitable in some cases but not others. We’re essentially taught the mechanics, but we’re taught nothing of what properties emerge from these mechanics.

Our field is nascent, and although it looks like we’ve been stuck with things like unresponsive, unhelpful graphical user interfaces for a long time, they’re really just the beginnings of what interactive digital machines are capable of doing. What they don’t teach you in Computer Science is that basically anything you can imagine is possible. The bigger problem is that student imagination is stifled by the status quo, instead of being nurtured by education. We’re often asked not to re-think how to solve problems for people but instead taught how the mechanics of existing practices already work. We’re not taught to be brilliant, creative thinkers, but instead taught how to become cogs, manufacturing computer programs.

The saddest part is, those we should be learning from remain mostly ignored in Computer Science education. There have been many great thinkers in our field, from Alan Turing to Stephen Wolfram, from Vannevar Bush to Douglas Engelbart and Tim Berners-Lee, from Alan Kay to Bret Victor. There’s an absolute treasure trove of great thinkers in Computer Science (and thanks to the nature of computers, almost all their work is dutifully digitized and readily available) who go almost entirely unnoticed in Computer Science education. There are great minds who have solved the same problems over and over again, or whose ideas were decades before their time, who go completely unmentioned in the four years of an undergraduate Computer Science degree. How can we call ourselves educated in this field if we know nothing of its masters?

We’re learning how to build bricks but not how to build buildings for we learn nothing about how architecture applies to humans, and we learn nothing from the great architects who’ve come before us.

We can fix this by rattling the cages. Those great masters who came before us didn’t exist in a vacuum, and they didn’t invent everything all on their own. They saw further by standing on the shoulders of giants. Their ideas are dangerously important, but they didn’t emerge out of the ether. And so, like them, we need to learn from the greats. We need to learn not only how to build software, but to question and examine the fundamentals of what we’re even building. We need to demand an education where ignoring past bodies of knowledge is a travesty, and we need to demand the same from ourselves. If you’ve already finished your university education, don’t worry, because we all continue to learn every day.

So read about and learn from the Greats. And more importantly, help others do the same. Start talking about a Great you admire and don’t shut up until everyone you know has read his or her works, and then you can start building off them.

The Deep Insights of Alan Kay. If you’re looking for a good introduction to one of the Greats, you’d do well to read through every link in this overview of Alan Kay’s career.

Bret Victor’s ‘Thoughts on Teaching’. Bret Victor:

Can you trust a teacher whose only connection to a subject is teaching it?

How can such a teacher know if what he’s teaching is valuable, or how well he’s teaching it? (“Curricula” and “exams”, respectively, are horrendous answers to those questions.)

Real teaching is not about transferring “the material”, as if knowledge were some sort of mass-produced commodity that ships from Amazon. Real teaching is about conveying a way of thinking. How can a teacher convey a way of thinking when he doesn’t genuinely think that way?

I’m sure many teachers spend their evenings thinking about teaching the subject. I have no doubt that these teachers love teaching, and love their students. But to me, that seems like a chef who loves cooking, but doesn’t love food. Who has never tasted his own food. This chef might have the best of intentions, but someone in need of a satisfying meal is probably better off elsewhere.

The cliché you hear is “Those who can’t do, teach” when in reality it’s those who can do who should be teaching.

Ex-Pixar Employees Are Building Intelligent Toys for Children. Toy Story is real:

Got that? We’re talking about children’s toys built by an AI scientist from where Siri was born, that tracks human movement, can interact with spoken words, is connected to the web and mobile by an engineer with a world-beating scalability background, promoted by an early advocate of blog publishing software that changed the world and designed by people behind the most popular children’s movies in history.

Pixar’s Senior Scientist explains how math makes the movies and games we love. Tim Carmody, writing for The Verge:

The topic of DeRose’s lecture is “Math in the Movies.” This topic is his job: translating principles of arithmetic, geometry, and algebra into software that renders objects or powers physics engines. This process is much the same at Pixar as it is at other computer animation or video game studios, he explains; part of why he’s here is to explain why aspiring animators and game designers need a solid base in mathematics.

Electronic Sensors Printed Directly on the Skin. Mike Orcutt for MIT Technology Review:

Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.

This sounds an awful lot like the start to one of Paul Graham’s “Frighteningly Ambitious Startup Ideas”:

[O]ngoing, automatic medical diagnosis.

One of my tricks for generating startup ideas is to imagine the ways in which we’ll seem backward to future generations. And I’m pretty sure that to people 50 or 100 years in the future, it will seem barbaric that people in our era waited till they had symptoms to be diagnosed with conditions like heart disease and cancer.

The Super Debugger

In January, while still at Shopify, I released the Super Debugger for iOS, a wireless, interactive, and realtime debugger for iOS applications. That means you can debug your applications over wifi (and potentially, cellular), without needing to set breakpoints. You can send messages to your objects, you can see their state, and you can change their state all in real time. The project is open sourced on Github, too.

From the project’s homepage:

The Super Debugger (superdb) lets you debug in new ways lldb can’t: it allows you to send messages to the objects in your app, without the need to stop on breakpoints.

Use the powerful Shell on your Mac to inspect your objects, see changes instantly, and speed up development.

The project started as an internal “Hack Days” project at Shopify, where we got two days to start and “ship” (or at least demo) a project. As I’m a Cocoa developer, I had been thinking of ways to make development easier, and superdb was the result.

The Details

The Super Debugger builds upon F-Script, a Mac project by Philippe Mougin. F-Script has been around for probably close to a decade now, and it works as an object browser for Cocoa objects. Philippe refers to it as “a Finder for your objects”, which I think is a great description. It’s a programming environment with a Smalltalk-like syntax where objects can be inspected and messaged.

The project itself doesn’t appear to be maintained much anymore, and it was Mac-only until Github user “pablomarx” got a version of it running on iOS. Even though the iOS code had rotted since pablomarx ported it, the port was still a great accomplishment, though it was more of a proof of concept than anything else. The testbed was an iOS application with a text field and an output log. It showed that it worked, but it wasn’t exactly useful.

The technology had been around for a while, and yet nothing useful was coming of it. I thought about it for a while and decided my Hack Days project was going to make use of this technology.

So I started by modernizing the iOS port of F-Script, wrapped it up in a network service, used some Bonjour magic to find running instances of it on a local network, wrote a socket protocol between the F-Script interpreter and a Mac app, wrote a command shell for the Mac app and presto, the Super Debugger was born.

Even though it might sound like a lot, the real meat of the operation took only those two days at Shopify. The technology had existed for years, and yet all it took was a couple of days to make something tremendously useful out of it. It might sound kind of self-congratulatory, and it is, as I’m very proud of what I’ve made, but my point is that sometimes wonderful things are hidden under the word “just”. Sometimes there are brand new avenues we’d never even considered, and all that was missing was the tiniest of pieces.

The bigger moral of the story is just because what you’re adding to something doesn’t seem like much, or isn’t difficult to create, doesn’t mean it can’t have a profound impact on what you’re making. Slapping a network layer on an interpreter someone else wrote and adding a few stolen interaction tricks may sound like cheating, but it’s thinking like this we desperately need more of in this world.

The Day Doesn’t Start At Midnight

There are many apps which try to help you out by aligning some of their functions to happen on a per-day basis, whether it’s a reminder or a calendar event, or some other kind of task which has a day-bound relevance. This is a good idea dogmatically, but I’ve found all implementations fail in a pragmatic way: the day doesn’t start at midnight.

The best example of an app violating this is Things (both for Mac and iOS), which has a handy feature for recurring todos. The gist of the feature is “repeat this task every X days (or weeks or months, etc.) after I’ve completed it” with the idea being, I’d like a repeating task, but I only ever want to see it at most once in a list — if I haven’t completed it by the time it’s scheduled to appear again, don’t show it until I’ve marked it as complete and the proper time has elapsed.

In theory this works really well. I’ve got a task to “do some dishes” once per day, but if I happen to miss a day, I don’t get two todos the next day, I just stick with the one. Once I check it off, it recurs again the next day, where the day starts at midnight. Here’s where the problem is:

I stay up a little late at night and usually go to bed between midnight and 2AM. As I’m getting ready for bed, I’ll often review the day’s tasks in Things and mark off my stuff as completed, often things I’ve forgotten to mark as completed while I was going about my day. So let’s say it’s 1AM on Tuesday (technically this is Tuesday, but since I haven’t yet gone to bed, it’s really still Monday to me) and I mark off my recurring “do the dishes” todo. That’s great, and I expect to see the todo again tomorrow (during the daytime of Tuesday). But this is where my view of the situation diverges from that of the software: to me, “tomorrow” means “any time after I wake up and get dressed but before I go to sleep at night”, but to the software it means “any time after midnight”.

What ends up happening, because of our silly disagreement, is that Things thinks I’ve already marked the task as done for what I’d consider “the next day”, and so it won’t show me the task again until after the next midnight rolls around. In this case, I don’t see the task at all on Tuesday; it doesn’t show up until I start using the app Wednesday morning.

In this case it’s not so grave because, well, I’ll probably see the dishes and remember to do them anyway. But it’s still a pragmatic failure for software to behave like this. There are probably way more users who go to bed at 2AM than who wake up and start their day at 2AM. And yet our software almost always treats us as though we’re mechanically bound to clocks, as though our lives are grasped tightly by their hands.

A slightly better example of software handling this is Siri. If it’s a little after midnight on Monday (so technically Tuesday) and you say “Siri, remind me to do the dishes tomorrow morning”, Siri will respond with something to the tune of “Just to be sure, did you mean Tuesday or Wednesday?” This is a step in the right direction, but it’s still an extra step the person almost never needs to take.

How to solve this problem

The obvious first solution is to add a setting to your application which says “The day starts at X” and let the user pick a time. That works, but it still pretty much stinks: the user has to set it in every app which supports it; the right value may change over time as the user’s habits change (student life to working life to parenthood, for example); and many users won’t dive through the settings to designate a time anyway, so the program remains daft, treating the user as if they’re a clock.

The better solution is to infer what time the person starts and ends their day. It’s pretty easy to figure out from some simple usage statistics, namely which times of day the app actually gets used (treating weekends slightly differently, of course). If the app is used late at night, you learn the usage pattern and adjust the day cutoff to match. If the app doesn’t get used late at night, you learn nothing, but you don’t need to: the app doesn’t get used late anyway, so it doesn’t matter. A simple example of this kind of usage recognition can be found in Bret Victor’s Magic Ink, in the “Engineering inference from history” section.
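As a concrete sketch of this inference (the function name and the fallback hour are my own hypothetical choices, not from any shipping app), one simple heuristic is to bucket usage timestamps by hour and place the day boundary in the middle of the longest stretch of hours with no usage:

```python
from collections import Counter

def inferred_day_cutoff(usage_hours, default=4):
    """Infer the hour where a user's 'day' rolls over.

    usage_hours: hours (0-23) pulled from app usage logs.
    Returns the hour in the middle of the longest idle stretch,
    falling back to `default` when there's no signal.
    """
    if not usage_hours:
        return default
    counts = Counter(h % 24 for h in usage_hours)
    # Find the longest run of consecutive hours with no usage,
    # scanning the 24-hour cycle twice to handle wraparound past midnight.
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i in range(48):
        if counts[i % 24] == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    if best_len == 0:
        return default  # used around the clock; nothing to infer
    return (best_start + best_len // 2) % 24
```

For a user whose app launches cluster between 9AM and 1AM, the longest quiet stretch is the early morning, so the inferred cutoff lands around 5AM, and a todo checked off at 1AM correctly counts toward the previous day.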

If you have to, be smart by being stupid

At the very least, consider solving the problem by making the day end later than midnight. You won’t throw anyone off by ending the day at 3AM instead of midnight, and there’s no sense pandering to the edge case of people who start their day that early anyway. It’ll make your software work more like people do.
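In code, the “stupid but smart” fix is one line of date arithmetic. A minimal sketch in Python (the 3AM cutoff follows the suggestion above; the function name is my own invention):

```python
from datetime import datetime, timedelta

DAY_CUTOFF_HOUR = 3  # the "day" rolls over at 3AM instead of midnight

def logical_date(moment):
    """The calendar day this moment belongs to, for day-bound features
    like recurring todos: anything before 3AM counts as the previous day."""
    return (moment - timedelta(hours=DAY_CUTOFF_HOUR)).date()

# Checking off a todo at 1AM Tuesday still counts as Monday,
# so the task correctly recurs during Tuesday's daytime:
assert logical_date(datetime(2013, 4, 9, 1, 0)) == datetime(2013, 4, 8).date()
# By 9AM Tuesday, it's Tuesday for both the person and the software:
assert logical_date(datetime(2013, 4, 9, 9, 0)) == datetime(2013, 4, 9).date()
```

Every day-bound feature (repeat scheduling, “today” lists, daily badges) would compare `logical_date` values instead of raw calendar dates.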

The Programming Environment is the Programming Language

Software developers have really crappy tools. If we’re lucky, we’ve got some limited graphical tools for creating user interfaces and some form of rudimentary auto-complete, but our programs still live in text files, which amount to little more than digital pieces of paper. We want to augment programming languages with new IDEs and tools, but it’s often painful to graft these features onto existing languages. What we need are languages built around our tools, not the other way around.

The current best effort at better technical tools, in my opinion, is Apple’s Xcode, specifically the Interface Builder tool. As the name suggests, IB is a tool for creating user interfaces graphically. Instead of writing code to lay out your interface elements, you use IB to position them. You get the benefit of seeing what your app is going to look like as you’re working on it, without the need to stop and rebuild your project with every modification.

Putting aside my qualms with how Interface Builder really works in practice, I do think it’s a great idea. But it’s really not enough, because the interface elements laid out are pretty dumb — you can’t really interact with them much until you actually build the application — which stops pretty short of allowing you to feel how your application really works in practice. You see how it looks with skeletal data, but you don’t get to see how it works.

Another element of Interface Builder is Bindings (Mac only), which, in theory, let you wire your data up to interface elements without writing any code to do so. It’s another incredible idea in theory, but in practice it’s very difficult to get right and maddening to debug.

Why are these tools so hard to get right? Because we start with a programming language, in many cases developed decades ago under different constraints and conditions, and we try to graft modern tools and ideas atop it. In Objective-C’s case, the language has its roots in the 1980s (or the 1970s, via C). We’re trying to solder modern bits onto old harnesses, what Wolf Rentzsch calls bolting an air conditioner onto a go-kart. Sure, it’s possible, but the effect feels pretty bad. That doesn’t mean air conditioning is bad, it just needs to be installed in the right environment.

Language makers need to change their focus. Instead of trying to add nice things to the environment (IDE, tools, etc.) that fit the language, they should design a good environment for building software and then design the language around that. Rather than taking a decades-old programming language and grafting on an environment, I think it would be better to first design a programming environment and then build a language to suit it.

As an example, I’m going to list some things I believe would be good in such a programming environment, and although this list is neither perfect nor exhaustive, I hope it illustrates the point of how programming languages can be designed in the future.

  1. Programmers should be able to see their changes reflected immediately. I’m definitely not the first to believe in this principle, and I’ve made my own stabs at it with the Super Debugger. But it’s really tricky to get right with programming languages which weren’t designed with this principle in mind. Imagine how much more flexible the language could be if this were intended behaviour.

  2. The environment should allow for connecting data to graphical representations without the need for excessive glue-code. This is what Bindings on the Mac attempt to solve, but they themselves are glued on. The environment should support this from the ground up so that programmers can easily do the repetitive task of attaching their database or web service data model to a graphical display.

  3. The environment should allow for better traversal of program code than just classes in files. Groups of related functions and methods should be brought together as the programmer is working with them. The programmer shouldn’t have to hunt for method declarations or implementations, nor should they have to hunt for documentation on the functionality. This should be brought to the developer’s field of vision as it’s needed.

    Relying on plain text files for source code drastically reduces the flexibility of the programming environment. Instead, source could be stored in a smarter format, as the environment requires. We already rely heavily on our environments as it is, with long method names which require auto-complete and stub/generated functions we can never remember. And yet we feel ashamed when we open a plain-text editor and can’t remember how to code without the IDE. Why not embrace the IDE instead, rather than feeling ashamed of what we can’t do in a language-ignorant text editor?

    Features like auto-complete are great, when they work. But imagine how much better they would be if the language had been designed from the beginning to support them.

These are just examples, and many of them have existed as ideas for decades at this point; neither fact is what matters. The point I’m trying to make is that whatever the principles of programming environment design are, those principles should dictate how the programming language works. We should figure out the kinds of tools developers need to create exceptionally powerful programs, and then design programming languages to enable those tools. That doesn’t mean the IDE has to be written in the new language, just that the language should be designed with the IDE in mind.

Software is currently the best medium we humans have to express our explanations and explore our ideas. And yet the way we express these programs is limited to what are essentially digital versions of pieces of paper. It’s time we start building better ways to express ourselves with interactive media, and that means building the backbone language around the environment in which it lives.

If you find this idea intriguing, you should definitely come hear me speak at NSNorth in April 2013 in Ottawa. It’s going to be the start of something great.

Dr. Dobb’s Interview with Alan Kay. Wide-ranging interview with Alan Kay from July 2012. Some highlights:

Binstock: You once referred to computing as pop culture.

Kay: It is. Complete pop culture. I’m not against pop culture. Developed music, for instance, needs a pop culture. There’s a tendency to over-develop. Brahms and Dvorak needed gypsy music badly by the end of the 19th century. The big problem with our culture is that it’s being dominated, because the electronic media we have is so much better suited for transmitting pop-culture content than it is for high-culture content. I consider jazz to be a developed part of high culture. Anything that’s been worked on and developed and you [can] go to the next couple levels.

Binstock: One thing about jazz aficionados is that they take deep pleasure in knowing the history of jazz.

Kay: Yes! Classical music is like that, too. But pop culture holds a disdain for history. Pop culture is all about identity and feeling like you’re participating. It has nothing to do with cooperation, the past or the future — it’s living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from] — and the Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

On the Web:

Binstock: Well, look at Wikipedia — it’s a tremendous collaboration.

Kay: It is, but go to the article on Logo, can you write and execute Logo programs? Are there examples? No. The Wikipedia people didn’t even imagine that, in spite of the fact that they’re on a computer. That’s why I never use PowerPoint. PowerPoint is just simulated acetate overhead slides, and to me, that is a kind of a moral crime. That’s why I always do, not just dynamic stuff when I give a talk, but I do stuff that I’m interacting with on-the-fly. Because that is what the computer is for. People who don’t do that either don’t understand that or don’t respect it.

(I originally tried to link to the article on Instapaper but the link wasn’t public. If you use Instapaper, read it there so the article isn’t spread across four pages.)

What Every FaceTime Call Is Really Like



Mom is unavailable for FaceTime





[A blurry image of your mother appears]


[The image freezes; you hear no more sound]





[A blurry image of your mother appears]

“Hi, sorry about that. No, it happ—”

[The image freezes again, audio gets choppy]

“Hello? Hel—yeah I’m still here. No it’s just the internet. Are you downloading anything on your computer right now? No? Well maybe tr—”

[The video resumes and audio comes back]

“Oh there we go, hi! Yeah things are great how are you?”

[You both talk at the exact same time because the audio is lagging so hard]

“What? Can you repe—”

[You both do that again]

“Yeah sorry, I think it’s just—”

[The image freezes; you hear no more sound]




Mom is unavailable for FaceTime


[You put down your iPad, walk over to your phone, call your mother, and actually have a conversation with her, implicitly admitting to yourself and to her that sometimes newfangled technology is done for our own sake, long before it’s ready for people and the world they live in, and giving her one more reason to think she’s bad at technology, when in reality it’s the technology that’s bad at people. You’ll also realize, if only slightly more than before, that the next time you decide to add a “cool new feature” nobody was actually asking for, you need to reconsider whether it will actually fit into the way people live just yet.]

Why What You’re Reading About Blink Is Probably Wrong. Chrome engineer Alex Russell:

By now you’ve seen the news about Blink on HN or Techmeme or wherever. At this moment, every pundit and sage is attempting to write their angle into the announcement and tell you “what it means”. The worst of these will try to link-bait some “hot” business or tech phrase into the title. True hacks will weave a Google X and Glass reference into it, or pawn off some “GOOGLE WEB OF DART AND NACL AND EVIL” paranoia as prescience (sans evidence, of course). The more clueful of the ink-stained clan will constrain themselves to objective reality and instead pen screeds for/against diversity despite it being a well-studied topic to which they’re not adding much.


And that’s what you’re missing from everything else you’re reading about this announcement today. To make a better platform faster, you must be able to iterate faster. Steps away from that are steps away from a better platform. Today’s WebKit defeats that imperative in ways large and small. It’s not anybody’s fault, but it does need to change. And changing it will allow us to iterate faster, working through the annealing process that takes a good idea from drawing board to API to refined feature. We’ve always enjoyed this freedom in the Chromey bits of Chrome, and unleashing Chrome’s Web Platform team will deliver the same sorts of benefits to the web platform that faster iteration and cycle times have enabled at the application level in Chrome.

Computer Science Education Needs to Teach Us About Humans

Computer Science education at universities in North America is typically a mix of Computers (programming language concepts, formal languages and proofs, data structures and algorithms, electronic architecture, etc.) and Math (algebra and calculus, linear algebra, statistics, number theory). But I think this education misses broad swaths of what building software involves: making something to augment a human’s abilities.

Having a solid base in the Computer bits of Computer Science is essential as it enables the How of what a software developer makes. But it doesn’t shine any light on the Why a software developer is making something. Without knowing that, we’re often left shooting in the dark, hoping what we make is good for a person.

I’m proposing the education should focus less on mathematics and instead focus more on Human factors, specifically (but not limited to) Psychology and Physiology, the study of the human mind and the human body, respectively, and how the two parts interact to form the human experience.

By understanding the human body, we learn of its capabilities, and just as importantly, of its limitations. We learn about the ergonomics of our limbs, feet and hands, all of which inform the physical representations of how software should be made. We learn about a human’s capacity for sensing information, specifically from the eyes (like our ability to read, understand and parse graphical information like size, shape, and colour) and the hands (like our sophisticated dexterity and sense of tactility, texture, and temperature). Instead of pictures under glass, we’d be better equipped to design interactions that are richer in information.

By understanding the human mind, we learn how humans deal with the information received and transmitted from the body. We learn about how people understand (or don’t understand) the things we’re trying to show them on screen. We learn how people model information and try to represent our software in their minds. We learn that people represent some things symbolically and other things spatially, and we learn why that difference is important to building useful software.

We also learn how people themselves learn, how children are capable of certain cognitive tasks at certain ages, and how they differ, cognitively, from adults. This allows us to better tailor our software for our audience.

Psychology goes even further than teaching us how a person works inside: it also gives us the beginnings of how people work amongst themselves, and how people share their mental models with each other. Since humans are inherently social beings, we should be taking advantage of these details when we build software.

All of these details are crucial for building software to be used by people, and nearly all of them are ignored by the current mandatory parts of the Computer Science curriculum. We learn lots about the mechanics of software itself, but nothing about what we’re making. It’s like learning everything about architecture without ever having once lived inside a building. We build software with our eyes closed, guessing at what might be useful for another person, when there are libraries full of information telling us exactly what we need to know.

We need to stop guessing and we need to learn about who we’re building for.

Stikkit is a Neat Program That No Longer Exists. A web service that doesn’t exist any more, but sounds really cool. Merlin Mann wrote about it when it was still a thing:

As promised, I wanted to start sharing some of the reasons I’ve been digging Stikkit, so I thought I’d begin at the beginning: Stikkit’s use of “magic words” to do stuff based on your typing natural (albeit geeky) language into a blank note. There’s a lot more to Stikkit than magic words, but this is a great place to start.

More like this please!

News is Bad For You. Rolf Dobelli:

News is irrelevant. Out of the approximately 10,000 news stories you have read in the last 12 months, name one that – because you consumed it – allowed you to make a better decision about a serious matter affecting your life, your career or your business. The point is: the consumption of news is irrelevant to you. But people find it very difficult to recognise what’s relevant. It’s much easier to recognise what’s new. The relevant versus the new is the fundamental battle of the current age. Media organisations want you to believe that news offers you some sort of a competitive advantage. Many fall for that. We get anxious when we’re cut off from the flow of news. In reality, news consumption is a competitive disadvantage. The less news you consume, the bigger the advantage you have.

News has no explanatory power. News items are bubbles popping on the surface of a deeper world. Will accumulating facts help you understand the world? Sadly, no. The relationship is inverted. The important stories are non-stories: slow, powerful movements that develop below journalists’ radar but have a transforming effect. The more “news factoids” you digest, the less of the big picture you will understand. If more information leads to higher economic success, we’d expect journalists to be at the top of the pyramid. That’s not the case.

He adds, and I agree, that long-form, exploratory journalism is still very important and should exist. It’s time we had a method of expressing such inquiries in a way readers can better grasp, so they can evaluate the ramifications they present. Videos and slideshows are not enough.

(See also Aaron Swartz’s thoughts on the matter)

Linger for iOS

I’m going to get right down to it: you should buy Linger for iOS (here’s an App Store link). The app lets you explore the Prelinger Archives, a collection of short movies, ads, PSAs and propaganda from the 20th century all on your iPad (it works great for iPhone too).

Even if you’ve never heard of the Prelinger Archives before, you’re probably still familiar with the style of videos you’ll find there, the old black and white movies, showing assorted clips with a wholesome-sounding narrator. Think “Duck and Cover” or “Reefer Madness”, and you’ll get the idea.

The app, which requires iOS 6, is a perfect showcase for the content. From the welcome tour to the multiple ways to browse, you can tell the developer has put a lot of thought and care into the experience. It basically looks the way an iPad app from the nineteen-fifties would have looked, replete with poster themes and perfectly chosen fonts. This app nails the style for its content.

Once you pick a video to watch (the videos are usually short, but there are some longer ones in there, too), the presentation works just as you’d expect. Most of the videos are in black and white, although there are some colour films as well.

The app is fast, responsive, beautiful and clever. It’s a great way to learn about early film culture, and it’s a great contribution to the App Store. I wanted to find out more about the motivation behind it, so I interviewed the man behind it, Chuck Shnider.

Jason: Where did the idea for Linger come from?

Chuck: Linger was really a classic “scratch an itch” software project. I’d spent time off-and-on watching ephemeral films on my iPad, but was always frustrated by how difficult it was to browse online to find good stuff to watch. Some of those difficulties are rooted in limitations of simple webpages, while others were related to inconsistencies with the films’ metadata. Sometimes it boiled down to something as basic as a film being split into multiple parts, but there being nothing obvious to tell you that there were multiple parts, and where they could be found.

The iPad itself is very well-suited for the task of one person exploring a series of short films. At some point I just got sick of the hassle of watching the films on the web and started looking at what sort of open data I could get from archive.org, and started coding up an app.

Jason: How was the app developed (and how long/when did you work on it, etc.)?

Chuck: The app was developed over a period of 9 months as a side-project. Along the way, I also wrote a Mac app to process and analyze the raw metadata from Prelinger Archives. I mentioned earlier the inconsistencies with the source metadata. By applying some love to the data, users of Linger are spared some of that. However, there are few shortcuts when it comes to cleaning up that data. I do what I can with automation, but many of the corrections were applied by hand, based on errors that were discovered by hand.

Jason: Did you build on any existing 3rd party technologies or do anything exciting with Apple’s frameworks?

Chuck: Most of the third-party stuff I use is pretty mainstream: AFNetworking, mogenerator, TransformerKit, HockeySDK. Beyond the more common libraries, I use a third-party library called KSScreenshotManager to help automate production work for App Store screenshots, and also for images which form the basis of the app’s launch images.

The app requires iOS6, and makes heavy use of UICollectionView, Auto-Layout, and Storyboards.

For graphic assets, I’m using stuff from The Noun Project and Subtle Patterns. I also spent a considerable amount of time searching for suitable fonts which had licenses that permitted embedding within a mobile app.

These aren’t technologies per se, but I’d be remiss if I didn’t also mention the great support I’ve received from Rick Prelinger, plus the video and metadata hosting provided by the Internet Archives. For an individual developer to ship an app like this, it’s essential to find a way to host all the video content for little or no cost.

Jason: Why are you highlighting the Prelinger archives? Why are they important for people to see?

Chuck: The heyday of ephemeral films was really from the 1930s to 1960s. This was also the period where corporations and governments really learned to use moving images to influence public opinion through advertising, education, and propaganda. For anyone who is a student of media literacy, consumer society, or 20th century American history, I think there is much to learn from watching these films. If you like kitsch, of course there is tons of that. Watch a little longer, though, and it’s almost inevitable that deeper themes come to the surface.

There are a few large collections of ephemeral films online. Prelinger Archives was particularly suitable for making into an app because the collection is well-researched, and they have clear terms of use for both the films and associated metadata.

Jason: What do you imagine for the app in the future?

Chuck: In the near-term, I have a few features planned that are “creature comforts” for more habitual users. Beyond that, I plan to focus on helping new users find interesting films to watch. There is definitely a lot of room for improvement there, and I think it would help new users to become the sort of habitual users who end up recommending the app to friends, etc.

Jason: Do you have or plan to have any kind of analytics in the app to figure out what viewers are watching? It seems like that might help to fuel more people to discover new gems in the app.

Chuck: No formal plans, but if a large-enough community of viewers does form to make the numbers meaningful, then it is something worth looking into. I’d also want to feel like the extra value from “top viewed films” was worth it to users in exchange for the anonymized usage data they would need to provide. With an app like this, I think of the user experience as more like visiting a library than watching videos on YouTube. What films you are viewing should be considered private, unless you decide to share on-purpose.

On a related vein, I’m looking into creating a venue where I can write a bit about films I’ve personally found interesting. One side-effect of developing an app like this is you get to watch a lot of the films. I haven’t settled on a format yet, but it may start out as a blog, and in time also incorporate the blog content directly into the app as well.

You should check it out.

Linger Trailer. When I published my review of Linger the other day, I neglected to link to the trailer, so I’m doing that now. It’s so well done, it deserves a link of its own.

Almost Nothing Is Impossible

There’s a common superlative, one you hear especially during the aspirational period of a high school Guidance Counsellor meeting: “Anything’s possible!” And as you grow up, you learn to filter out that sentiment. You learn there are limits; you learn there are things in the universe which seemed obvious but held hidden catches. Maybe you learned this literally and ran into a sliding glass door. It happens.

Learning limits is an important method of survival, but becoming a slave to those limits can turn you from living to coping. It can do more harm than good.

If you’re a software developer, you’re in a lucky place, because almost anything you can imagine is possible. Yes, there are limits to what hardware and software can currently do, and there are limits to what they can do eventually, too. But by and large, what is possible with a computer is nearly limitless.

If you accept this, then new things start to become quite clear to you. You begin to see software as an illustration of your thought, as a way to explore the logic in your head. In a book you can write what you’re thinking, but in software you can not only express what you think, you can see how that thought holds up to scrutiny. This is a small and simple truth with vast and curious implications. It means your thoughts can not only be heard, but explored, by generations both present and future.

When you don’t accept the possibilities of software, you become limited to the world as it currently exists. We have so many inventions in software which exist merely because the inventor didn’t know they were impossible.

When asked how he had invented graphical user interfaces, vector drawing, and object oriented displays all in one year, Ivan Sutherland replied “Because I didn’t know it was hard”. He had no preconceived notion of what was possible, so there was no hindrance. He just invented what he felt he needed in order to express his ideas.

Bill Atkinson famously didn’t know overlapping windows were hard, but he invented them anyway. As an early Apple employee, Bill was on the famed Xerox PARC visits, where he observed early versions of the Alto computer. He thought he’d seen a system with overlapping windows, so when he had to design a graphics system for the Mac, he built those too. Little did he know, the Alto never had overlapping windows to begin with. He invented them because he was under the impression they were possible. Imagine!

Software today exists with much already established before us, but we’re still in the Incunabula stage. We’re still establishing the rules. Although there might appear to be much precedent in how things work, the truth is we’re in the very early stages. The rules are mutable, and many are yet to be written. We’re at a point where it’s critical to continue exploring new ideas, even when there’s only the slightest tinge of possibility; more often than not, the idea turns out to be not only possible but a superior approach.

Don’t ever let what you think you know dictate what you feel might be possible.

Bret Victor’s “Stop Drawing Dead Fish” Talk. Bret Victor:

People are alive — they behave and respond. Creations within the computer can also live, behave, and respond… if they are allowed to. The message of this talk is that computer-based art tools should embrace both forms of life — artists behaving through real-time performance, and art behaving through real-time simulation. Everything we draw should be alive by default.

Drawn. If you’re looking for an alternative way of interacting with drawn artwork from Bret’s talk, here it is.

We Need A Standard Layered Image Format. Gus Mueller (who knows a thing or two about image editors):

There’s my vote. Acorn has been using SQLite as its native file format since version 2.0, and it has been wonderful. When writing out and reading in an image I don’t have to think about byte offsets, I mix bitmap and vector layers together in the same file, and debugging a troubled file is as simple as opening it up in Base or your preferred SQLite tool. This sure beats opening a PSD file in a hex editor to figure out what’s going on.

Dr. Alan Kay on the Meaning of “Object Oriented Programming”. A friend and I were talking about Kay’s original intentions for OOP the other day, so I thought this link might be interesting to others, as well. It turns out, OOP is a lot less about encapsulated data and methods on the data, and a lot more about messages between “little computers”:

The original conception of it had the following parts.

  • I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).

[…] OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.
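As a toy illustration of that definition (my own sketch in Python, not Kay’s code), here is “only messaging” with extreme late-binding: the sender never calls a method directly, the receiver looks up its response at the moment the message arrives, and unknown messages fall through to a Smalltalk-style doesNotUnderstand: hook instead of crashing:

```python
class Cell:
    """A 'little computer' in Kay's sense: state stays hidden inside,
    and the only way in is a message."""
    def __init__(self):
        self._state = {}  # local retention and hiding of state

    def send(self, message, *args):
        # Extreme late-binding: the handler is looked up at send time,
        # not compile time, so behaviour can change while running.
        handler = getattr(self, "msg_" + message, None)
        if handler is None:
            return self.does_not_understand(message, *args)
        return handler(*args)

    def does_not_understand(self, message, *args):
        # Analogue of Smalltalk's doesNotUnderstand: fallback.
        return f"<{message}?>"

class CounterCell(Cell):
    def msg_increment(self):
        self._state["n"] = self._state.get("n", 0) + 1
        return self._state["n"]

c = CounterCell()
c.send("increment")
print(c.send("increment"))  # → 2
print(c.send("reset"))      # → <reset?>  (handled gracefully, not a crash)
```

The point of the sketch is the shape, not the syntax: no caller reaches into `_state`, and every interaction is a message the receiving “cell” is free to interpret, forward, or refuse.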

Thinking Like The Greats

I get really fired up when I think about one of The Greats, one of the people or teams of people in my field who I think are truly exceptional, who have contributed substantial work and who are rewarded copiously for it. They’re loved by some and reviled by others, but the common quality is they change things.

These are my heroes, the ones who make me want to get out of bed every day and be better than the day before at what I do. They set a bar for me, and I don’t want to be just like them, but I want to be great in my own ways. I’m not looking for fame; I’m only looking to be one of the Greats. I’ve been studying them for a while now, and here’s what I’ve picked up so far about what they all have in common:

  1. They have Powerful Ideas.

  2. They act on those ideas.

In the simplest, most essential distillation, that’s what they do.

A Powerful Idea isn’t just a good idea, but instead one that lets us see farther. John W. Maxwell has this to say:

What makes an idea “powerful” is what it allows you to do; […] Powerful ideas are those that are richly or deeply connected to other ideas; these connections make it possible to make further connections and tell stories of greater richness and extent (p 187).

These are ideas like Hypertext, the Graphical User Interface, Cut Copy and Paste. Things that are simple in their own respect, but enable a tremendous new reach for humanity. They are not goals or destinations, but instead vehicles for getting us to the next step.

These ideas often don’t appear in dreams or apparitions but are instead culminations of years of dedicated study across a diverse set of fields. Alan Kay studied biology in university, which enabled him to see and create a design for Object Oriented Programming. He modeled computer programs after living cells. Many of Bret Victor’s great insights arise from an application of Edward Tufte’s information visualization principles: Show the Data and Show Comparisons.

When you study the powerful ideas of any field, you’ll almost always see the ideas emerging from analogy and synthesis of ideas from many other, seemingly unrelated fields. The insights often become obvious once you start looking past your own domain.

But a powerful idea is often not enough. Vannevar Bush’s As We May Think described the Memex, a mechanical, computerized contraption resembling a steampunk lovechild of the World Wide Web and Wikipedia, in 1945, and yet Bush’s work remained largely in obscurity for nearly fifty years. Why? Because the ideas were ahead of the technology of the time and couldn’t be built. That’s not a failing of the quality of the invention (da Vinci could hardly build any of his own designs in his time, either), but it strikes an important chord: to be a Great, you really need to be able to build it.

I think it’s critical to get these ideas into some tangible form, whether it’s a working prototype or a full-fledged product. People need to be able to see and use it, because an idea isn’t set in stone. It needs to be living and evolving. There needs to be a discourse, and that’s certainly part of what makes the Greats so great: they participate in this discourse.

These aren’t the only things the Greats seem to do, but they are the most fundamental, and everything else I’ve noticed seems to emerge from them. They’re important traits to know, but the most important thing isn’t to set out to emulate them. It’s important not to walk in their footsteps but instead to stand on their shoulders.

Thoughts on Thoughts on Bret Victor’s “Learnable Programming”

Bret Victor published a long essay entitled “Learnable Programming” in September 2012 in which he described principles for creating both better programming languages and better programming environments for beginners and experts alike. But unfortunately, not everyone agrees with his stance.

Many expert programmers still exhibit a kind of machismo when it comes to programming, which I find does more harm than good. Instead of acting as a voice of skepticism, it comes off as a voice of elitism, with a disregard for how difficult it is for beginners to learn to write programs, how difficult programming remains even for experts, and how important a computer-literate population is.

Mark Chu-Carroll objects to Bret’s stance, and to the idea of programmers making it hard for beginners to program on purpose:

For some reason, so many people have this bizzare idea that programming is this really easy thing that programmers just make difficult out of spite or elitism or clueless or something, I’m not sure what. And as long as I’ve been in the field, there’s been a constant drumbeat from people to say that it’s all easy, that programmers just want to make it difficult by forcing you to think like a machine. That what we really need to do is just humanize programming, and it will all be easy and everyone will do it and the world will turn into a perfect computing utopia.

I don’t think Bret is arguing that at all. He’s not saying programmers have intentionally made it difficult for outsiders to join our circles, but that, well, it just is hard for outsiders to join. That instead of explicitly not doing our best, we have been doing our best but that our best isn’t good enough, and the sooner we can admit that and start improving, the better. This is not a bad thing. Improvement is what programmers do all day long, so why not also improve programming itself?

Mark continues:

To be a programmer, you don’t need to think like a machine. But you need to understand how machines work. To program successfully, you do need to understand how machines work - because what you’re really doing is building a machine!

Again, I don’t think Bret is advocating not understanding how a machine works. In fact, I think he’s advocating quite the opposite: a better programming environment and language can enable a new generation of programmers to visualize and understand their programs better than ever before. I’ll return to this point in a moment.

John Pavlus, writing for the MIT Technology Review, backs this up:

Victor thinks that programming itself is broken. It’s often said that in order to code well, you have to be able to “think like a computer.” To Victor, this is absurdly backwards—and it’s the real reason why programming is seen as fundamentally “hard.” Computers are human tools: why can’t we control them on our terms, using techniques that come naturally to all of us?

The main problem with programming boils down to the fact that “the programmer has to imagine the execution of the program and never sees the data,” Victor told me.

Or as Bret wrote in his essay:

Maybe we don’t need a silver bullet. We just need to take off our blindfolds to see where we’re firing.

Neil Brown retorts that while showing visualizations to newcomers is useful, they become a nuisance to experts:

One of the first things beginners do in any area is learn the terms, after which I believe the labelling of program constructs becomes annoying rather than helpful. We wouldn’t have a mouse-over helper in Maths saying ” ‘+’ is the symbol meaning add two numbers” or in French saying “Je means I” — you learn it early on, quite easily, and then you’re fine. The point of the notation is to express concisely and unambigiously what the program does. I can understand that the labels are a bit more approachable, but I worry that for most cases, they are not actually helpful, and very quickly end up unwieldy.

But again I feel like this is missing the point. Labels in the programming environment are really just a stepping stone, one stop on the road to being able to see and understand what a program is doing, but not the only one. Labeling the environment is one thing, but the concept extends further, letting experts reach higher ground. Sure, experts already know the syntax and probably most of the library functions too. Great: now that can be trivialized, and even better, new and more specific program parts can be visualized. Now things specific to the application can be labeled and explained, in context, for all developers of a given project.

Neil continues on other topics of visualization:

I propose that visualisation doesn’t scale to large programs. It’s quite easy to visualise an array of 10 elements, or show the complete object graph in a program with 5 objects. But how do you visualise an array of 100,000 elements, or the complete object graph in a program with 50,000 objects? You can think about collapsible/folding displays or clever visual compression, but at some point it’s simply not feasible to visualise your entire program: program code can scale up 1000-fold with the tweak of a loop counter, but visualisations are inherently limited in scale. At some point they are all going to become too much.

I think this is a very narrow-minded way to approach Bret’s essay. As people who currently have to write code blindly, of course we’re going to have a hard time coming up with ways to visualize our data. But fortunately for us there’s a whole field devoted to this very problem: Data Visualization (anyone interested in learning more about this topic absolutely should read the works of Edward Tufte). As programmers, we’re bad at visualizing data because we’ve never thought of it as a necessary skill. But once our eyes are opened to the benefits of data visualization, not only does it stop seeming impossible, it starts to seem necessary.

Neil thinks we don’t have to see to understand:

Someone once proposed to me that being able to create a visualisation of an algorithm is a sign of understanding, but that understanding cannot be gained from seeing the visualisation. Visualisation as a manifestation of understanding, rather than understanding as a consequence of visualisation. I wonder if there’s something in that?

I disagree, and believe there’s much to be gained from understanding the relationship between seeing and understanding a concept. Alan Kay, building on the work of Piaget and Bruner, had an insight which he summarized as follows:

Doing with Images makes Symbols.

This is a relationship between three human mentalities, where we work with the body, the visual system, and the symbolic mind in different but complementary ways. These act as a continuum of thought, and interaction and movement within that continuum is essential. So to gain real understanding of something on the symbolic level, it’s much more natural if you not only have images to work with, but can actually interact with those images as well. This is one of the essential, founding principles of the modern graphical user interface, a fact which is lost on almost all of its users.

Neil concludes his argument:

I like the blindfold metaphor, because it fits with our understanding of expertise: “he can do that with his eyes closed” is already a common idiom for expertise in a task. Beginner typists look at the keys. Expert typists can type blindfolded. Therefore at some point in the transition from beginner to expert typist you must stop looking at the keys. So it is with programming: you must reach a stage where you can accept the blindfold.

Which unfortunately also brings to mind the metaphor of the blind leading the blind. Lots of experts claim to be able to do something “with one hand tied behind my back,” but none would elect or suggest always working under such conditions. Nobody should proudly hold themselves back from doing the best work they possibly can! Accepting blindfold conditions for beginners and experts alike is accepting the current state of programming as the best it can be, without any hope of improving the situation for generations to come.

At the end of the essay, Bret says what I believe is the real crux of his argument:

These design principles were presented in the context of systems for learning, but they apply universally. An experienced programmer may not need to know what an “if” statement means, but she does need to understand the runtime behavior of her program, and she needs to understand it while she’s programming.

Our society has deemed book literacy an essential skill because books are a key medium in which our society thinks. Computers can offer an even better medium for society to think in, but only if we strive for computer literacy as well. And as with written literacy, this means both reading and writing. Expecting an entire society to write programs the way “experts” write them today is ludicrous and counterproductive. If we expect members of society to be computer literate, then we must create for them an environment where thinking can be expressed even better than on paper*.

*Yes, this is one reason why Cortex has yet to be released. I’ve yet to solve the problem of understanding and visualizing a Cortex plugin, and without that, it’s cripplingly difficult to create useful programs. This needs to be solved, because it’s irresponsible to expect developers to imagine it all in their heads.

We Don’t Need A New Apple Hardware Platform

It’s basically all you hear about in the Apple nerd press: “Apple’s working on a new device that’s going to revolutionize something or other”. It might be a watch, it might be a television, we don’t know what it is but all we know is somehow the device — the hardware — is going to make our lives better. I think that’s a myopic outlook that really offers nothing novel other than a new piece of metal and plastic to hold or gawk at. I don’t think we need new hardware.

Joe Cieplinski also ponders the merits of the rumoured “iWatch”:

What Apple does is identify a category of product in which there’s a lot of potential, where there will clearly be an audience, but where there’s currently no product that doesn’t completely suck. Then it makes a product that doesn’t suck in that category and mops up. It’s a beautiful strategy. And it happens to work.

So where are the crappy wrist computers? There’s the Pebble, I guess. A scrappy Kickstarter project that got some of us nerds excited last year. It’s severely limited in features and not altogether fashionable. So there’s potential for ass-kicking, no doubt. But is that all there is out there today? Where’s Microsoft’s wrist computer? Google’s? Sony’s? Samsung’s?


My point is, if this were the Next Big Thing, wouldn’t others be trying to do it already? Where’s the clear existing audience Apple wants to tap?

I agree with Joe: an “iWatch” certainly doesn’t match the pattern Apple usually follows, and for good reason. Most customers aren’t asking for one, and a new micro-device which (probably) runs iOS offers almost nothing beyond the hardware we already have.

I don’t think Apple needs any new hardware at all in order to bring the world innovative new products; instead they need to provide us with new ways of working with software.

Chuck Skoda is also sick of only hearing about hardware:

If you have an ear turned to the Apple news beat, it seems as though new hardware product launches are all anyone cares about. While actually, software is responsible for an overwhelming majority of our experience using Apple platforms. This fact has been deemphasized by the Apple community over the last few years as we rush to see the next new device for our pockets, and it’s about time software gets its share of the attention.


Software is the real frontier on our new mobile platforms. Apple’s new hardware breakthroughs come on the order of decades, not years. Yes, I’m judging iPhone and iPad as a single line of innovation, because that’s how it really shakes out. Do the platforms serve different needs, yes, but they come from the same core ideas and design compromises. If you’re waiting for a watch to come change your life, you might as well buy Google Glass (is that supposed to be plural, I can never tell) and get it out of your system.

Whether or not Apple continues to release new hardware platforms is still unknown, but my disdainful guess is they probably will keep releasing gizmos and ignoring the bigger picture of the software that runs on them. It’s what people seem to care about, and it’s what sells in the press.

And why do we care so much about the hardware anyway? I think it’s because, nerds though we may be, it’s still much easier for us (and especially for non-nerds) to understand something physical than something abstract like software. Physical things are tangible, but they ultimately depend on the abstract. Every physical invention we know of through human history arose from a mental, abstract thought. And the best of them (written language, the printing press, the World Wide Web, and even in some regards the handaxe) allowed for expanded thought and new physical inventions in turn. None were purely physical.

And a technological society based solely around physical devices is one that lacks imagination to truly take advantage of all those lovely hardware platforms anyway. It would be like a literary society obsessed with printing presses and cover stock. And yet that’s exactly what we expect of Apple and Google and Facebook and all the other tech companies.

I’m not saying there is no room for hardware improvements, either evolutionary or revolutionary. I think it’s great for Apple to continue iterating on the Mac, iPhone, and iPad, and to keep bringing us better battery life, performance, and graphics. And I think there are still many more revolutionary improvements which can be made to products of their ilk: things like print-resolution displays (Retina displays are a great step, but they still pale in comparison to the information density we expect from a printed book or newspaper); light, thin, and flexible computers that can be carried around and manipulated as easily as paper; and tactile interfaces, so that we can make better use of our extremely dexterous and sensitive hands and fingers when exploring software.

But all of these hardware advancements should come to facilitate the software, not to sell more hardware or to fulfill some science fiction pipe dream. It’s not time to stop thinking inside the box or outside the box. It’s time to stop thinking about boxes altogether.

“Sometimes We Kick Tires. Sometimes We Buy a Car”. Joe Cieplinski, writing about App Store trials in response to Marco Arment:

I just want my phone and my iPad to do a lot more than “apps-as-entertainment” allow them to do, too.

We’re not seeing a more sophisticated level of software on iOS not because the iPad is a weak computer. Not because touch interfaces are toys. But because the economics of the App Store make sustaining such an app near impossible. It’s simply not worth the investment.

Exactly. If you charged 50 bucks for an app that actually did something, you’d probably lose a lot of sales versus selling it for 99 cents. But I don’t think software developers should let that scare them away from making sophisticated apps. Short comic strips probably get a much larger distribution than novels, but that doesn’t mean novels shouldn’t still be written.

“Getting to Simple”. Jonathan Edwards, creator of the Coherence programming language, on why he thinks programming is difficult:

There is one gigantic problem with programming today, a problem so large that it dwarfs all others. Yet it is a problem that almost no one is willing to admit, much less talk about.


Too goddamn much crap to learn! A competent programmer must master an absurd quantity of knowledge. If you print it all out it must be hundreds of thousands of pages of documentation. It is inhuman: only freaks like myself can absorb such mind boggling quantities of information and still function. I wager that compared to other engineering disciplines, programmers must master at least ten times as much knowledge to attain competence.

I agree. There are so many things you have to learn in order to get anything “on the page” for any kind of programming. The thought of teaching any of my non-programmer friends or relatives how to write even a simple iPhone app makes me shudder. There are so many necessary parts to deal with before any real work can be done.

Thankfully, there are some other languages which involve significantly less up-front cost to get something onto the page, but in order for a newcomer to understand what they can put on the page, they’re still limited by needing to look it all up.

Jonathan suggests how to fix this:

By far the most effective thing we can do to improve programming is: Shrink the stack!

I am talking about the whole stack of knowledge we must master from the day we start programming. The best and perhaps only way to make programming easier is to dramatically lower the learning curve.


To shrink the stack we will have to throw shit away.

I agree we need to lower the learning curve by requiring less of newcomers to get started, but I don’t think this necessarily comes from eliminating things. I don’t think he’s suggesting we remove features in the sense of what a language can ultimately express, but rather cruft like vestigial APIs. Abstracting that away is fine, but I still think it misses the mark a little bit.

That would be like trying to get more people interested in writing fiction by either removing words from the vocabulary or by creating new metaphors/symbols for complex ideas. Creating new metaphors for complex ideas is a great skill and tool for writing, but it’s not necessarily one that makes writing itself easier.

I think one of the keys to creating a society where everyone can program is to change the nature of what it means to write a program. We need to make it possible for people to express their intent in a more natural way. When humans don’t know what a word means, they infer it from the surrounding language. When humans don’t have a word for a certain meaning, they coin one to fill the gap. Why can’t programming be so natural?

“Release Notes” Podcast. Joe Cieplinski on his and Charles Perry’s new podcast:

A little while back my friend Charles Perry and I decided to try our hand at putting together a podcast. While we’re fully aware there are lots of great tech podcasts out there vying for your precious listening time, we thought together we could offer our own spin on things and add a bit more to the conversations going on in the independent iOS and Mac development communities.

I’m a big believer in giving back to the community in any way I can. While my occasional rants on this blog are one of my favorite ways to do that, I also thought maybe it was time to start using my physical voice as well as my internal one. Plus, having a discussion with another developer who might actually disagree with me on occasion could certainly be interesting and beneficial to shaping my views. Charles is a really smart, opinionated guy, so hashing out these topics with him made perfect sense to me.

In the first episode they discuss tech conferences, and I was nodding my head in agreement the whole time.

A Review of “Lean In”. Lydia Krupp-Hunter on Sheryl Sandberg’s “Lean In”:

This book is incredibly empowering, but also terrifying in that Sheryl confirms the vast majority of my fears in my career. It’s frightening because to have my fears enumerated and validated by such a successful woman, along with an equal amount of incredible advice for combating these concerns and succeeding in our chosen careers leaves little reason to not confront them head-on. She confirms the ramifications of female success that are easy to imagine for any woman who was bullied for good grades in school or who has ever watched a comedy movie about a working woman trying to ‘have it all’. She confirms that success for women will make us less likeable, and that we underestimate ourselves, and that we pass on opportunities that men with the same skills would seize. Read this book and Sheryl Sandberg will effectively deny you of the option to let your fears control any of future decision making.

This sounds like mandatory reading for people of any gender in our industry.

Additional Notes on “Drawing Dynamic Visualizations”. Bret Victor:

Last week, I released a talk called Drawing Dynamic Visualizations, which shows a tool for creating data-driven graphics through direct-manipulation drawing.

I expect to write a full research report at some point (at which I’ll make the research prototype available as well). In the meantime, here is a quick and informal note about some aspects of the tool which were not addressed in the talk.

Top Ten Things They Never Taught Me In Design School. From a Michael McDonough talk. Some of my favourites:

5. Start with what you know; then remove the unknowns.

In design this means “draw what you know.” Start by putting down what you already know and already understand. If you are designing a chair, for example, you know that humans are of predictable height. The seat height, the angle of repose, and the loading requirements can at least be approximated. So draw them. Most students panic when faced with something they do not know and cannot control. Forget about it. Begin at the beginning. Then work on each unknown, solving and removing them one at a time. It is the most important rule of design. In Zen it is expressed as “Be where you are.” It works.

Getting something “onto the paper” is an under-appreciated tool.

9. It all comes down to output.

No matter how cool your computer rendering is, no matter how brilliant your essay is, no matter how fabulous your whatever is, if you can’t output it, distribute it, and make it known, it basically doesn’t exist. Orient yourself to output. Schedule output. Output, output, output. Show Me The Output.

I’ve got two thoughts on this:

  1. Dissemination trumps innovation nearly every time. You might have invented the greatest thing ever, but if someone else can get out their lesser invention to more people, it’s going to beat you out. I don’t think this is really what the above quote is referring to, but it reminded me of this.

  2. Get in the habit of regular “releases”, whether this is actually releasing your product, or just checkpoints, or even just having a weekly or daily structure. Aim for completion on this schedule and get in the habit of getting something “out”.

(via Ryan McCuaig)

Dictionary of Numbers. Glen Chiacchieri:

In a blog post, the meteorologist Dr. Jeff Masters talks about the largest US wildfires of 2012. Masters mentions that the largest fire burned about 300,000 acres before it was contained. I have no idea how much 300,000 acres is or what types of things are similar sizes and I suspect few other people do, either. But we need to understand this number to answer the obvious question: how much of the United States was on fire? This is why I made Dictionary of Numbers.

I noticed that my friends who were good at math generally rely on “landmark quantities”, quantities they know by heart because they relate to them in human terms. They know, for example, that there are about 315 million people in the United States and that the most damaging Atlantic hurricanes cost anywhere from $20 billion to $100 billion. When they explain things to me, they use these numbers to give me a better sense of context about the subject, turning abstract numbers into something more concrete.

When I realized they were doing this, I thought this process could be automated, that perhaps through contextual descriptions people could become more familiar with quantities and begin evaluating and reasoning about them. There are many ways of approaching this problem, but given that most of the words we read are probably inside web browsers, I decided to build a Chrome extension that inserts human explanations of numbers into web pages.

Cancer and Startups. Powerful and moving words by Chris Granger of LightTable:

I’ve never met a stronger person. She has lasted through doses of poison that would’ve easily killed any one of us “healthy” people, and she has done so with a degree of poise that is truly unfathomable. In our little startup world, the words tenacity and perseverance are thrown around a lot, but in that context they seem hollow and largely meaningless. Tenacity is far more than simply making it through tough times, and it’s not just a matter of finding a way “back to good.” Kristie has shown me that tenacity comes from living for a purpose, from believing in something so fully that it keeps you alive through six rounds of injecting drain cleaner into your veins. By that definition, I haven’t seen much tenacity in the Silicon bubble many of us call home. […]

I’m doing this because I believe that this is the greatest contribution I can make.

I could’ve become a doctor. All signs pointed to me likely being a very good one. In doing so, I would have gone to work and done my best to save lives every day. In that context, how is some programming environment a greater contribution to the world? Truthfully, it wouldn’t be if I just set out to build an IDE. But that’s not what I did - Light Table is just a vehicle for the real goal. While an IDE probably won’t directly save someone’s life, the things people are able to build with it could do exactly that. My goal is to empower others, to give people the tools they need to shape our lives. Instead of becoming a doctor, I have an opportunity to improve an industry that is unquestionably a part of the future of all fields. Software is eating the world and analytical work is at the core of advances in medicine, hard science, hardware… Human innovation throughout history has been driven by new tools that enable us to see and interact with our mediums in a different way. I’m not building an IDE, I’m trying to provide a better foundation for improving the world.

It’s something I think about a lot too, although thankfully not under such tragic circumstances. But it’s important for every software developer to consider what impact they’re having on the world. It’s important to consider whether what I’m doing is making the best contribution to the world, or whether I’m just following trends and making a buck.

I’ll probably never write software for medical patients, and I’ll probably never write software which lands a rocket full of people on Mars. But if I can write software that helps someone who will do those things, then I will have done my job. If I can enable a scientist or a researcher or even enable a child to express creativity or ideas more clearly, then I will have made my contribution.

The question we should all ask ourselves is: Am I doing that?

Whey Too Much: Greek Yogurt’s Dark Side. Justin Elliott:

Twice a day, seven days a week, a tractor trailer carrying 8,000 gallons of watery, cloudy slop rolls past the bucolic countryside, finally arriving at Neil Rejman’s dairy farm in upstate New York. The trucks are coming from the Chobani plant two hours east of Rejman’s Sunnyside Farms, and they’re hauling a distinctive byproduct of the Greek yogurt making process—acid whey.

This isn’t just the most beautiful farming-related website I’ve ever seen, it’s also one of the most beautiful websites I’ve seen, period.

Allo?. Jean Jullien:

“Allo?” is my first solo show in London, at Kemistry Gallery. Intrigued by everyday life and human interaction, Allo? explores our social and asocial behaviours, the relationship between people and how we communicate with one another.

Don’t Use UISwipeGestureRecognizer. Ash Furrow:

When using gesture recognizers, it is almost always far, far better to use UIPanGestureRecognizer than UISwipeGestureRecognizer because it provides callbacks as the gesture takes place instead of after it is said and done.

If your app feels kind of wooden, this might be one reason why.
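To make the difference concrete, here’s a minimal sketch of the continuous approach (the cardView property and handler name are my own inventions, not from Ash’s post): a pan recognizer fires on every movement, so the view can track the finger the whole way rather than reacting only after the swipe finishes.

```objc
// Hypothetical view controller methods: drag a card view with the finger.
// UIPanGestureRecognizer delivers callbacks continuously as the touch moves.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePan:)];
    [self.cardView addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // Move the view with the finger on every callback, not just at the end.
    CGPoint translation = [pan translationInView:self.view];
    self.cardView.center = CGPointMake(self.cardView.center.x + translation.x,
                                       self.cardView.center.y + translation.y);
    [pan setTranslation:CGPointZero inView:self.view];

    if (pan.state == UIGestureRecognizerStateEnded) {
        // The gesture's velocity lets you finish with a matching animation.
        CGPoint velocity = [pan velocityInView:self.view];
        NSLog(@"ended with velocity %@", NSStringFromCGPoint(velocity));
    }
}
```

A UISwipeGestureRecognizer would only call you back once, after the swipe completed; the pan version lets the interface respond the whole way through, which is what makes it feel direct instead of wooden.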

Use Class Methods for Cocoa Singletons

Here’s a common pattern I see all the time in Cocoa development involving Singletons (let’s put aside any judgement as to whether or not the Singleton pattern is a good one and just roll with this for a moment): the singleton class Thing exposes a class method called +sharedThing which returns a shared instance of the class, and then has a bunch of instance methods to do real work. Here’s what the interface might look like:

@interface Thing : NSObject

+ (instancetype)sharedThing;
- (void)doYourStuff;
- (void)setGlobalToggleEnabled:(BOOL)enabled;

@end

It’s all your standard fare. When your client wishes to use it, you end up with the rather silly looking:

[[Thing sharedThing] doYourStuff];
[[Thing sharedThing] setGlobalToggleEnabled:YES];

Every time I want to do something with the singleton, I’ve got to first request it from the class, then I send that instance a message. It’s straightforward enough, but it gets tedious real quick, and it begins to feel like a part of the implementation is leaking out.

When I use a singleton class, I shouldn’t really have to care about the actual instance. That’s an implementation detail and I should just treat the whole class as a monolithic object. I’m sending a message to the House itself, I don’t care what houseling lives inside.

So instead, I’d recommend hiding the sharedWhatever away from clients of your API. You can still just as easily have a shared, static instance of your class, but there’s no need for that to be public. Instead, give your class’s consumers class methods to work with:

@interface Thing : NSObject

+ (void)doYourStuff;
+ (void)setGlobalToggleEnabled:(BOOL)enabled;

@end

@implementation Thing

+ (instancetype)sharedThing { /* return it as usual */ }

+ (void)doYourStuff {
    Thing *thing = [self sharedThing];
    thing.privatePropertyCount += 1;
    // etc.
}

@end

If your singleton class needs to store some state (and please try really hard to avoid storing global state), you can still use private properties (via a class extension) and expose the necessary ones as class methods, too. Exposing global state this way is a bit more work, but that extra work is kind of the language’s natural immune response, discouraging you from doing it anyway.
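For completeness, here’s one sketch of what the hidden implementation might look like, assuming the common dispatch_once idiom for thread safety (privatePropertyCount is the hypothetical property from the snippet above):

```objc
// Thing.m — nothing here is public; only the class methods
// declared in Thing.h are exposed to clients.

// Class extension holding the private state.
@interface Thing ()
@property (nonatomic) NSUInteger privatePropertyCount;
@end

@implementation Thing

// The shared instance stays an implementation detail.
+ (instancetype)sharedThing {
    static Thing *sharedThing = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedThing = [[self alloc] init];
    });
    return sharedThing;
}

+ (void)doYourStuff {
    // Class methods forward to the hidden instance.
    [self sharedThing].privatePropertyCount += 1;
}

+ (void)setGlobalToggleEnabled:(BOOL)enabled {
    // Forward to the hidden instance as needed.
}

@end
```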

Sometimes singletons are a necessary evil, but that doesn’t mean they necessarily have to be unpleasant. Hiding away the implementation detail of a “shared instance” frees other programmers from having to know about the internals of your class, and it prevents them from doing repetitive, unnecessary typing.

Let’s class up our singletons.

Engineers See a Path Out of Green Card Limbo. It seems to me that if a foreign student trains at a US school, it would make sense for the US to allow that student to work freely in the country, instead of incentivizing them to leave for another one.

Followup on “Use Class Methods for Cocoa Singletons”

Yesterday I published an article asking Cocoa developers to rethink a common “Singleton” pattern and to improve it for our sanity:

I recommend hiding the sharedWhatever away from clients of your API. You can still just as easily have a shared, static instance of your class, but there’s no need for that to be public. Instead, give your class’s consumers class methods to work with.

I received three kinds of feedback for this article, the first kind being agreement, and that’s all there is to say about that.

The second theme was “It’s not the convention in Cocoa”. I think the reason for this is that, most of the time, developers are confusing it with a similar but different pattern used in Apple’s frameworks. The most common example is NSUserDefaults:

[[NSUserDefaults standardUserDefaults] setBool:YES forKey:SOLUsesStringConstants];

This looks a lot like a singleton but it isn’t. It’s just a way to access the standardUserDefaults, a pre-made object which your app will likely want to interact with. But it in no way implies or means you can’t create your own. The same pattern applies for other classes like NSNotificationCenter and NSFileManager to name but a few.
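The distinction is easy to see with NSFileManager, for instance: the class hands you a convenient pre-made object, but nothing stops you from creating your own (the delegate example here is mine, just to show a reason you might want a separate instance):

```objc
// The pre-made, shared instance Apple provides:
NSFileManager *shared = [NSFileManager defaultManager];

// ...but it's not a true singleton; you're free to create your own,
// e.g. to set a delegate without affecting the shared instance.
NSFileManager *mine = [[NSFileManager alloc] init];
mine.delegate = self;
```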

The third bit of feedback is where I’m a bit foggy, and that’s about the testability of hiding the shared object. I don’t do unit testing very often, but when I do, I haven’t run into any issues. From a fundamental point of view, I don’t understand why hiding the shared object should make testing any more difficult (I’m not being coy or shitty, I legitimately just don’t know). As far as I can tell, you’ll still be testing the public interface of your class, and that should be enough. But if I’m missing something (and this is entirely likely) then I’d love to know about it. Write about it on your website or email me.

“These eyes cry every night for you”

Or: Jason Brennan gets sentimental about discovering his Canadian pride only after becoming an expat.

After spending the first twenty-four-and-a-half years of my life living obliviously as a Canadian, in January 2013 I moved to the United States to live with my girlfriend in New York City. My time here has been nothing short of fantastic, but it wasn’t until I got here that I realized what my home country meant to me.

This is not the simple story of “you don’t know what you’ve got till it’s gone”, although there are some aspects of that. But I think the real crux of it is the culture of my home country was so subliminal to me, being immersed in it, that I didn’t even realize it existed until it was missing.

Growing up in the Maritimes I was submerged in — drenched by — CanCon, the CRTC (Canada’s version of the FCC, essentially) mandate to play at least 30 per cent Canadian content on the airwaves. This meant Canadian artists were given a government-backed promotion on the radio and television, to give them a fighting chance against the seemingly limitless American music industry. For someone developing a musical taste in the early 2000s, this meant hearing lots of Rush, The Guess Who, and The Tragically Hip (the latter I specifically loathed). But what I didn’t realise was, mandated or not, these artists (and many others, like 54-40, I Mother Earth, Our Lady Peace, Barenaked Ladies, hell, even Nickelback) were indeed infused into Canadian culture, a culture in which I was unknowingly a member. To me, they were just overplayed bands, constant requests on 105.3 FM (“The Fox”)’s “Drive At Five” request line, repetitive and unoriginal staples and traffic-jam anthems.

Almost always, the songs were written about subjects I could barely relate to, whether they were from the vast Canadian Prairies I’ve never seen, or even the coastal songs of choppy Atlantic waters. From the cushy every-town valley that is Fredericton, New Brunswick, I found little to relate to with the rest of Canada’s geography or the songs about her. What I never realised, however, was there was something subtle amidst. There was an effect I could not smell or taste or hear, and that was although I could never relate precisely to any of these stories, I could relate to them as a Canadian on some level. I could relate on the level of being aware of and surrounded by diversity of all kinds. Diversity of geography and of heritage and of language and of politic. These are some of the truisms of Canadian culture, and they were utterly invisible to me living in Canada.

Since leaving Canada, however, these things became almost instantly and painfully apparent to me. Even though I could never relate to the Prairies, they were at least a part of my culture, even if the culture of New Brunswick was “we don’t get the Prairies”. And now, I’m here and that’s sorely lacking. There’s a hole where a misunderstanding of my Canadian culture used to be. Now there’s nothing and I’m choosing to identify that as pride.

You can pry my u’s from my cold dead fingers. YYZ is a second Canadian anthem, and it’s pronounced zed, not zee, zed. I pretend to understand Marshall McLuhan, the Tragically Hip’s lyrics, and Canada’s foreign policy. My name is Jason Brennan, and I Am Canadian.

Your First iOS App Book. If you’re following WWDC and wishing to build your own apps, check out my friend Ash’s new book. He put a ton of work into it, and it’s well done.

Feeling Democratic. Ryan McCuaig on what’s actually the big deal about the new iOS UI (hint: it’s not the icons):

The frameworks for providing parallax effects based on the gyroscope and adding physics to enhance the illusion that real things are being manipulated are incredible. Motion effects and dynamics are now very easy to apply and play with, which democratizes them and makes it possible for the less technically-inclined among us to participate in building up the relatively uncharted design language around them.

The closest comparison I can think of is the effect the LaserWriter had on print design. I expect the same period of taking things way too far and backing off. But the LaserWriter completely transformed print design. I expect the same here.

15 Pop-Cultural Abysses From Which There Is No Escape. Kevin Fanning:

“What is happening?” he said, staring blankly at the screen. “I don’t get it. What is this?” There was no one in the apartment to answer him. “These episodes feel so different! Why isn’t this more like what I remember? Why aren’t I laughing?”

The cop wanted the show to be as funny as he remembered. He wanted it to reveal itself more openly. He wanted it to make him laugh and not make him wonder what was going on. He slammed his Macbook Air shut and stomped around his apartment. He wanted to tweet about his frustration, maybe see if other people shared his feelings, but he didn’t want to be accused of spoiling season four for other people who hadn’t begun watching it yet.

“I’m so angry!” he said. He stopped and stood very still in the middle of his apartment. “Ugh! So mad!” he said. “I feel like I could …” his mind scanned every file in its memory for the aptest word, the metaphor, the action that would properly convey the feelings he was experiencing. “I feel like I could slap a vagina,” he thought.

Well written satire making me cringe all kinds about myself.

A Review of Star Wars Episode I. DirkH:

The original trilogy gave us a window to one of the most interesting universes ever created. It is a fantasy, filled with amazing aliens and wonderful characters. The Phantom Menace was the film that could expand on this universe. We could have gotten more fantastical worlds, more foundation to the mythos. What we instead got was Tatooine……again. It could have traveled the entire universe but it instead gave us a planet we already know. This is but a tiny example of the unbelievable unimaginative feel this film has. This is most present in its plot. We start out this film with a dispute about tax regulations. Really? What a hook! I’m riveted!

The Star Wars Saga: Machete Order. An alternative way to watch the Star Wars series:

How can you ensure that a viewing keeps the Vader reveal a surprise, while introducing young Anakin before the end of Return of the Jedi?

Simple, watch them in this order: IV, V, I, II, III, VI.

George Lucas believes that Star Wars is the story of Anakin Skywalker, but it is not. The prequels, which establish his character, are so poor at being character-driven that, if the series is about Anakin, the entire series is a failure. Anakin is not a relatable character, Luke is.

This alternative order (which a commenter has pointed out is called Ernst Rister order) inserts the prequel trilogy into the middle, allowing the series to end on the sensible ending point (the destruction of the Empire) while still beginning with Luke’s journey.

Effectively, this order keeps the story Luke’s tale. Just when Luke is left with the burning question “how did my father become Darth Vader?” we take an extended flashback to explain exactly how. Once we understand how his father turned to the dark side, we go back to the main storyline and see how Luke is able to rescue him from it and salvage the good in him.

Also, like the story itself, there’s a twist:

Next time you want to introduce someone to Star Wars for the first time, watch the films with them in this order: IV, V, II, III, VI

Notice something? Yeah, Episode I is gone. While I don’t think “Hayden as Anakin’s ghost at the end of ROTJ” is a real problem, Rod Hilton makes a great argument for this order of viewing. It’s at least worth reading through his argument, if you’re a Star Wars nerd yourself.

(via Ash Furrow (IRL))

Planet Zoo (2010). Anthony Doerr, writing about what we all collectively do to the planet, and what we all collectively don’t do to help it:

In most American feedlots, beef cattle live their lives standing in or near their own manure. E. coli O157:H7—often found in cow feces—infects about 70,000 Americans a year and kills about 52. Undercooked or raw hamburger has been implicated in many of the documented outbreaks.

What has been our solution? Take the cows out of their own shit? Not quite. Instead we’ve decided to ramp up the antibiotics and treat ground beef with ammonia-drenched filler. We love technological fixes that allow us to preserve our existing systems. Professional football players are getting too many concussions. What’s our solution? Lobby for better helmets. Cheap calories are producing heart disease in too many Americans. What’s our solution? Give people anti-cholesterol statins that may be linked to anxiety and depression.

Look, I wouldn’t trade the 21st century for any other. We have toilet paper and vitamin-fortified milk and a measles vaccine. We can buy avocados in Fairbanks in January. But sometimes, particularly in the United States, we tend to put too much faith into the transformative powers of technology. Is progress really a curve that sweeps perpetually, unfailingly higher? Wasn’t toy-making or winemaking or milk-making or cheese-making or cement-making sometimes performed with more skill 300 or 700 or 1,900 years ago? I think of a tour guide I once overheard in the Roman Forum. She pointed with the tip of a folded umbrella at an excavation and said, “Notice how the masonry gets better the earlier we go.”


There’s mercury on our mountaintops and antidepressants in our groundwater. Earthworms in American farm fields have been found to have caffeine, household disinfectant, and Prozac in them. Scientists have found antibiotic-resistant genes in 14 percent of the E. coli in the Great Lakes. Maybe even more astounding, they’ve found antibiotic-resistant E. coli in French Guiana, in the intestines of Wayampi Indians—people who have never taken antibiotics.

With every year that passes, Earth becomes a little more like a gorgeous, huge, and mismanaged zoo. Is it really relevant anymore to argue that one thing is natural while another thing is not?

Remember that whole bit in Slaughterhouse Five about the zoo?

Books for Brennan in Brooklyn

I’ve been running Speed of Light since early 2010, and it continues to be a more rewarding experience with every passing day. I’ve used it as a place to explore my thoughts and expand my writing abilities, and I’ve made some friends in the process. Though my readership is modest, it is also thoughtful (and handsome).

There seems to be a trend where someone such as me, with a website such as mine, reaches a point in their writing or readership when they decide to start making a profit. This is not a bad thing, and many writers with large readerships make this decision to go at least semi-pro with their writing.

I don’t think this is the right path for me, first and foremost because I wouldn’t know where to begin without making it shitty and ensuring failure. I just don’t have enough readers to attract any kind of moderate money from this website, nor do I want to resort to tactics to try and coerce such a readership. My articles may not make the rounds on Twitter or make it to the top (or even bottom) of Hacker News, and I may not get errant clicks because of the inflammatory or salacious headlines I choose not to write, but that’s something I’m damn proud of and I don’t intend on changing.

So instead of trying to convince myself of a goal I really don’t want, and then inflict upon my readers tactics to achieve that goal, I want to try something different.

I’m not taking this website full time, and I’m not even taking it semi-pro. I’m not adding advertising or sponsorships and I’m not adding a membership. In fact, you could have skipped this altogether and not seen any change, and that’s OK with me.

Enter the Tip Jar

What I’m doing is basically putting out a tip jar, nothing more and nothing less. It’s a way for people who like what I already write to encourage me to write more in the same vein. Here’s how it works:

  1. If you like what I write so much that you want me to write more of it, you can go through my Amazon wishlist and buy me something from it.
  2. If you don’t want to, then you don’t. Nothing changes.

I’m not trying to sway anyone to do this; I’m just trying this as an experiment to let you do it, if you’d like. The items on my wishlist tend to be about the topics I like to write about, so it’s a way for you, the reader, to encourage (or thank) me to write more like I do. Or maybe to expand my horizons. Or maybe you think something on the list is particularly awesome (the order of the items on the list doesn’t indicate preference; I had to transfer them all over recently from an older Amazon account).

So there it is. My wishlist: it’s a way for you to give me a tip if you so choose, or a way for you to just look at my interests if not.

Here’s to many more years of Speed of Light!

Self: The Movie. Here’s a twenty minute video from 1995 recorded by Sun Microsystems about the Self programming language. I’d heard of the language before, but didn’t really know much about it until today.

It’s a very interesting take on a programming environment, and it reminds me a bit of Smalltalk, except taken to another level. A great introduction to thinking of programming environments in different ways.

The New Mac Pro. Guy English:

I keep finding myself thinking about this new Mac Pro. It’s not that I’m lusting after speed of the memory, SSD, CPU, or GPUs for what I do. The more I think about this new Mac Pro the more I find myself wanting to write software for it. To me it’s become the most interesting piece of new hardware since the original iPad. Well, maybe Retina in the iPhone 4 but that didn’t present an entire new class of problems to think about. […]

I keep finding myself thinking up applications for all that compute power and dreaming up what kind of software I could write to take advantage of it.

I think these are the kinds of thoughts developers should be having more often. When a new device comes out, instead of thinking of how the device can improve what we already do, instead try to think of what altogether new sorts of software this device enables.

The difference between 2 images per second and 20 images per second might only be one order of magnitude, but the experience is wholly different because now humans perceive it as motion.

What’s the software equivalent of motion with these new Mac Pros?

Richard Hamming’s “You and Your Research”. From the introduction:

At a seminar in the Bell Communications Research Colloquia Series, Dr. Richard W. Hamming, a Professor at the Naval Postgraduate School in Monterey, California and a retired Bell Labs scientist, gave a very interesting and stimulating talk, ‘You and Your Research’ to an overflow audience of some 200 Bellcore staff members and visitors at the Morris Research and Engineering Center on March 7, 1986. This talk centered on Hamming’s observations and research on the question “Why do so few scientists make significant contributions and so many are forgotten in the long run?”

From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of the individual scientists, their abilities, traits, working habits, attitudes, and philosophy.

I recently read the linked transcript of this talk and thought Hamming gave some good insight into his process, especially the bits about approaching problems from new directions to make them surmountable. Here are some of my favourite bits, but I really encourage you to read through the whole thing:

What Bode was saying was this: “Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.


There’s another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you’ll never notice the flaws; if you doubt too much you won’t get started. It requires a lovely balance.

I think this one is really super important, given the amount of short bursts of time where we distract ourselves with phones and such:

Everybody who has studied creativity is driven finally to saying, “creativity comes out of your subconscious.” Somehow, suddenly, there it is. It just appears. Well, we know very little about the subconscious; but one thing you are pretty well aware of is that your dreams also come out of your subconscious. And you’re aware your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there’s the answer. For those who don’t get committed to their current problem, the subconscious goofs off on other things and doesn’t produce the big result.

On computers specifically:

“How will computers change science?” For example, I came up with the observation at that time that nine out of ten experiments were done in the lab and one in ten on the computer. I made a remark to the vice presidents one time, that it would be reversed, i.e. nine out of ten experiments would be done on the computer and one in ten in the lab. They knew I was a crazy mathematician and had no sense of reality. I knew they were wrong and they’ve been proved wrong while I have been proved right. They built laboratories when they didn’t need them. I saw that computers were transforming science because I spent a lot of time asking “What will be the impact of computers on science and how can I change it?” I asked myself, “How is it going to change Bell Labs?” I remarked one time, in the same address, that more than one-half of the people at Bell Labs will be interacting closely with computing machines before I leave. Well, you all have terminals now. I thought hard about where was my field going, where were the opportunities, and what were the important things to do. Let me go there so there is a chance I can do important things.

On doing great work:

You should do your job in such a fashion that others can build on top of it, so they will indeed say, “Yes, I’ve stood on so and so’s shoulders and I saw further.” The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.

And finally:

Let me tell you what infinite knowledge is. Since from the time of Newton to now, we have come close to doubling knowledge every 17 years, more or less. And we cope with that, essentially, by specialization. In the next 340 years at that rate, there will be 20 doublings, i.e. a million, and there will be a million fields of specialty for every one field now. It isn’t going to happen. The present growth of knowledge will choke itself off until we get different tools. I believe that books which try to digest, coordinate, get rid of the duplication, get rid of the less fruitful methods and present the underlying ideas clearly of what we know now, will be the things the future generations will value.

Of course it’s my bias, but I see this as being solved in the medium of the computer. I don’t know how, but if a computer isn’t going to knock down this mental wall, it’s at least going to be giving us cracks to wedge in on.

Advanced Alien Civilization Discovers Uninhabitable Planet. According to scientists from the advanced alien civilization, despite possessing liquid water and a position just the right distance from its sun, the bluish-green terrestrial planet they have named RP-26 cannot sustain life due to its eroding landmasses, rapidly thinning atmosphere, and increasingly harsh climate.

“Theoretically, this place ought to be perfect,” leading Terxus astrobiologist Dr. Srin Xanarth said of the reportedly blighted planet located at the edge of a spiral arm in the Milky Way galaxy. “When our long-range satellites first picked it up, we honestly thought we’d hit the jackpot. We just assumed it would be a lush, green world filled with abundant natural resources. But unfortunately, its damaged biosphere makes it wholly unsuitable for living creatures of any kind.”

“It’s basically a dead planet,” she added. “We give it another 200 years, tops.”

Let’s have a good laugh while the air is still good.

We Should All Have Something To Hide. Moxie Marlinspike:

If the federal government can’t even count how many laws there are, what chance does an individual have of being certain that they are not acting in violation of one of them? […]

Over the past year, there have been a number of headline-grabbing legal changes in the US, such as the legalization of marijuana in CO and WA, as well as the legalization of same-sex marriage in a growing number of US states.

As a majority of people in these states apparently favor these changes, advocates for the US democratic process cite these legal victories as examples of how the system can provide real freedoms to those who engage with it through lawful means. And it’s true, the bills did pass.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

From Here You Can See Everything. James A Pearson:

When I left Uganda this winter I had finally broken the 300-page barrier in David Foster Wallace’s gargantuan novel, Infinite Jest. I’ve started it three or four times in the past and aborted each time for attentional reasons. But 300 pages felt like enough momentum, finally, to finish. Then I hit my first American airport, with its 4G and free wi-fi. All at once, my gadgets came alive: pinging and alerting and vibrating excitedly. And even better, all seven seasons of The West Wing had providentially appeared on Netflix Instant. I’ve only finished 100 more pages in the two months since.

I always binge on media when I’m in America. But this time it feels different. Media feels encroaching, circling, kind of predatory. It feels like it’s bingeing back.

The basic currency of consumer media companies—Netflix, Hulu, YouTube, NBC, Fox News, Facebook, Pinterest, etc.—is hours of attention, our attention. They want our eyeballs focused on their content as often as possible and for as many hours as possible, mostly to sell bits of those hours to advertisers or to pitch our enjoyment to investors. And they’re getting better at it, this catch-the-eyeball game.

Consider Netflix. These days, when one episode of The West Wing ends, with its irresistible moralistic tingle, I don’t even have to click a button to watch the next one. The freshly rolling credits migrate to the top-left corner of the browser tab, and below to the right a box with a new episode appears, queued up and just itching to be watched. Fifteen seconds later the new episode starts playing, before the credits on the current episode even finish. They rolled out this handy feature—they call it Post-Play—last August. Now all I have to do is nothing and moralistic tingle keeps coming.

All the media companies are missing is “Achievements” and we’ll be in full-blown dystopia.

Linguists identify 15,000-year-old ‘ultraconserved words’. David Brown:

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.

That’s because all of the nouns, verbs, adjectives and adverbs in the four sentences are words that have descended largely unchanged from a language that died out as the glaciers retreated at the end of the last Ice Age. Those few words mean the same thing, and sound almost the same, as they did then.

Your Body Does Not Want to Be an Interface. Jason Brush:

The first real-world demo of Google Glass’s user interface made me laugh out loud. Forget the tiny touchpad on your temples you’ll be fussing with, or the constant “OK Glass” utterances-to-nobody: the supposedly subtle “gestural” interaction they came up with–snapping your chin upwards to activate the glasses, in a kind of twitchy, tech-augmented version of the “bro nod”–made the guy look like he was operating his own body like a crude marionette. The most “intuitive” thing we know how to do–move our own bodies–reduced to an awkward, device-mediated pantomime: this is “getting technology out of the way”? […]

The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience. I’m not a cyborg (yet) so I can’t be sure, but I suspect the effect is more the opposite: alienating you from the direct bodily experiences you already have by turning them into technological interfaces to be manipulated.

I feel the same way about interfaces like Kinect or the Oculus Rift. Waving my arms around in the air with no notion of feedback is terribly unintuitive. Nothing in the real world works that way, and it completely ignores all the virtues of the human arm and hand. Minority Report and Google Glass might look cool, but they’re farcical at best, and counter-productive to making computers better to use at worst.

Meet The New Boss. My friend Ryan McCuaig goes solo:

The provisional motto is “Computing for making better buildings.”

Like most fields right now, building design and construction has never before had so much data available, and such an uneven distribution of skills and tools that might let us make sense of it and free our thinking for higher-order problems. Even if it were tenable now (i.e., it isn’t), it is only becoming less so to waste brainpower on tedium better handled by these infernal yet stupendously amazing machines.

Ryan knows what’s up. If you need someone for a project involving buildings or building things, Ryan is your man.

Bieber’s Boards and Cords. If anyone in the Fredericton area is seeking woodwork, firewood, milled wood or tree removal, Donny Bieber is your man. Be one of the first to see his newly-launched website, designed by me.

My favourite quote is no word of a lie:


A More Comprehensive Google Reader Archive. So it turns out the “Google Takeout” service for Reader doesn’t include everything, but this GitHub project appears to be comprehensive. If you’re really serious about your Reader data, give this a run before July 1, 2013.

Things You Didn’t Know About The Battle of Gettysburg. Want to impress your American friends? Allow my illustrious girlfriend, Kate Pais, to enlighten you with some facts about Gettysburg:

The 150th anniversary of the Battle of Gettysburg is upon us. The Civil War and Gettysburg remain one of the most integral and well-documented parts of American history. In hopes of honoring this extra special anniversary, here are ten little known anecdotes about the Battle of Gettysburg, found in the timeless and timely resource The Gettysburg Nobody Knows, an essay collection edited by Gabor S. Boritt.

Douglas Engelbart, visionary who invented the computer mouse, dies at 88. Chris Welch, reporting for The Verge:

Douglas Engelbart, best known as the inventor of the computer mouse, has died at age 88. During his lifetime, Engelbart made numerous groundbreaking contributions to the computing industry, paving the way for videoconferencing, hyperlinks, text editing, and other technologies we use daily.

Engelbart invented the mouse, the graphical user interface, video conferencing, and hyperlinking. Before 1968. And we still haven’t caught up to most of his advancements today.

Bret Victor’s Thoughts on Douglas Engelbart. Bret Victor calling out those who wrote about Engelbart’s passing exactly like I did. Bret’s rubbing our noses in it, and making us better because of it:

Engelbart had an intent, a goal, a mission. He stated it clearly and in depth. He intended to augment human intellect. He intended to boost collective intelligence and enable knowledge workers to think in powerful new ways, to collectively solve urgent global problems.

The problem with saying that Engelbart “invented hypertext”, or “invented video conferencing”, is that you are attempting to make sense of the past using references to the present. “Hypertext” is a word that has a particular meaning for us today. By saying that Engelbart invented hypertext, you ascribe that meaning to Engelbart’s work.

Almost any time you interpret the past as “the present, but cruder”, you end up missing the point. But in the case of Engelbart, you miss the point in spectacular fashion.

Our hypertext is not the same as Engelbart’s hypertext, because it does not serve the same purpose. Our video conferencing is not the same as Engelbart’s video conferencing, because it does not serve the same purpose. They may look similar superficially, but they have different meanings. They are homophones, if you will.

I dream of making points as lucidly as Bret does.

Assorted Engelbart-Related Material

If you’re fascinated with Douglas Engelbart, who died yesterday, here are some things I’ve found to be really great resources on what he set out to do: augment human intellect.

His 1962 paper is well worth the read because it lays down the foundations for his work.

Then of course there’s the retroactively named “Mother of All Demos” presentation. The Internet Archive seems to have the highest quality recordings of it. It’s 100 minutes long, but if you’re a software developer who cares about making computers better, consider it a responsibility.

In 2004, Engelbart gave a video interview with Robert X. Cringely, talking about how his ideas came to be, and more importantly what he set out to do. It paints Engelbart as a really good soul with an altruistic vision. And he just seems like the kindest person in the world, too.

Finally, here’s a video interview done by Howard Rheingold featuring not only Engelbart, but also Ted Nelson.

Here’s a bonus image of Doug looking like a badass.

Chaim Gingold’s “Toy Chest”. Sounds like a lot of fun:

Last year Lea Redmond and I co-designed a game/installation called Toy Chest for the SF Come Out and Play Festival. Originally we called the game Toy Fight, but found that this wasn’t putting people in an appropriately cooperative/improvisational frame of mind. The basic idea was to design a game (and installation for the exhibit) which would allow players to bring any toy they wanted to a playful contest.

The whimsical absurdity of Optimus Prime going head to head with My Little Pony motivated us, as did finding new ways to play with old toys, and meditating upon material culture. It was also a fun excuse to collaborate, since Chaim mostly makes screen based works, and Lea’s creations tend to be physical three dimensional things.

Denial and Error

Errors have become something of a bad thing, but they need not be that way. Conceptually, an error should be a minor mistake or misjudgement, a simple slip-up, usually nothing too serious.

But this is not the world errors live in, because they live in our world, and in our world, errors become something much more grave. Our world is the world of the human, and if you think about it, all errors really boil down to human error at some point. What should be treated as a common wrinkle to be casually flattened out is instead treated as a glaring issue, something alarming which someone needs to be alerted to. Yellow warning signs, boxes erupting from the screen to rub noses in errors, red squiggly underlines pointing out mistaken homophones and finger slips. It gets to the point where errors in the world of humans start to look an awful lot like getting a paper back from a particularly anal high school English teacher. Is it any wonder people are fearful of computers when all they do is evoke tremors of high school nightmares?

This depraved treatment of errors in software should come as no surprise to anyone familiar with current software development. Those who write software are forced to write it for an unforgiving computer, and they are tasked with the grueling edict to coerce every decision into a zero or a one, a yes or a no, a right or a wrong. Is it any wonder the software itself is a reflection of the computer it runs on?

Computer program writers are not the sole source of humanity’s maltreatment of errors, but they are among its most vicious perpetrators, possibly due to a hypersensitivity to the likelihood of errors. A software developer knows the likelihood of errors is high, and that an error is a commonplace and usually simple issue, and yet ironically so little seems to be done to actually fix the errors when they happen. Instead, the focus is on preventing the errors, which must be a fool’s errand, because as we all know, a sufficient number of errors happen nonetheless.

Attempting to prevent errors is natural, but the logic is specious. It seems natural because, from a very young age, we’re taught errors are bad. Errors are not treated as something to be corrected; instead it is the making of errors that gets corrected — we’re taught we shouldn’t make them in the first place, when what we really should be taught is how to learn from them when they happen. Parents tell their children not to cry over spilled milk, yet making a mess is a cause for aggravation. A teacher tells students everyone is smart in their own way, and yet those who aren’t smart at passing contrived tests feel bad about their errors. Preventing errors seems natural to us because we’ve had the fear of them driven into us, not because there’s actually anything inherently bad about them.

As Ed Catmull of Pixar said:

The notion that you’re trying to control the process and prevent error screws things up. We all know the saying it’s better to ask for forgiveness than permission. And everyone knows that, but I think there is a corollary: if everyone is trying to prevent error, it screws things up. It’s better to fix problems than to prevent them. And the natural tendency for managers is to try and prevent error and overplan things.

Software developers are notorious time wasters when it comes to attempting to prevent errors. They’ll spend weeks trying to make the software perfect, provably perfect, all in the name of avoiding errors. They’ll throw and catch exceptions in a weak attempt at playing keepaway between an error and the user, but inevitably all balls get dropped. They’ll craft programming interfaces so flexible, the framework can reach around and scratch the back of its own hand (this is called recursion). Abstract superclasses, class factories, lightweight objects (whatever the hell those are), all in the name of some kind of misplaced mathematical purity never reached in the admittedly begrimed world of software development. These dances of the fingers ultimately come down to attempts at preventing errors in the system itself, but they too are folly, because in the future, one of two things will happen:

  1. The system will change, but the developers couldn’t have predicted in which ways, so all the preparations for preventing this implicit error were incorrect, and need to be fixed anyway.
  2. The system will not change, and so all the preparations were in vain.

At first, it seems software developers treat software as though it were still grooves and dots punched into pieces of paper, shipped off to be fed into the mouth of a husky mainframe in another country. Immutable, unmalleable and unchangeable program code, doomed to prevent only the errors its developers could predict. But at least punch cards are flexible. Instead, it seems more like the program code has been chiseled into stone. That’s it. You prevent some errors and punt the rest of them off to the user, to make them feel bad about it.

We deny these errors. We deny them and pass them off to different systems, computer or person. We treat errors as something shameful to deal with and something shameful to have caused. But errors are no big deal. Errors should be expected and be inherent in the design. From a debugging level, errors should be expected and presented to all levels of a development team so that they can always track them down quickly. From an organizational level, errors should be seen as a chance to infer new information about the organization’s strengths and weaknesses. From a user perspective, errors are a chance to explore something off the beaten path.

Errors allow for spontaneity and for exploration. Errors allow for that angular square you went to school with to loosen up and meet some new curves. Errors in DNA created you and me. How can software change if we embrace, instead of deny, errors in our systems?

Pull Requests Volume 1: Writing a Great Pull Request


When I gave my talk at NSNorth 2013, An Educated Guess (PDF) about building better software, one of the points I stressed was about understanding people and working together with them in better ways. This means knowing where you are strong and weak, and where the people you work with are strong and weak, and acknowledging the collective goals you share: to make better software. This means knowing how to deal with other people in positive ways, giving criticism or making suggestions for the betterment of the product, not for your own ego.

I don’t think I expressed my points very clearly in the talk, so I’d like to take some time now and provide something of a more concrete example: dealing with GitHub’s Pull Request feature. This will be a multi-part series where I describe ways to use the feature in better ways, with the end result being an improved product, and also an improved team.

This particular example deals specifically with iOS applications using GitHub’s Pull Requests (and assumes you’re already familiar with the general branch/merge model), but I hope you’ll see this could just as easily apply to other software platforms, and other forms of version control. This guide stems from my experiences at Shopify and at the New York Times and is more focused on using pull requests within a team, but most of this still applies if you’re making open source pull requests to people not on your team.

Writing Great Pull Requests

Let’s say you’ve just completed work on some nasty bug (I’ll be using a bugfix as an example, but I’ll note where you might like to do things differently for a feature branch), you’ve got it fixed on your branch and now you’re ready to get the branch merged into your team’s main branch. Let’s start from the top of what you should do to make a great pull request.

Check the diff

The first thing you’ll want to do before you even make your pull request is review the differences between your branch and the target branch (usually master or develop). You can do this with lots of different tools, but GitHub has this built in as part of the “Branches” page, where you can compare to the default branch. Even better, in the “New Repository View” they’re beginning to roll out, reviewing your changes is now part of the Pull Request creation process in the first place.

Here’s where you’re going to look for last minute issues before you present your work to a reviewer. I’ll be writing about exactly what you should be looking for in Volume 2 later on, but the basic gist is: when looking at the diff, put yourself in the shoes of the reviewer, and try to spot issues they might find, before the review even starts. This is sort of like proofreading your essay or checking your answers before handing in a test. If you could include animated gifs on your tests.

Clean up any issues you spot here and push up those changes too (don’t worry, all changes on the branch will get added to your pull request no matter when you push them up).
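If you prefer proofreading from the command line, here’s a minimal sketch of the same review. The branch name “master” is an assumption; substitute “develop” or whatever your team merges into:

```shell
# A minimal sketch, assuming your bugfix is on the current branch and the
# team's target branch is called "master".
# The three-dot form diffs against the point where your branch diverged,
# which is what the reviewer will see in the pull request.
git diff --stat master...HEAD   # quick summary: which files changed, and how much
git diff master...HEAD          # the full diff, for a line-by-line proofread
```

If your local `master` might be behind, run `git fetch origin` first and diff against `origin/master...HEAD` instead.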

Use a descriptive title

The title is the first thing your reviewer is going to see, so do your best to make it as descriptive and as succinct as possible. Don’t be terse and don’t be verbose, but give it a good, memorable title. Remember, depending on how the team works, the reviewer might have a lot on their mind, so making your pull request easy to tell apart from the others at a glance will make it easier for them to review.

Some teams assign ID numbers to all features and bugfixes from issue tracker “stories”, and if your branch has one associated with it, it’s also a good idea to include that somewhere in the title, too. This allows software integration between GitHub and your team’s issue tracker so the two can be linked together. Also, including the ID number in your pull request title helps eliminate the chance of ambiguity. If the reviewer really isn’t sure which issue the branch relates to, they can always use the ID number to verify.

Finally, try to be explicit with the verbs you use in the title. If your branch fixes a bug, use the word “fixes” or “fixed” somewhere in the title. If it implements a feature, use the word “implements” or “adds”, etc. This tells the reviewer at a glance what kind of pull request they’re dealing with without even having to look inside. As a bonus, when using an ID number as discussed above, some issue trackers will automatically pick up on key words in your title. Saying “Fixes bug #1234” can cause integrated issue trackers to automatically close the appropriate bug in their system. Let the computer work for you!

The Description

The last and most important thing you need to do to write a great pull request is to provide a good description. For me, this involves three things (although I’m open to hearing more/others), loosely borrowed from a Lightning Talk given by John Duff at Shopify. In a good description, you need to tell the reviewer:

  1. What the pull request does.
  2. How to test it.
  3. Any notes or caveats.
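Put together, a description covering those three parts might look something like this sketch. The bug, the behaviour, and the issue number are all invented for illustration:

```
Fixes crash when rotating the video player (#1234)

What it does:
On older devices, rotating during playback raised an array-out-of-bounds
exception, because the player assumed it always had a landscape layout
available. This branch guards the layout lookup and falls back to the
portrait layout instead of crashing.

How to test:
1. Simplest case: play any video and rotate the device once.
2. Complicated case: rotate repeatedly while scrubbing the timeline.
3. Edge case: rotate while the video is still buffering; playback should
   continue in the new orientation rather than restart.

Notes and caveats:
This doesn't touch the iPad layouts, which have their own set of issues.
```

The exact headings don’t matter; what matters is that all three parts are answered before the reviewer has to ask.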

1. What it does

This is the most essential part of the pull request: describing what the changes do. This means providing a little background on the feature or bugfix. It also means explaining, in general terms, how the implementation works (e.g. “We discovered on older devices, we’d get an array-out-of-bounds exception” or “Adds a playlist mode to the video player by switching to AVQueuePlayer for multimedia”). The reviewer should still be able to tell what’s going on from the code, but it’s important to provide an overview of what the code does to support and explain it.

The real benefit of doing this is it gives your team a chance to learn something. It gives anyone on your team who’s reviewing the code a chance to learn about the issue you faced, how you figured it out, and how you implemented the feature or bugfix. Yes, all of that is in the code itself, but here you’ve just provided a natural language paragraph explaining it. You’ve now created a little artifact the team can refer to.

As an added bonus, when writing it up yourself, you’re also taking the opportunity to review your own assumptions about the branch, and this might reveal new things to you. You might realize you’ve fallen short on your implementation and you’ll be able to go back and fix it before anyone has even started reviewing it.

You’re the project’s mini-expert on this mini-topic; use this as a chance to let the whole team improve from it.

2. How to test.

Some development teams have their own dedicated QA teams, in which case providing testing steps isn’t usually as essential, because the QA team will have their own test plan. If your team does its own QA (as we did at Shopify) then it’s your responsibility to provide steps to test this branch. That includes:

  1. The simplest possible way to see a working implementation of whatever you’re merging in.
  2. The most complicated possible way to see a working implementation.
  3. Tricky edge cases, especially ones whose intended behaviour differs from the main use case.

If your branch fixes something visual in the application, it might be a good idea to provide some screenshots highlighting the changes. If your branch involves a specific set of data to work with, provide that too. Do what it takes to make it easier for the reviewer.

Of course, the reviewer should be diligent about testing this on their own anyway (in steps I’ll describe in Volume 2), but when you provide steps yourself, you’re again reviewing the work you’ve done and possibly recognizing things you’ve missed, cases you’ve overlooked that might still need work once a reviewer checks them over. This is another chance to remind yourself of strange test cases you might have not thought about.

3. Notes and Caveats

The last section doesn’t always need to be included, but it’s a good catch-all place to put extra details about the branch for those who are curious. It’s also a great place to explain remaining issues or todos with the pull request, or things this branch doesn’t solve yet.

If you’ve made larger changes to the project, this might be a good place to list some of the implications of this change, or the assumptions you’ve made when making the changes. This again gives the reviewer more clues as to what your thinking was, and how to better approach the review.

Assign a Reviewer

If it makes sense for your team, assign someone to review the pull request. When choosing a reviewer, try to find someone who should see the request. Who should see it? It depends. If you’re fixing an issue in a particularly tricky part of the application, try to get the person who knows that area best to review it. They’ll be able to provide a more critical eye and find edge cases you might not have thought of. If you’re changing something related to style (be it code style or visual style), assign the appropriate nerd on your team. If you’ve got an issue that would reveal and explain the internals of a part to someone who might not know them as well (like a newcomer on the team), assigning them the pull request will give them a good chance to learn.

This doesn’t necessarily mean assign it to the person who will give you the least strife, because sometimes strife is exactly what you need to improve yourself and the code (if the reviewer is giving you strife just for their own ego, that’s another story which I’ll discuss in Volume 2).

Remember, if the reviewer finds issues with your code, it’s not personal, it’s meant to make the project better.

Not always applicable things

Some of the suggestions I’ve made will seem like overkill for some pull requests, and in a way that’s a good thing. They’ll make less sense for smaller pull requests, where the changes are more granular or uninvolved — and these are really the best kinds of pull requests, because they change only a small number of things at a time and are easier to review. But sometimes larger changes just can’t be avoided, so that’s where these suggestions make the most sense.

Building Better Software by Building Software Better

These are tips and guidelines; suggestions for how to make it easier for the people you collaborate with. Once you think of pull requests as a form of communication between developers, you see them as an opportunity to collaborate in better ways. It becomes less about passing or failing some social or technical test, and more about improving both the team and the project. It’s a chance for all parties involved to learn something new, and do so in a faster way. It’s not about ego, it’s about doing better collective work.

In Volume 2 I’ll do the same from the other perspective: Giving Great Pull Request Reviews, to see how it looks from the other side.

In visualization, focus on what matters. Alberto Cairo on deciding what to do with the short time we have on this planet:

How many visualizations of flight paths, languages on Twitter, Facebook friend networks, and votes in the Eurovision song contest does the world need? I’d argue that not that many as we see nowadays. On the other hand, how many graphics about inequality, poverty, education, violence, war, political corruption, science, the economy, the environment, etc., are worth publishing? I’ll leave the answer to you, but you can guess what mine is.

I know of no research to back me up on this, but my guess is that visualization designers are, on average, nerdier and more technophilic than your average Jane and Joe. When we have the freedom to choose what topics to cover, we tend to lean toward issues most people don’t care much about, but that we consider fun and cool. Besides, we tend to focus in areas in which data are easily available, and arranged in a neat way —Internet and social media usage are the obvious examples, but there are many others.

I’m not sinless, by the way. Between January and May 2013, I oversaw a visualization project by a Spanish student, Esteve Boix, which I’ve described in detail in my website. Its topic was Buffy, the Vampire Slayer, Joss Whedon’s geeky TV show. A portion of my soul —the one that remains stuck in a Dungeons&Dragons and comic book-filled adolescence— was enthralled. The other side —the adult, emotionally hardened one— wondered if the energy spent in timing the appearances of characters in the show and other trivial minutiae could not have been better spent in more worthwhile endeavors.

Chapter One: Little and Big Things

It was a hot and smoggy sunny day in Brooklyn. Very hot. I mean it was like somebody had doused a planet with gasoline and then lit it on fire. That kind of hot. I stood in the middle of a sidewalk, beside one of the few lush parks in the city. I had stopped dead in my tracks because I just had to tweet something witty. I’m so witty.

So there I was just standing there, with my new (well, pretty new) black iPhone 5 held precariously in my hand. In my claw. The iPhone 5 was held precariously in my claw hand, because it’s kind of a little too big anyway, but I get an extra row of icons so that’s really nice. The glare from that fat old sun off my iPhone screen is almost unbearable (Unglarable? Perhaps I’ll draft that as a tweet for later). I’m holding the phone at around waist level, with my head tilted down. From behind, I know this pose looks quite a lot like a man using a urinal, but I figure since I’m standing like this in public nobody will probably care.

I’ve already made my tweet and I’m just standing there, feverishly pulling to refresh. Thank god for that gesture, I mean the iPhone 5’s screen is nice but if I had to reach all the way up to the top of that screen every time I wanted to make my Twitter feed refresh, my thumb would probably fall off. Not to mention the fact that the refresh buttons are almost always on the right side of the screen. It’s kind of discriminatory to lefties like me, but the pull-to-refresh gesture is an equalizer. It’s a real innovation, really.

I’m pulling and refreshing because I just know the tweet I’ve made will set some people off, and I’d really like to know what they have to say back to me. The people who follow me on Twitter are witty like me too. You have to be, because you’ve just got to be focused if you want to make an impact on Twitter. This tweet will probably get me so many Favs, too. I’ll check my email, because I know Twitter will email me when someone retweets me now too. Nothing yet.

“What’s up there doods?” I hear from behind, clearly aimed at me, because it’s one of the phrases I use when talking to my close childhood friends (we all like to poke fun at Metallica’s Lars Ulrich, who seems to have a good sense of humour). I don’t recognize the voice though, so I turn around.

Standing behind me is a young kid of probably twelve years old. He’s standing in my shadow, so as my pupils adjust to the change in lighting, I conveniently start to see more of his traits. He’s skinny. So skinny (is he sick? is he eating enough?). OK. Not that skinny. I was like that as a kid, too. He’s got messy brown hair, kind of curly, but really just messy. The wind hasn’t even been blowing in Brooklyn today because it’s too hot for even that, so his hair must just be messy. He’s wearing round glasses on his somewhat broken-out face, and a t-shirt with cartoon characters I don’t quite recognize (the writing on it looks Chinese, which I happen to not be able to read, but I have an app on my phone that’ll translate it for me). He’s got rather large Adidas on his feet. They look like worn out flippers because he clearly hasn’t grown into his feet just yet. They’re awesome sneakers, though.

Of course, all of these descriptions really happen as thoughts inside my head in less than 500 milliseconds and I have no idea how they actually work. I can’t conceive of my own brain.

“Uh, hey kid” I say with a half smile, because remember, I just turned around like a second ago, so it doesn’t seem like there was a weird gap or anything. “Not much I guess” I’m making conversation but I feel a little out of place. Grownups aren’t supposed to talk to strange kids, especially not near a park. “Do I know you from somewhere?”

“Yes you do. I’m you. You’re me” he says without beating around the bush. I find it kind of hard to believe, because if I were about to reveal that, I think I would have tried for a little more dramatic tension.


“I’m you. I’m a younger version of you. I travelled through time to come talk to you.” He did look kind of familiar, now that I think about it. I don’t really believe him, but I’d just re-watched “Back to the Future” a few nights ago, and so time travel was still on my mind. I thought I’d humour him. It’d make for a great story, if nothing else. People tell me I’m good at telling stories.

“Well little Jason,” I say to myself, sounding more patronizing than I’d intended, “I don’t remember travelling through time when I was your age. Shouldn’t I have remembered travelling through time and meeting myself in the future?” I know a thing or two about the implications of time travel.

“Probably, but you don’t remember because you haven’t done it yet. I didn’t come from the past, I came from the future,” little Jason said. OK, that doesn’t make any sense, I thought.

“OK, that doesn’t make any sense,” I said. “You’re what, twelve right?”

“Eleven,” he corrected.
“OK, so if you’re me and you’re eleven and I’m me and I’m twenty-five, how did you come from the future?”

“Things don’t always make sense to the past,” he said. “Some things seem unreasonable to earlier generations, but later we learn they’re wrong. And it’s hard to teach that to the past, but with time machines, it’s a little easier.”

“Sure, but I still don’t see how that’s possible, even if you had a time machine.” I was more curious than incredulous.

“OK, let’s take an example they teach in kindergarten. You’ve got two metal balls of the same size, one weighs four pounds, the other weighs two. You drop them both at the same time from the same height and see which hits the ground first. In old times, people used to think the heavier ball would fall faster, because it sorta makes sense. They couldn’t understand how both would fall at the same rate.”


“You could show them, but that really wouldn’t convince them. Believe me, I’ve tried. But here’s the trick. Here’s how you get them to understand something that seems impossible. And it doesn’t always work right away, and it’s not an easy trick, but here’s what you do. You don’t convince them of anything, but you instead get them to convince themselves that it’s true.”

“And how do you do that?”

“You take another two-pound ball, and you tie it together with the first two-pound ball. The tie weighs nothing, so now you have a new shape that weighs four pounds. It’s made of two two-pound shapes. How could it possibly fall any slower than the other four-pound shape?”

“Wow. Hmmm. That’s a neat way to look at it.” OK. He had me there.

“But I didn’t come here to talk about balls, Jason,” said little Jason.

“Well so to convince people of the past of these seemingly impossible things, you’ve got to get them to convince themselves. Sure, that makes sense. But what still doesn’t make sense is how you came from the future and yet you’re younger than me. You haven’t got me to convince myself of that yet.”

“That’s what I came to talk about,” he said with a grin. “I can’t do it yet. I can’t actually get you to convince yourself that I’m you and you’re me yet, because you need to invent it. The tools you need to reason in that way just don’t exist yet.”

My phone in my pocket buzzed twice. This probably meant I’d gotten an email about a retweet. Yes.

“You’re telling me you travelled through time to convince me to make a tool to convince myself that you in fact did travel through time?” To say the least, I was a little perplexed.

“Clear as mud?” His face told me he understood this clear as day. Meanwhile I understood this about as clear as the smoggy Brooklyn day.

“It seems like a pretty roundabout way to get things done.”

“Convincing yourself is just one implication of what you need to do. It’s a bonus, a result, but it’s not the goal you’re after. It’s not the goal we’re after.” he said to me. I said to me? “Don’t worry about the time travel details. Don’t worry about how I got here, or how old I am. That’s not important. What’s important is what you need to do. You can make things better.”

“Am I not doing that already?”

“You are, but you can be doing better. As generations go on, society as a whole learns more and more. They get smarter than those who came before them. They create new art, new forms and expressions, new tools to help them reason. These things extend our reach and let us think new thoughts we couldn’t possibly think before. They take the good and spread it around to everybody so everybody grows up in a better world.”

“This is pretty deep for an eleven-year-old. You sound like you know a lot.”

“Compared to the eleven-year-olds of today, I do. Because in the time I live in, people have more tools at their disposal; they can think in more powerful ways, because they have tools to help them imagine new things. We’re running on the same brains you have here, but we’ve got help from our inventions. I need you to help invent those.”

“How do you suppose I do that? I don’t think I’m as enlightened as you seem to be, kid,” I said.

“And that’s my whole point. You’re not enlightened now, but you’ve got to start. Make a tool, reason better, so you can make a better tool, and reason even better. And so on. That’s how it goes. You start slowly, for now you’ll have to make ‘software’, I guess. It’s the best tool you have in your time.”

“I already do that. I make software for a living, you know.” It’s my job and I’m quite proud of it, I thought to myself. “And besides, software is for networking and photos and news and videos anyway,” I said.

“You’ve got the right skills but you’re doing terribly limited things with them, you know. Your software pulls information from one computer and shows it on a smaller computer, and sometimes it sends it in the other direction. Who is doing higher reasoning with that?” That hurt. “It’s like you’re a really great drawer, but all you draw are stick people. That’s great and there’s a time for that, but there’s so much more to the world. You could discover some of the really big things my time is based around. You can’t even see it yet. At all. But you can get there.”

I felt like I was going to collapse. Maybe it was the heat. Maybe it was what little Jason had just told me. Maybe it was what little Jason had just convinced me of. My phone buzzed furiously in my pocket.

“Leave your phone, it’ll wait. You can’t explain this in a tweet. You can’t explain this on your website, although I’m sure you’ll try. You won’t be able to explain it with your software, just quite yet, but you can get closer,” he said.

“Do you ever get the feeling like there’s something you’re missing? Like something standing right in front of you, but it’s invisible? You can feel the wind, but you can’t see it and you don’t know what the air is, but you know it’s there.” I could feel the air.

Little Jason smiled. “Chase that feeling. Humans figured out what the wind was. We built microscopes to see things our eyes couldn’t. And you can make tools to help us think things our brains couldn’t.”

My phone stopped buzzing.

“I’m young and this is the future I want to inherit,” he said.

He said goodbye and hopped on his bicycle. His meager little legs pedalling with ease, he biked a lot faster than I could run.

Addition, Multiplication, Integration

How do we build software faster?

I’ve been grappling with this question for a while now, because I’m just full of ideas, new things I want to try, and I’m held back by the speed at which I write software. I look back through the history of software, or of any technological development, and I realize: this need not be so!

We can write software faster; we’ve been doing it all along. Things that used to take long and arduous stretches of time to write are done more quickly now, to an extent. But I want to do better, to go even faster, and I want to improve everything along the way. I don’t want my ideas to be limited by unnecessary bottlenecks.

Why the hurry?

With a fulltime job and a fulltime personal life, I give myself about one hour per day to work on my own personal projects. I start my mornings by coding for an hour or so, on something I really care about before I head off to work (it’s a great habit to be in, by the way, because it starts the day off on a really positive note, leaving me energized for the rest of the day). Being limited to one hour per day forces me to be focussed on my work, too. Although one hour per day isn’t a lot of time to devote to all the projects I want to get through (at last count, I’ve got about 10 in the backlog), it’s really a hard limit for me now. So if I want to write software faster, something else has to give, and that’s got to be from the software itself.

Addition

The most obvious way to work faster is to build on the work of others, and for software developers that means using components built by other developers. Open (or closed) source projects and objects created by other developers are a fast way to add new things to a project so that I don’t have to. Some libraries I can depend on and some I can’t, but I’ve been at this long enough to have developed a keen eye for telling the good from the bad.

According to most developers, we’ve already created solutions to allow for better sharing of code. We’ve got object-oriented systems, and we’ve got repositories full of them strewn across the web. GitHub creates a social network around them, and package managers let us add them to our projects on a whim. But I can’t help feeling these solutions all fall tremendously short, for reasons I’ll detail in a later article.

Suffice it to say, using components from other developers does improve how fast we can write software. It’s like a form of addition: work gets added to my project for free.

Multiplication

If adding components from other developers is like addition, it follows then that I can work even faster if I work with other developers. This is like multiplication, a collaboration where we collectively work harder because there are more of us doing the work.

I’d even consider something so superficially simple as bouncing ideas off another developer to be a form of collaboration, because it allows me to get outside of my own mind.

Derivation and Integration

The ways I’ve discussed so far are all helpful methods of building software faster, and I’ve been using them recently to great success. But I know there is an even better way, in addition to those already mentioned. This way sort of transcends all the others and affects my ability to write software at every level.

How do we really get faster at making software? Well, we have to eliminate the bottlenecks, and by far the biggest bottleneck for any software developer is thinking. Thinking is what takes up the vast majority of our time. I think there are two main kinds of thinking a programmer has to do, one good and one bad, and we need to create new ways to shift the balance in the good direction:

  1. Good thinking is thinking about the core problem trying to be solved. This means things like the overall problem, the algorithms needed to model a system, the intention of the user the developer is trying to meet, etc. These things are necessary to think about; they’re what people want to solve, but they are encumbered by…

  2. The bad kind of thinking is not about bad thoughts, just counterproductive ones. These are the “tricky bits” where the developer is forced to think about implementation problems, about the minutiae of either the system or how to program it.

    These are things like bugs that need to be understood and fixed, but also trying to reason in unfamiliar or unintuitive ways (for example, thinking in higher order dimensions, dealing with non-human scales, changing coordinate spaces), anything that causes a person to slow down and have to reason about something before they can continue onto their “good/real work.”

If code sharing is addition, and working with others is multiplication, then this becomes like a form of differential calculus.

I want us to contribute to tools that will minimize the amount of time we spend dealing with the rough kind of thinking. I want those sorts of details to be trivialized, so that the good kind of thinking becomes more natural. That’ll let us spend more time on the problem domain, which will open up all kinds of new ways of thinking.

I’m not exactly sure what those sorts of tools look like yet, but I have some hints to help me find them (and I’d love to hear how you think we can find them, too):

  1. Any time you’ve got to draw something out on paper (geometry, visualizing data or program flow, etc.) is probably a good hint we could develop a tool to help reason about this problem better.

  2. If it’s a common source of bugs for programmers, this is probably another candidate (off-by-one errors, array counting, regular expressions, sockets or other kinds of stream-bound data, etc.).

  3. Any time where it takes a lot of tries and tweaks to get something just right. If you’ve spent too much time trying to tweak or visualize how an animation or graphic should look, this might be a great place to look for creating a tool for better reasoning about it.

I’ve got some preliminary work done on this, but nothing I’m ready to show off quite yet. In the meantime you might like to check out the Super Debugger, as a crude attempt until I’m ready.

These certainly aren’t the only pain points we software developers need help reasoning about, but it’s a start. And that’s just for software developers. I’ve completely left out everybody else. Every physicist, every teacher, every architect, every doctor, every novelist. These are all domains which could benefit greatly from new ways of reasoning to help them do their work better. But I think we should start with the problems we’re having before we can begin to help anyone else (hint: we shouldn’t be solving anybody’s problems, we should be giving them the tools to create their own way of thinking and reasoning better. It would be presumptuous to assume we software developers know how to fix the world’s ills. But we can enable them to do it.).

Pull Requests Volume 2: Giving Great Pull Request Reviews


This article acts as a complement to Pull Requests Volume 1, which focused on writing great pull requests. Now I’m going to focus on the other side of things: how to do a great job as a reviewer of a pull request.

Like in the first article, I’m going to write this from the perspective of a reviewer on a team of developers working on an iOS app, using GitHub’s Pull Request feature. However, many of the things I’ll discuss apply equally to any kind of software and any kind of version control system. These guidelines are based on my experience working at Shopify and The New York Times.

The overarching theme behind both of these guides is to give examples to help software developers work better together. I’ve seen too many examples of ego getting in the way of quality. Software developers are professionals, but we often have difficulty with social things, especially interpersonal issues. That’s not only limited to how one developer gives criticism, it also includes how another developer takes criticism.

The important thing to realize is that reviewing code, like any professional activity involving developers, is not supposed to be personal. It’s not about making one person feel good or another feel bad. It’s about improvement, both for the developers and the software they make. Keep that in mind when you’re acting as a reviewer, or when you’re receiving feedback on a pull request you’ve made. It’ll make you both better developers.

You are the Gatekeeper

As a code reviewer, you are acting as a Gatekeeper for your application’s codebase. No matter what you do while reviewing a pull request, the end question has to be: “Does this improve or worsen our codebase?” Your mission is not to accept the pull request until it improves the project, but how you accomplish that varies from team to team and project to project. Here are the things I think are most important to an iOS project, though most of them will apply to any kind of project.

Read the Description

This one should be so obvious it’s almost not worth mentioning, but it’s important enough to still warrant being said: as a reviewer your first task is of course to read the description of the pull request as provided. Hopefully, the developer you’re working with wrote you a detailed one. This is the step where you become familiar with the issue that’s being solved. You might need to read up on a specific issue or story in your company’s issue tracker to do this, or it might be something so simple it was explained fully in the description itself.

Verify it

The next thing you should do as a reviewer is verify that the pull request accomplishes what it set out to do. How you do this depends on how your team works.

In the most basic case, this means reading the source code, hopefully guided by the description provided, looking at how the code works. You’ll want to look at the code difference to see what was deleted and what was added. Here’s where you can spot any immediate issues, like a clearly missed edge case, or other common problems like incorrect use of comparison operators (I’ve been guilty of more less-than-or-equal-to bugs than I’d like to admit). Reviewing pull requests requires a keen eye for things like this.
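As an illustration, here’s the kind of comparison-operator slip a reviewer should catch in a diff (the array here is hypothetical; it’s just a sketch of the bug class):

```objc
#import <Foundation/Foundation.h>

// A classic off-by-one: <= runs the loop one index past the end of
// the array, so the last iteration throws an NSRangeException.
NSArray *items = @[@"first", @"second", @"third"];
for (NSUInteger i = 0; i <= items.count; i++) { // bug: should be i < items.count
    NSLog(@"%@", items[i]); // crashes when i == 3
}
```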

For teams who do their own QA, this is also the time where you’ll do your testing. This means checking out the code locally, running it, and following the test cases provided by the developer. In the best case, the developer has provided lots of cases for you to test against, but here’s where you might find your own too. Good things to look for here include strange input (negative numbers, letters vs numbers, accents and dìáçrïtîcs, emoji, etc.), multitouch interaction, device rotation or view resizing, device sleep states and foreground vs background issues, to name but a few.

If the project has unit tests, make sure the pull request tests all new functionality as needed, and that all the tests pass. GitHub has recently introduced a pull request feature to integrate with build servers, so the tests can even be run automatically before the pull request is merged in, but if not, you can always run them locally.

Above all, you want to make sure this code does what it’s intended to do and doesn’t introduce any new problems.

Code style

While you’re looking at the code, you should be checking to see if it conforms to your project’s style guide. Even if your project doesn’t have an explicit style guide, you probably have a good idea of the general app style in your head (and you should still consider creating an explicit guide).

Don’t be afraid to be diligent here, because even though any style violations may be minor, pointing them out isn’t petty. They add up. There’s nothing wrong with pointing out issues with whitespace, brace location, or naming conventions, especially when there are multiple slips. Both parties should be aware that fixing these issues helps make the code more coherent and consistent for everyone going forward.

Accessibility

Make sure the pull request includes proper accessibility for all new interface elements. This is really important to do from the start, as you build your application, because it can be done incrementally, and your customers will thank you for it. It’s simple enough to build the bare minimum accessibility into your app this way, but if you want to do a stellar job, consider talking to Doug about it.

Any pull request that ignores accessibility features should not be merged into your project until the omissions are fixed.
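To sketch what that incremental work looks like (the button and its label are invented for illustration), a pull request adding a new element should also give it a spoken label:

```objc
#import <UIKit/UIKit.h>

// A hypothetical "share" button added in a pull request. Without an
// accessibility label, VoiceOver has nothing useful to speak for an
// image-only button.
UIButton *shareButton = [UIButton buttonWithType:UIButtonTypeCustom];
[shareButton setImage:[UIImage imageNamed:@"share-icon"]
             forState:UIControlStateNormal];
shareButton.accessibilityLabel = NSLocalizedString(@"Share",
    @"Spoken label for the share button");
shareButton.accessibilityTraits = UIAccessibilityTraitButton;
```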

Localization

Much like accessibility, you should reject any pull request that doesn’t localize user-facing text in your application. This doesn’t mean the request has to include translations, it just means that any string added to the app for user-facing purposes should be localizable.

Even if your project does not currently offer localizations for non-English locales, every pull request should still include this, so your project can be expanded to include other locales at your whim. Pull requests not including localized strings should not be merged in until they’re fixed.
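Concretely, this is just the difference between hard-coding a string and wrapping it in NSLocalizedString (the label and copy here are made up):

```objc
// Don't hard-code user-facing text:
welcomeLabel.text = @"Welcome back!";

// Do make every new string localizable, even if English is the only
// locale today; translations can later be dropped into a
// Localizable.strings file without touching this code:
welcomeLabel.text = NSLocalizedString(@"Welcome back!",
    @"Greeting shown on the home screen");
```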

Documentation

This is one we’ve started doing recently on our team: every pull request that introduces new public API needs to have that public API documented. Since a forthcoming version of Xcode is going to include Headerdoc and Doxygen doc-generation built-in, we figured it was a great time to start writing docs formatted with them (we chose Headerdoc because it seems to be what lots of Apple’s headers are already documented with, but since both formats are supported, it matters less which you pick and more that you are consistent with your choice).

It’s senseless to include docs for every bit of code in the app, so we’ve set the bar at only methods and properties in our public interfaces. Private methods don’t make a whole lot of sense to document, generally speaking, because they’re often subject to change or are too internal to warrant the effort (although there’s really nothing wrong with documenting private methods, either).

Requiring a developer to document their API forces them to think more lucidly about what the API does, what parameters it takes, and how it works. In the course of writing documentation, I’ve realized, once I “read it out loud,” that I’d made code needlessly complicated, and immediately figured out a better way to write it. All this just by trying to explain in documentation what the code does.

As a reviewer, you should reject any code that doesn’t live up to this standard. As new code gets merged in to your project, more and more of it will be documented. New developers will be able to quickly learn how your API works, and new code will be written more clearly as well.
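For what it’s worth, a Headerdoc-formatted comment on a public method might look something like this (the method and class names are invented for illustration):

```objc
/*!
 @method saveDocument:completion:
 @abstract Saves the given document locally and to the sync service.
 @param document The document to save. Must not be nil.
 @param completion Called on the main queue when the save finishes,
        with an NSError describing any failure, or nil on success.
 */
- (void)saveDocument:(SLDocument *)document
          completion:(void (^)(NSError *error))completion;
```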

The Dings

The above is a list of things either I’ve been dinged on before while having my pull requests reviewed, or things I’ll ding other developers for before I merge theirs in. It’s not a list of demerits; they’re not errors you should be ashamed of. They’re just common pain points, things that should not be merged into a project. You shouldn’t feel bad about mentioning any of these, just as you shouldn’t feel bad if you’re “caught” for one of them either. But if the need arises, don’t be afraid to attach an animated gif summing up your feelings.

You’ll be a happier and better development team for it.

Peaks And Troughs. Paddy O’Brien

This put me in mind of something I’ve always felt with respect to my own education, that is to say I like taking beginner classes and reading beginner articles. But I have had trouble putting the reason into words.

Chances are you’ve had a lot of teachers. Stop and think about it. I have been to one junior high, three high schools, two colleges, and two universities and would not care to estimate the total number of teachers because odds are the estimate would be too low.

Right from the time you left grade school and entered junior high it has been a different teacher/educator/mentor/guru/wikipedia editor/your title here, for everything you have undertaken to learn and at every level of expertise. Each one of these educators has spoken from a different base of experience.

In the Spirit of Beginning. Fellow NSNorth speaker Caroline Sauve:

Beginnings are full of anticipation and promise. Full of potential and opportunity. Looking forward to something new and revealing, beginnings are full of creative energy.

Apple, why are you toying with me via -subviews? (mutability vs immutability). Ethical Paul gets mad about UIView’s -subviews array so you don’t have to:

You want to get a view’s subviews and enumerate them.

Maybe you also want to remove some of them from their superview while you do that, why not?

But you have a slight misgiving because what if that would constitute mutation during enumeration?
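The usual way out (my sketch of the standard defensive pattern, not code from Paul’s post) is to enumerate a copy, so removing subviews can’t possibly mutate the array you’re walking:

```objc
// Whether -subviews already hands you a copy is exactly the ambiguity
// in question; copying defensively sidesteps it entirely.
for (UIView *subview in [containerView.subviews copy]) {
    if ([subview isKindOfClass:[UILabel class]]) {
        [subview removeFromSuperview];
    }
}
```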

More Thoughts on “Addition, Multiplication, Integration”

I’ve been thinking a lot about the essay I published last weekend, “Addition, Multiplication, Integration”. In it, I laid out the basics, the vapours of a conceptual framework for building software faster (although the more I think about it, it might not just be about faster, but also about better). The gist of it was:

  1. Building on the work of others with shared/open source code is like Addition.

  2. Building software with others, collaboratively, is like Multiplication.

  3. Building new tools to help us reason about and trivialize the tricksy problems, those beyond our current abilities to easily juggle in our heads, is like forms of Integration and Derivation.

    This was the really important part of the essay. Software developers so often get caught up on the trivial, yet devilish bits of writing programs, where they’re either facing common mistakes and bugs, or they’re facing things they can’t easily think about (dealing with higher dimensions, visualizing large amounts of data, memory, large computations, etc.). By building new tools to help us reason about and truly trivialize these sorts of problems, it should have the effect of making these problems less of a roadblock, and so we can work faster.

    I chose the Integration/Derivation metaphor because not only is that a useful way to arrange things, it also works as sort of a compactor, squishing a higher dimension into a lower one, and spreading the details around. Tricky problems which once occupied an entire plane sprawling in both directions can, once trivialized with tools, be squashed down to a single point. Once an immense vastness, now finite and graspable. This is kind of like Bret Victor’s Up and Down the Ladder of Abstraction.

What I’ve been thinking more about since publishing it, and where the arithmetic and calculus metaphors break down, is that these “levels” aren’t really mutually exclusive; instead they feed and fuel each other. Using open source tools and working with others helps us build new tools faster, the kinds of tools described in the third level, for solving trickier problems. And once those trickier problems are solved, we can create more open source code in which they’re already solved, which helps us work better together, too. It’s a positive feedback loop felt throughout the system.

The question now is, where do we start?

A Vision in Negative Space

Inventions and Visions

I look at all my computing heroes and I see many of the great accomplishments they’ve made. I see many great inventions they created and gave to all of us. I see how they’ve enhanced computing for the betterment of all, and I’ve been trying to find a way to contribute in a meaningful way. I think to myself, “If I could have invented just one of the things they’ve made, even if it took me a lifetime, I’d be happy.” I look on in astonishment and I can’t conceive of how they made their great inventions. At least not until recently.

What I’ve come to realize is all the heroes I look up to, all their inventions weren’t created for their own sake, but were instead created along the road towards a Vision. Doug Engelbart might have helped invent the mouse, hypertext, and collaborative software. Alan Kay might have helped invent the modern Graphical User Interface, the laptop computer, and Object Oriented Programming. But none of these things were inventions for their own sake: they were simply the natural fallout of the vision these people were working with.

Doug Engelbart didn’t set out to invent hypertext, he set out to Augment Human Intellect, and creating a form of non-linear text navigation was just a natural consequence of this vision. Of course he invented hypertext, there’s no way he could have avoided it on his journey.

Alan Kay didn’t set out to invent the Smalltalk programming language, he set out to create a democratized Personal Computer for Children of All Ages, where every part of the system was malleable and executable by any user. Smalltalk was never the goal, it was “just” (emphasizing because in reality, it’s of course a tremendous technical achievement) a vehicle to the next step in the vision.

Seymour Papert didn’t set out to build an electronic turtle and the Logo programming language to power it; he set out to re-imagine what education looked like when the flexibility and dynamic behaviour of the computer was allowed to play a starring role in how a child learned to think and reason. The Logo programming language wasn’t the target; instead, it was an arrow.

Doug Engelbart had a vision to augment human intellect.

Alan Kay has a vision to democratize computing and create a more enlightened society.

Seymour Papert has a vision to unshackle education from the paper and pencil, and create a society fluent in higher-level mathematics and reasoning, enabled by the computer.

Bret Victor has a vision to “invent the medium and representations in which the scientists, engineers, and artists of the next century will understand and create systems.”

I’ve spent so much of my life with my eyes and mind keenly focussed on the inventions of others, blatantly ignoring the purpose of those inventions. It’s like Shakespeare is trying to tell me a story and I’m marvelling at his pencils.

I get so caught up on the inventions themselves I can’t possibly fathom how I’d ever invent anything of that kind of magnitude. But I’m looking at it all wrong. If necessity is the mother of all invention, then I need a necessity. Inventions aren’t the point, they’re just the fruit that falls out of the tree as it reaches to the sky.

Negative Space

I don’t have a vision.

I need a vision.

While I have lots of goals, both short and long term, I consider those separate from a vision, because a goal implies there’s an endpoint. I think with a vision, it’s an on-going thing, with a target forever challenging you to keep moving forward.

I don’t know what my vision is, but until I figure that out, I can look at what it is not. Maybe by carving out the negative space around it, I’ll be able to form one in what’s left behind.

A Vision for What I Don’t Want

In twenty or fifty years, what I don’t want is for people to still be using “apps.” Computer programs as isolated individual little packages, operating independently and ignorantly of one another is not something I want to see in my future. I don’t want computer software to continue to be a digital facsimile of physical products on a store or home shelf.

In twenty or fifty years, what I don’t want is for software to be coded up exclusively in textual formats, which are really just digital analogues of paper punch cards. I don’t want to have to type in code in basically a text editor, have some compiling program spit out a binary, and then have the system launch the software once again for the very first time, leaving me to imagine what the program is doing. This is an antiquated way to build software, and it has no place in my vision.

In twenty or fifty years, what I don’t want is for “professional software developer” to be a common, mainstream job like it is today. There are so many great minds in every field in the world, from the sciences, to medicine, to finance, to families, and they’re all completely at the mercy of professional software developers. A scientist or artist cannot create their own digital tools, as the world exists today, and instead must rely on software developers. This has no place in my vision. I want every person to have control over what they can make on a computer, so much so that it puts software developers like me out of a job. There might be a few of us left around, for things like low-level systems programming, but otherwise, I don’t want my job to exist.

Finally, in twenty or fifty years, what I don’t want is for children to grow up in the same world we have today. I don’t want the education system to continue to ignore the computer’s true capabilities, and instead cling to teaching everything as if we only had paper. I don’t want children to be manipulating “2x + 4 = 10.” I don’t want them to be trapped by paper, but instead I want them to have fewer restrictions on their imaginations. I don’t want them to think of computers as binary beasts of “yes or no”, “right or wrong”, but instead as a digital sandbox where errors and mistakes and messiness are encouraged and explored.

I don’t want my children to grow up in the world I grew up in. I don’t want their education to be the same, I don’t want their environment to be the same.

Encircling a Vision

All of that is to say I’m trying to find what I want to work on for the rest of my life. I’m trying to find a driving force, an inferno and dynamo which will power me and propel my work. Levers and pulleys are wonderful things, but they’re artifacts to help me on my way. I’m trying to build a civilization.

Poetreat Review and Interview

Poetreat is a delightful iOS application by developer Ryan Nystrom designed to do one thing well: Let you write poems and discover rhymes.

The app has a simple but beautiful user interface, which starts with its colourful feathered icon. The main interface should look familiar, yet unique, and fits in well with standard iOS apps. You’ll know how to use it.

From the main interface, you can start typing your poems as you’d type anywhere else, but Poetreat analyzes your text as you type, and helps suggest rhymes for you. It doesn’t just suggest them anywhere though, it also has a sense of the semantics of your poem: it keeps track of the structure of your poem to provide rhymes when you need them according to its rhyme scheme (an ABAC structure, for example).

Poetreat is free in the App Store, with a $0.99 In-App Purchase to unlock iCloud syncing and custom themes.

The app really caught my eye when I realized what it did about recognizing the structure of your text, so I asked Ryan if he’d be interested in an interview. You’ll find our conversation below:


Jason: So first off: What’s in a name? How did you come up with the name for Poetreat?

Ryan: The name Poetreat came out of the blue. The app idea was always set in stone, but the name was really tough. I wanted to find something unique that conveyed what the app was meant for. I never intended the app to be used for poems longer than 8 or so lines, so I was thinking along the lines of “snippet” or “piece”. After a late night brainstorm I opened the pantry to grab a snack, and that’s when it hit: it was a “treat”. I really liked the word because it’s supposed to be a small portion but also delightful.

Jason: Why did you decide to make this app? Did you stick with the same concept from the beginning or is what we see here what an original idea became?

Ryan: Originally I made an Objective-C port of a PHP library for text readability called RNTextStatistics. I created that project just for fun and because there was really nothing out there for Objective-C developers to drag and drop into their projects that offered this sort of analysis on text. It turned out to be a doozy of a project, but an incredible learning experience. The syllable counting naturally led into thinking about meter and rhyme. That’s when I decided to create a poetry app.

Jason: What was the development of the app like? How long have you been working on it?

Ryan: The app actually didn’t take long to develop at all. Call a rhyming API, store some data locally, sync with a backend service, and it was done. I’d estimate just a month or two of tinkering at night and it was completed. For every side project I take on I decide to use a new technology I’m unfamiliar with. One of the big ones for Poetreat was Core Data syncing with AFIncrementalStore. This is a really amazing project. I ended up spending more time creating demo projects to show off how easy it is to sync Core Data with a web service using AFIncrementalStore.

Jason: Poetreat has a lovely and useable interface. What went into the making of that? Was it all custom or did you use open source components or a mixture?

Ryan: You know how I mentioned the development was pretty quick and easy? Well the design was quite the opposite. I’m a developer by education and talent, not a designer. However I really want to be self-reliant because I am sometimes overly critical of others’ work. It makes working in a developer-designer pair really difficult for me. I decided to design this app 100% by myself, but that required a lot of time spent on Dribbble researching what others had done. I created about 5 style guides for Poetreat before I finally settled on the live app.

That isn’t to say I didn’t have help. The designers at my day job helped tremendously with feedback. A couple times they even watched over my shoulder in Photoshop and gave me some tips and tricks.

Jason: The syllable counting feature is something I’ve never seen in an app before. How does that work? What other things do you do with natural language in the app?

Ryan: That all comes from my open source text parsing library RNTextStatistics. Dave Child was the original author of the PHP version. I learned a lot about readability scoring in the process though. It’s even spurred my next app, which is entirely about readability scoring and improvement. The big tests in readability are:

Flesch Kincaid Reading Ease
Flesch Kincaid Grade Level
Gunning Fog Score
Coleman Liau Index
SMOG Index
Automated Readability Index
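
Several of these tests are built from nothing more than sentence, word, and syllable counts. As a rough illustration, here is a naive sketch of the Flesch Reading Ease score; this is not RNTextStatistics’ actual algorithm, whose syllable counter handles many more special cases:

```python
import re

# Naive syllable counting (vowel groups, minus a trailing silent 'e');
# real libraries like RNTextStatistics handle many more special cases.
def count_syllables(word):
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

# Flesch Reading Ease:
# 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
def flesch_reading_ease(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))
```

Higher scores mean easier reading; short words and short sentences push the number up.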

Jason: What are some of your favourite rhymes you’ve found with the help of Poetreat?

Ryan: You know how everyone always uses the word “orange” as the impossible rhyme? Poetreat can actually find some pretty good rhymes for “orange”, my favorite being “lozenge”. It’s not exact, but it’s not a word I’d have ever come up with!

Jason: You’ve previously mentioned to me this is your 7th app for the App Store and you’re really looking to make this a winner. I’ve had my own App Store woes in the past as well. What have you learned from your experiences in the past and how are you using that to your advantage this time around?

Ryan: Well, I’ve not learned but been told that you have to spend money on marketing. I went with two app “marketing” websites:

  1. https://applaunch.us/
  2. http://pitchpigeon.com/dashboard

and filed an official press release. All in all I spent maybe $250, definitely not that much. However now that it’s been 2 weeks I can tell you I will never do that again. Total waste. No reviews, nothing in Google News except the official Press Release. I know that Poetreat isn’t going to set the world on fire, but I think it’s unique enough to warrant some talk. I’m sitting at about 3,500 downloads now, which is by no means a failure. But both of those services above have tracking and analytics for who reads and actually reviews your stuff. I’ve gotten 0 press, most of my downloads are purely by word of mouth and community. I emailed about 15 people on launch day (including yourself!) to take a look at it. Everyone responded with wonderful criticism. I was really happy. I could have only done that and been fine.

I also spent, in my opinion, way too long on design. I could have released this app in November with default UIKit design and it would behave exactly the same. I’m planning on going that route next time. Some UIAppearance and some CALayer animations, but I’m done spending hours in Photoshop.

Jason: In-App Purchase seems to be a popular route these days in the App Store. Why did you decide to offer the app free with IAP?

Ryan: Because it’s the trendy thing currently, and I wanted to see why. However I’m finding 0 difference in money earned between this and my paid apps. I’ve got about a 2.5% conversion; with 3,500 downloads that’s about 87 sales, netting me roughly $61. Now I’ll admit that my IAP doesn’t unlock the most amazing features; I definitely went with a freemium model.

(Good News: This interview was conducted a little before WWDC 2013, but Ryan sent me some updated info about how Poetreat is faring. He says:

Since I sent this email, Poetreat got featured in New & Noteworthy, hit #27 in the US, and #1 in the US Lifestyle category.)


Jason: Almost completely unrelated, but have you ever considered turning the syllable counting and rhyming analysis into a multiplayer/turn-based game? It seems like it could make for an interesting “Rhyme with Friends” kind of game.

Ryan: Abbbsolutely. This is on my plate as a possible follow-up. It’d be even better to go Loren’s route and use Game Center so I don’t have to muck with servers again. Something like “build a poem together”. However I’ve already tackled a poetry app, and it’s shown me that sales won’t be enough to motivate me if the idea isn’t exciting.

Go check out Poetreat in the App Store.

Calca Review and Interview

When I first heard of Frank Krueger’s new app Calca, a fantastic re-imagining of a calculator, mathematical environment, and a text editor, I was hooked. On the surface of its functionality, Calca resembles Soulver, another real-time “calculator” program for your computer. But when you really take a deeper look at Calca, you realize there’s much beneath the surface.

In addition to being a perfect example of how math should be done on a computer (as opposed to making a window with buttons labelled 0-9), Calca gives you instant feedback and instant evaluation. It evaluates as much as it can, given the information it has, but it’s thoughtful enough to be OK when there are unknowns, patiently displaying the variables in-place.

More than just a calculator, Calca allows for Markdown formatted text, with mathematics and their evaluations appearing precisely where they are designated.

I just had to know more about this app and the work that went into it, so I’m delighted to present my interview with its developer Frank Krueger. I hope you enjoy it as much as I did.


Jason: As you explain on your website, Calca grew out of a frustration. How did you come to the decision to solve that with Calca? Had you explored some other ideas first, or did Calca come about all at once?

Frank: In one important way, I have been thinking about Calca for a long time. I found that whenever I needed to do some algebra, perhaps convert from a screen coordinate to a 3D coordinate in one of my applications, I would do my “heavy thinking” in a plain text file. I would write out examples and then go line by line manipulating those examples, and any other equations I could think of along the way, until I had something that I could convert into code.

So I’ve always had these text file “derivations”. But it’s a terrible system: Sometimes I actually needed to do some arithmetic that I couldn’t do in my head. So I would switch to Mac’s Calculator app, or query Wolfram Alpha or use Soulver, and then switch back to my text file and plug the results in. Also, because Copy & Paste were my main tools for doing algebra, I was always suspicious of my own work.

So Calca, you could say, came about all at once as the realization that I should just write a smarter text editor to handle these files I was creating - one that knew arithmetic and algebra, had features from the programming IDEs I use every day, but still tried its best to stay out of your way (programming IDEs are often too rigid to explore ideas.)

The idea that Calca should update as you type was an assumption from the start. I certainly wasn’t going to hit Cmd+R whenever I typed something new!

Jason: What’s your background?

Frank: I have been writing software professionally for 15 years now. I was lucky enough to intern at General Motors in an electronics R&D group writing embedded systems software while I was young. That job taught me a lot about engineering in general, but also about computers. I was writing embedded code in assembler and C and diagnostic tools in C++ before college. On the side, I was also active in groups writing level and content editors for video games.

In college I earned a Master’s degree in Electrical Engineering specializing in control systems (at RIT in NY). I mention this because it was the time when I became most comfortable with mathematics. Modern engineering degrees are sometimes hard to differentiate from applied math degrees!

From there, I moved to Seattle to work at Microsoft. That was a wonderful experience learning how big companies write software, but wasn’t fulfilling enough. So I packed up, went to India and started a company with an old friend building control systems for naval ships. That is the same company I operate today, Krueger Systems. But we ended up “pivoting” (in today’s parlance) to web development consulting and I spent some time doing that.

The introduction of the iPhone freed me from the terrible world of web development (this was before JavaScript’s ascendance, web dev is better now). Starting in the fall of 2008, I wrote a huge assortment of apps, some of which even shipped. I was basically in love with the iPhone and was just doing my best to create interesting software for it. I didn’t make much money from these apps, but I’m still proud of a bunch of them.

The introduction of the iPad changed everything for me. I started writing bigger and more interesting apps, and they started selling better. iCircuit is my flagship app and my obsession since July 2010 - another app that I wrote as a gift to my 1999 self. It’s a circuit simulator that is constantly showing you results - it has been quite popular with students and hobbyists.

So I’ve been writing iOS apps for nearly 5 years and have been making a living from them for 3.

Jason: What role do you see Calca filling? To me it seems kind of like a melding of a word processor, a spreadsheet, and a calculator all in one. As the creator, how do you see it?

Frank: For me, Calca enables:

  1. Quick and dirty calculations

    No matter how many literate constructs I put in Calca, it is, at its heart, a calculator. I mean for it to replace the Calculator.app. It only takes 1 more keystroke in Calca to accomplish what you can do in Calculator, but once you’re in Calca you have an arsenal of math power at your fingertips.

    I’ll admit that I still use Spotlight for my simple two-number arithmetic problems. But the moment I have more than two numbers, or two operations, I switch to Calca.

  2. Development and Exploration of complex calculations

    Calca makes playing with math - no matter how complex - easy. Even after I used it to verify or derive some equation, I often sit in the tool just playing with inputs to see how the equations change. Perhaps I’m strange, but it’s fun to be in a tool that encourages exploration and experimentation.

  3. Thought-out descriptions and explanations of research or study

    I remember writing lab reports in college and struggling with Word’s equation editor. They’ve improved it, but I have never gotten over just how bothersome the tool was for creating original research. Even after you finished your battle with the WYSIWYG equation editor, you were just left with a pretty picture. Even if you move on to TeX and solve the input problem, you’re still stuck with pretty pictures that do nothing. There was no way to test and prod your equations, no way to give examples of their use. They might as well have been printed by Gutenberg. And so Calca is a step in the direction of a research paper writing tool that actually tries to help you.

Jason: How does Calca compare to Soulver?

Frank: Compare and contrast or just compare? ;-)

Soulver and Calca have the same goals - to make computation easier and faster than using traditional calculators.

Calca takes ideas from Soulver and super sizes them:

  • Soulver allows free text editing within a line of computation; Calca allows free editing in the entire file
  • Soulver updates answers as you type; Calca updates answers and derivations
  • Soulver understands arithmetic; Calca knows algebra, calculus, and algorithms

But Calca also addresses Soulver’s deficiencies:

  • Soulver requires that all variables be defined; Calca does not
  • Soulver does not support user defined functions; to Calca, everything is a function
  • Soulver has no programming constructs; Calca is one step away from being a general purpose programming language.
  • Soulver does not support engineering math: matrices, derivatives, etc.

Anyway, I don’t want to go on since I do love Soulver. It was a first step to getting away from Excel and old-fashioned calculators - Calca is an attempt to build upon that progress.

Jason: When I first saw Calca, as a live-editing mathematical environment, I was reminded of Bret Victor’s Kill Math project (specifically his “Scrubbing Calculator”). Do you think Calca provides a better interface for doing Math than the mostly “pen-and-paper” derived way we do it today?

Frank: Yes, I do believe it’s superior to real pen and paper since eventually you will want the aid of a machine to do some math for you.

But I see tablet-based “pen and paper” as a perfectly valid input mechanism. In fact, the earliest versions of Calca, from years ago, used hand writing recognition for input. I was trying to avoid the keyboard all-together.

What I found was that recognition poses some hard problems: (1) fast, unambiguous input, (2) recognition of dependencies (order), and (3) high information density on the screen. It’s easy to create fun tech demos but very hard to create reliable apps that won’t just frustrate you.

So, for now, the keyboard is a superior input device - especially on the desktop. But I am completely open to more natural mechanisms.

Jason: I should clarify, when I say “pen and paper” I don’t just mean writing in the sense of hand-written math, what I should have said was “traditional symbolic” math. Whether written on paper or typed, something like “10 = 2x + 2” is represented the same way.

Do you think the traditional symbolic notation is sufficient for math, or do you think it can be improved with the benefits afforded by a computer (near-instant computation, simulation, limitless memory, graphical capabilities, etc.)? Symbolic notation is convenient when all you’ve got is paper, but do you have any thoughts on a “better interface” for math?

Frank: Oh, ha! Yes, I completely misunderstood.

That’s quite the philosophical question, and I hope you’ll forgive that what I say is stream of consciousness. While I have given the question a lot of thought in the field of robotics and engineering, I have not really considered how new tech like this influences mathematics.

So let’s start with YES. Simulation, near-instant simulation (like iCircuit), visualization: yes, these are often superior representations of mathematics/physical systems. I’m reminded of Bret Victor questioning why we use jagged little lines to represent resistors when computers give us the ability to represent the resistor more richly as a graphical IV curve (current vs. voltage). I am on board with this.

MATLAB’s Simulink is an early example of trying to improve design work using visualizations. Now their semantics are old and not very powerful, and their visualizations are not real time, but it gets the job done - it’s an improvement over writing procedural code that must interact with real complex systems.

I have high hopes that these modern tools can progress to allow arithmetic on their visualizations for these myriad applications. Imagine Bret’s resistors in series so that their curves combine into one resistor. Cool.

My only concern with these computation intensive visual tools is how well they will handle algebra - the manipulation and combination of all the symbols that are used to create this data. When I design an amplifier using an operational amplifier, I know that the gain is:

G = 1 + Rf/Rg

Now let’s say I’m in Bret’s tool that has curves instead of resistors. While he makes it easier to play with the values of Rf and Rg and see the effects of this, he doesn’t make it easier to visualize the gain equation, this algebra. I can play with the values all day long and never meet my design goal. Or worse, I stumble upon one of the solutions (there’s an infinite number) and assume that it’s the only solution.

Just like with a traditional circuit schematic, designers can’t do their job reliably unless they know the rule: G = 1 + Rf/Rg. Naive visualizations and computations don’t instruct; they only reinforce what you already know. (Unless of course you stare at the visualizations long enough and teach yourself the equation.)

Now let’s say I’m an EE student from 50 years ago with no simulation software and no visualizations. Somehow, I still need to learn enough to get us to the moon. That little equation, G = 1 + Rf/Rg, is all you need to design the correct amplifier - no computers or software necessary. Not only that, but I could probably derive the equation using just three others: v = i * R, Vout = A*(Vp - Vm), A = Infinity. There is power in being able to manipulate symbols.
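
For the curious, that derivation does go through in a few lines, assuming the standard ideal non-inverting amplifier topology:

```latex
% Feedback divider (v = iR across Rf and Rg, no current into the op-amp):
V_m = V_{out}\,\frac{R_g}{R_f + R_g}
% The op-amp equation V_{out} = A\,(V_p - V_m) with A \to \infty forces V_p = V_m.
% Since V_p = V_{in}:
V_{in} = V_{out}\,\frac{R_g}{R_f + R_g}
\quad\Longrightarrow\quad
G = \frac{V_{out}}{V_{in}} = \frac{R_f + R_g}{R_g} = 1 + \frac{R_f}{R_g}
```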

So what I’m getting at is this: I have high hopes for these modern visualizers. I just hope that they find ways to increase our understanding and not decrease it (by not exposing governing equations). If they don’t increase our understanding, then we might as well stick with Calca. :-)

I hope you can make some sense of that.

Jason: You said you developed an LALR(1) parser generator for Calca. What is your background on writing parsers? How did you go about creating the parser?

Frank: I didn’t write the generator, just the grammar. The generator is jayc, which is a port of jay which is an implementation of yacc’s parsing algorithm. :-)

I wrote my first parsers back as that young intern at General Motors. When you work with embedded systems, you really learn what a compiler does. I spent a lot of time comparing my C code to the machine code that the compiler would generate. This ended up being a fantastic way to internalize, or grok, how computers work. Understanding compilers was then just a matter of writing some code to convert between the two.

I remember learning to write my first parser using Stroustrup’s The C++ Programming Language book. He has a chapter in there where he builds a little calculator utility. It’s still one of the best introductions I know of to recursive-descent parsing. It was mind-opening as a young programmer. From there, I gorged myself on everything Niklaus Wirth had to say on the subject (and every other subject), and wrote Scheme interpreters (following SICP) over and over again until I understood language design and implementation tradeoffs.

Calca’s parser is basically a yacc file from 1970. You start with IDENTIFIER and NUMBER tokens, and slowly build your way up to recognizing your full language. It’s actually quite fun. If you like using regular expressions in your programs, you have no idea what you’re missing by not using full blown parsers.
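
The build-up Frank describes, from NUMBER tokens to a full language, can be illustrated with a toy recursive-descent calculator in the spirit of the Stroustrup example he mentions. This is a sketch, not Calca’s actual grammar, which is LALR(1) and generated by jayc:

```python
import re

# A toy recursive-descent calculator: start with NUMBER tokens
# and operators, and build the grammar up rule by rule.
TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|(.))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN.findall(src):
        tokens.append(float(number) if number else op)
    return tokens

def parse(src):
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():      # expr := term (('+' | '-') term)*
        nonlocal pos
        value = term()
        while peek() in ("+", "-"):
            op = tokens[pos]
            pos += 1
            value = value + term() if op == "+" else value - term()
        return value

    def term():      # term := factor (('*' | '/') factor)*
        nonlocal pos
        value = factor()
        while peek() in ("*", "/"):
            op = tokens[pos]
            pos += 1
            value = value * factor() if op == "*" else value / factor()
        return value

    def factor():    # factor := NUMBER | '(' expr ')'
        nonlocal pos
        tok = peek()
        if tok == "(":
            pos += 1         # consume '('
            value = expr()
            pos += 1         # consume ')'
            return value
        pos += 1             # consume the number
        return tok

    return expr()
```

A yacc grammar expresses the same `expr`/`term`/`factor` rules declaratively and lets the generator build the state machine for you.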

Jason: What technologies did you use for making this project? Did you build upon open source or did you roll it all your own? How did you build the project over the course of three months (did you have many beta testers, how did the app evolve, etc.)?

Frank: The app is 100% my code written in C#. I use Xamarin tools as my IDE and to compile the app for iOS/Mac/whatever else I’m in the mood for. C# provides a lot of benefits over older languages, especially when it comes to writing interpreters and you’re constantly re-writing expression trees.

I did look to license a CAS (Computer Algebra Library) but couldn’t find a good match for what I wanted and that had workable license terms.

At first, the app didn’t have big ambitions on the mathematics side so it seemed perfectly reasonable to write my own parser and interpreter. I had done that many times before so there wasn’t too much risk. While Calca’s engine is only now getting to be on par with these more mature libraries, and I apologize to users who actually need all the power of Mathematica, having a language and execution model that is specifically designed to be humane has benefited all of us.

There was a core of 3 beta testers working over the entire development time of the app. I know, not a lot of people, but we didn’t know that anyone else would like it! :-) To help make up for the lack of testers, a pretty extensive set of automated tests has been developed. Last I checked, I run about 3,000 unit tests whenever something changes to make sure that the engine is consistent.

That said, having 1,000 people use your app is quite different than 3 so I’ve been fixing a fair share of v1.0 bugs. :-)

Jason: Most of my peers (myself included) work in a lot of Objective-C. What differences have you found between it and C#? How do you find iOS development with a non-Xcode environment?

Frank: Ah, I didn’t know; I wouldn’t have been so glib.

The two big differences between C# and Obj-C are the garbage collector and the more succinct syntax. As for the GC, JWZ said it best: “I have to admit that, after that, all else is gravy”. Having one around instead of manual memory management, or reference counting, or even automatic reference counting frees you to focus on computation instead of data management. There is a correlation between the popularity of scripting languages and the fact that they have a GC.

On top of that C# just has a lot of syntactic convenience. It’s a two-pass compiler so you don’t have to repeat declarations. There is a unified safe type system (like Obj-C’s id, except that it applies to everything, including C code). It has modern language features: generator functions, list comprehensions, closures that can capture variables, co-routines that can be written using procedural syntax, and on and on. There is also a giant class library and a huge 3rd party ecosystem. The real benefit is that it can still access C code and functions natively so I can use the entire SDK. So mostly pros, and just a few cons.

My first two years of iOS development were in Xcode, but it was a younger version so it’s hard for me to compare the two IDEs. Generally speaking, Xamarin’s tools have superior code completion and debugging visualization. But both IDEs are mature powerful products, and I honestly like them both.

I have not had any real problems being “non-Xcode”. There was that silly scare years ago when Apple was trying to block non Obj-C apps, but that was an obvious blunder on their side so I never concerned myself with it.

Jason: Where do you see Calca going forward?

Frank: Well, I’m submitting v1.1 to Apple today (for both platforms), which fixes a lot of the v1.0 bugs (Note: v1.1 has since been released to the App Store).

After that, there is a lot of low-hanging-fruit that I cut for v1.0:

  1. Unit conversion
  2. Plotting
  3. Solving differential equations
  4. Larger library of utility functions

These are all features I want myself, so they’re guaranteed to be added. I’m also actively listening to feedback. This v1.1 release is essentially all fixes from bugs submitted by users.

After all that, we as a Calca community have to see how to proceed. As I use it as a tool every day in my own work, the app will be maintained as long as I’m still working. But we as a community will have to decide what additional features would make it more powerful, or if it, itself, should be eventually supplanted for something better. Always advance the state of the art!

Jason: It’s funny you should mention plotting. I’ve actually been working on a math-related program in my spare time lately, specifically to do with graphical representations of formulas, so when I found Calca, it caught my eye. Do you think plotting is an essential function missing from (most of) today’s “calculators”? Will you be approaching some of Mathematica’s functionality?

Frank: Plotting is a very important feature.

I have spent a lot of time in Grapher.app playing with different blending functions:

y = x
y = x^2
y = sqrt(x)

I would just look at the graphs and decide which one met my needs. Now I could do this with a calculator by asking for its derivatives at the points x = [0, 1] but that’s tedious and silly. It’s much easier to see a plot and just pick the one that seems right. I want that feature in Calca.
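
That kind of eyeballing can be roughed out numerically too. Here is a quick sketch sampling the three curves above on [0, 1]; values only, where Grapher would show you the shapes:

```python
import math

# Sample the three blending curves on [0, 1]; on the open interval
# x^2 stays below the line (eases in slowly) and sqrt(x) stays above it.
def sample(f, n=5):
    return [f(i / (n - 1)) for i in range(n)]

linear   = sample(lambda x: x)
ease_in  = sample(lambda x: x * x)
ease_out = sample(lambda x: math.sqrt(x))
```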

I plan on supporting plotting of one-parameter functions like: g(x) = sin(x) + 1/x^5. These will be displayed as graphs similar to Grapher’s.

I also will try to add two-parameter functions: h(x, y) = if x > 0.5 && y > 0.5 then 1 else 0. These will be displayed as images.

I’m not sure how far to go from there - I will simply listen to feedback and see what more features people need.

Jason: Finally, any chance of a Mac version?

Frank: It’s available now! Calca for Mac OS X

I’d like to thank Frank once again for taking the time to be interviewed. Go buy Calca.

Subscribing to Speed of Light

This site got a bit of unexpected traffic today, which was wonderful. If you’re looking to subscribe via RSS/Atom, check out the Subscribe page below.

I’m currently working on an Auto-tweet feature, and you can follow @ospeedoflight for updates there. It might be a little bumpy over the weekend as I iron out the bugs, but it should be a good way to stay up to date with all the things I publish here. Some older articles are also missing, as is a link to my archives. Those should be resolved this weekend, too.

Thank you so much for reading “Speed of Light”!

Your App Makes Me Fat. Kathy Sierra on the moral responsibilities of psychological effects of software and marketing:

In 1999, Professor Baba Shiv (currently at Stanford) and his co-author Alex Fedorikhin did a simple experiment on 165 grad students. They asked half to memorize a seven-digit number and the other half to memorize a two-digit number. After completing the memorization task, participants were told the experiment was over, and then offered a snack choice of either chocolate cake or a fruit bowl.

The participants who memorized the seven-digit number were nearly 50% more likely than the other group to choose cake over fruit.

Researchers were astonished by a pile of experiments that led to one bizarre conclusion:

Willpower and cognitive processing draw from the same pool of resources.


My father died unexpectedly last week, and as happens when one close to us dies, I had the “on their deathbed, nobody thinks…” moment. Over the past 20 years of my work, I’ve created interactive marketing games, gamified sites (before it was called that), and dozens of other projects carefully, artfully, scientifically designed to slurp (gulp) cognitive resources for… very little that was “worth it”. Did people willingly choose to engage with them? Of course. And by “of course” I mean, not really, no. Not according to psychology, neuroscience, and behavioral economics research of the past 50 years. They were nudged/seduced/tricked. And I was pretty good at it. I am so very, very sorry.

My goal for Serious Pony is to help all of us take better care of our users. Not just while they are interacting with our app, site, product, but after. Not just because they are our users, but because they are people.

Because on their deathbed, our users won’t be thinking, “If only I’d spent more time engaging with brands.”

So much good stuff in here, I think you should read the whole thing.

Bret Victor’s “The Future of Programming”. References and follow-up to his satirical and theatrical talk at Dropbox’s conference.

My favourite part:

“The most dangerous thought you can have as a creative person is to think you know what you’re doing.”

It’s possible to misinterpret what I’m saying here. When I talk about not knowing what you’re doing, I’m arguing against “expertise”, a feeling of mastery that traps you in a particular way of thinking.

But I want to be clear – I am not advocating ignorance. Instead, I’m suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it’s poison.

Knowledge is essential. Past ideas are essential. Knowledge and ideas that have coalesced into theory is one of the most beautiful creations of the human race. Without Maxwell’s equations, you can spend a lifetime fiddling with radio waves and never invent radar. Without dynamic programming, you can code for days and not even build a sudoku solver.

It’s good to learn how to do something. It’s better to learn many ways of doing something. But it’s best to learn all these ways as suggestions or hints. Not truth.

Learn tools, and use tools, but don’t accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you “know what you’re doing”.

Book Burning Advocate Ray Bradbury on Creativity and Computers. Awesome cantankerous bastard:

You don’t miss what you’ve never had. People talk about sex when you’re 12 years old and you don’t know what they’re talking about - I don’t know what people are talking about when they talk about driving. I grew up with roller skates, a bicycle, using the trolley and bus lines until they went out of existence. No, you don’t miss things. Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers - I’ll outcreate every goddamn sonofabitch in the room.

Goldie Blox: A Building Toy Tailored for Girls. GoldieBlox, Inc. is a toy company founded in 2012 by Debbie Sterling, a female engineer from Stanford University. Engineers are solving some of the biggest challenges our society faces. They are critical to the world economy, earn higher salaries and have greater job security. And they are 89% male. We believe engineers can’t responsibly build our world’s future without the female perspective.

GoldieBlox offers a much-needed female engineer role model who is smart, curious and accessible. She has the potential to get girls interested in engineering, develop their spatial skills and build self-confidence in their problem solving abilities. This means that GoldieBlox will nurture a generation of girls who are more confident, courageous and tech-savvy, giving them a real opportunity to contribute to the progress made by engineers in our society.

This is a terrific idea. Check out the video too.

The Original WorldWideWeb Browser-Editor. World Wide Web creator Tim Berners-Lee on his original Web Browser:

The first web browser - or browser-editor rather - was called WorldWideWeb as, after all, when it was written in 1990 it was the only way to see the web. Much later it was renamed Nexus in order to save confusion between the program and the abstract information space (which is now spelled World Wide Web with spaces).

I wrote the program using a NeXT computer. This had the advantage that there were some great tools available - it was a great computing environment in general. In fact, I could do in a couple of months what would take more like a year on other platforms, because on the NeXT, a lot of it was done for me already.

The Web was originally built to not only be browsed graphically, but also edited graphically. HTML was not intended to be edited directly.

Otherlab on Independent Research. Saul at Otherlab talking about independent research:

It is not clear to me that the post WW2 model of national research, largely done in National Labs, Universities, and a tiny amount in industry is the future. In fact before WW2 a large portion of research was done in independent and industrial research labs. I know from the experience of our lab, that we are faster, cheaper, and at least as rigorous, and probably more creative, than good federal or academic research centers.

Great Pacific Garbage Patch. Disgusting:

The Great Pacific Garbage Patch, also described as the Pacific Trash Vortex, is a gyre of marine debris in the central North Pacific Ocean located roughly between 135°W to 155°W and 35°N and 42°N. The patch extends over an indeterminate area, with estimates ranging very widely depending on the degree of plastic concentration used to define the affected area.

The patch is characterized by exceptionally high concentrations of pelagic plastics, chemical sludge and other debris that have been trapped by the currents of the North Pacific Gyre. Despite its size and density, the patch is not visible from satellite photography, since it consists primarily of suspended particulates in the upper water column. Since plastics break down to even smaller polymers, concentrations of submerged particles are not visible from space, nor do they appear as a continuous debris field. Instead, the patch is defined as an area in which the mass of plastic debris in the upper water column is significantly higher than average.

Everything You See Is Future Trash. The article on Garbage I linked to earlier reminded me of this 2009 interview with Robin Nagle of the Department of Sanitation of New York. It’s a thought-provoking interview about our treatment of garbage, how we ignore it, and how we stigmatize it. I think bringing these things to the surface might be wise.

Garbage is generally overlooked because we create so much of it so casually and so constantly that it’s a little bit like paying attention to, I don’t know, to your spit, or something else you just don’t think about. You—we—get to take it for granted that, yeah, we’re going to create it, and, yeah, somebody’s going to take care of it, take it away. It’s also very intimate. There’s very little we do in twenty-four hours except sleeping, and not always even sleeping, when we don’t create some form of trash. Even just now, waiting for you, I pulled out a Kleenex and I blew my nose and I threw it out, in not even fifteen seconds. There’s a little intimate gesture that I don’t think about, you don’t think about, and yet there’s a remnant, there’s a piece of debris, there’s a trace.[…]

Well, it’s cognitive in that exact way: that it is quite highly visible, and constant, and invisibilized. So from the perspective of an anthropologist, or a psychologist, or someone trying to understand humanness: What is that thing? What is that mental process where we invisibilize something that’s present all the time?

The other cognitive problem is: Why have we developed, or, rather, why have we found ourselves implicated in a system that not only generates so much trash, but relies upon the accelerating production of waste for its own perpetuation? Why is that OK?

And a third cognitive problem is: Every single thing you see is future trash. Everything. So we are surrounded by ephemera, but we can’t acknowledge that, because it’s kind of scary, because I think ultimately it points to our own temporariness, to thoughts that we’re all going to die.[…]

It’s an avoidance of addressing mortality, ephemerality, the deeper cost of the way we live. We generate as much trash as we do in part because we move at a speed that requires it. I don’t have time to take care of the stuff that surrounds me every day that is disposable, like coffee cups and diapers and tea bags and things that if I slowed down and paid attention to and shepherded, husbanded, nurtured, would last a lot longer. I wouldn’t have to replace them as often as I do. But who has time for that? We keep it cognitively and physically on the edges as much as we possibly can, and when we look at it head-on, it betrays the illusion that everything is clean and fine and humming along without any kind of hidden cost. And that’s just not true.


That sort of embarrassment is directed at people on the job every day on the street, driving the truck and picking up the trash.

People assume they have low IQs; people assume they’re fake mafiosi, wannabe gangsters; people assume they’re disrespectable. Unlike, say, a cop or a firefighter. And I do believe very strongly it’s the most important uniformed force on the street, because New York City couldn’t be what we are if sanitation wasn’t out there every day doing the job pretty well.

And the health problems that sanitation’s solved by being out there are very, very real, and we get to forget about them. We don’t live with dysentery and yellow fever and scarlet fever and smallpox and cholera, those horrific diseases that came through in waves. People were out of their minds with terror when these things came through. And one of the ways that the problem was solved—there were several—but one of the most important was to clean the streets. Instances of communicable and preventable diseases dropped precipitously once the streets were cleaned. Childhood diseases that didn’t need to kill children, but did. New York had the highest infant mortality rates in the world for a long time in the middle of the nineteenth century. Those rates dropped. Life expectancy rose. When we cleaned the streets! It seems so simple, but it was never well done until the 1890s, when there was this very dramatic transformation.

You should just read the interview.

Autoworkers of Our Generation. Greg Baugues:

Fifty years ago, an autoworker could provide a middle-class existence for his family. Bought a house. Put kids through college. Wife stayed home. He didn’t even need a degree.

That shit’s over. Detroit just went bankrupt.

No one’s got it better than developers right now. When the most frequent complaint you hear is “I wish recruiters would stop spamming me with six-figure job offers,” life’s gotten pretty good.[…]

No profession stays on top forever… just ask your recently graduated lawyer friends.

Although the autoworkers analogy works, I think there’s a better one for current software developers: We’re like those who were capable of writing long before that ability was shared with the masses.

We have forms and means to express ourselves which are superior (in their own ways) to static writings. For instance, I can write an essay and you can read exactly the thoughts I decided you should read. But I can write a piece of software to also express those points — and other arguments as well — and you the “reader” get to explore my thoughts and in a sense, ask my “thoughts” questions. This is a superior trait over the plain written word.

Since we software developers can express thoughts in ways people who can “only” read and write cannot, we are quite like the privileged folk of centuries past, who could express thoughts in written word which exceeded what could be spoken.

The question is, should we milk it for what it’s worth or should we embrace it as a moral responsibility to give everybody this form of expression?

Jeffrey Bezos, Washington Post’s next owner, aims for a new ‘golden era’ at the newspaper. Paul Farhi and Craig Timberg for the Washington Post:

But Bezos suggested that the current model for newspapers in the Internet era is deeply flawed: “The Post is famous for its investigative journalism,” he said. “It pours energy and investment and sweat and dollars into uncovering important stories. And then a bunch of Web sites summarize that [work] in about four minutes and readers can access that news for free. One question is, how do you make a living in that kind of environment? If you can’t, it’s difficult to put the right resources behind it. . . . Even behind a paywall [digital subscription], Web sites can summarize your work and make it available for free. From a reader point of view, the reader has to ask, ‘Why should I pay you for all that journalistic effort when I can get it for free’ from another site?”

Why indeed.

Whatever the mission, he said, The Post will have “readers at its centerpiece. I’m skeptical of any mission that has advertisers at its centerpiece. Whatever the mission is, it has news at its heart.”

There you have it. All the major newspaper companies are shrinking, but now the Washington Post has outside investment, allowing it to experiment with new models and discover its future.

If the Web is eating your business from the low end, and your competitor has newfound deep pockets, where does that leave your business?

NSA Foils Much Internet Encryption. Nicole Perlroth for my employer, the New York Times:

Many users assume — or have been assured by Internet companies — that their data is safe from prying eyes, including those of the government, and the N.S.A. wants to keep it that way. The agency treats its recent successes in deciphering protected information as among its most closely guarded secrets, restricted to those cleared for a highly classified program code-named Bullrun, according to the documents, provided by Edward J. Snowden, the former N.S.A. contractor.

Beginning in 2000, as encryption tools were gradually blanketing the Web, the N.S.A. invested billions of dollars in a clandestine campaign to preserve its ability to eavesdrop. Having lost a public battle in the 1990s to insert its own “back door” in all encryption, it set out to accomplish the same goal by stealth.

Don’t worry about it though. What can be done, right?

iPads for students halted after devices hacked. Howard Blume:

Soon they were sending tweets, socializing on Facebook and streaming music through Pandora, they said.

L.A. Unified School District Police Chief Steven Zipperman suggested, in a confidential memo to senior staff obtained by The Times, that the district might want to delay distribution of the devices.

“I’m guessing this is just a sample of what will likely occur on other campuses once this hits Twitter, YouTube or other social media sites explaining to our students how to breach or compromise the security of these devices,” Zipperman wrote. “I want to prevent a ‘runaway train’ scenario when we may have the ability to put a hold on the roll-out.”

How dare kids enjoy technology. They’re supposed to be learning, not enjoying!

Some Weekend Inspiration

I was moved by this bit from John Markoff’s “What the Dormouse Said”, a tale of 1960s counterculture and how it helped create the personal computer:

Getting engaged precipitated a deep crisis for Doug Engelbart. The day he proposed, he was driving to work, feeling excited, when it suddenly struck him that he really had no idea what he was going to do with the rest of his life. He stopped the car and pulled over and thought for a while.

He was dumbstruck to realize that there was nothing that he was working on that was even vaguely exciting. He liked his colleagues, and Ames was in general a good place to work, but nothing there captured his spirit.

It was December 1950, and he was twenty-five years old. By the time he arrived at work, he realized that he was on the verge of accomplishing everything that he had set out to accomplish in his life, and it embarrassed him. “My God, this is ridiculous, no goals,” he said to himself.

That night, when he went home, he began thinking systematically about finding an idea that would enable him to make a significant contribution in the world. He considered general approaches, from medicine to studying sociology or economics, but nothing resonated. Then, within an hour, he was struck in a series of connected flashes of insight by a vision of how people could cope with the challenges of complexity and urgency that faced all human endeavors. He decided that if he could create something to improve the human capability to deal with those challenges, he would have accomplished something fundamental.

In a single stroke, Engelbart experienced a complete vision of the information age. He saw himself sitting in front of a large computer screen full of different symbols. (Later, it occurred to him that the idea of the screen probably came into his mind as a result of his experience with the radar consoles he had worked on in the navy.) He would create a workstation for organizing all of the information and communications needed for any given project. In his mind, he saw streams of characters moving on the display. Although nothing of the sort existed, it seemed the engineering should be easy to do and that the machine could be harnessed with levers, knobs, or switches. It was nothing less than Vannevar Bush’s Memex, translated into the world of electronic computing.

This bit resonated with me for several reasons, one of which will become clear in the coming weeks. But the really important thing isn’t just that Engelbart recognized a dissatisfaction with his life and how to fix it. It’s not that he had a stroke of vision to invent so much of what modern personal computers would (mostly incorrectly) be based on. What’s really important is that he then went on to see his vision through.

Remember, this epiphany happened to him in 1950, and his groundbreaking “Mother of all Demos” presentation wasn’t until 1968. It might seem like something so grand had to come all at once (especially considering how long ago it was), but it took nearly two decades to be realized.

Interview Between Doug Engelbart & Belinda Barnet, 10th Nov, 1999. Engelbart:

[…] So for instance, in our environment, we would never have thought of having a separate browser and editor. Just everyone would have laughed, because whenever you’re working on trying to edit and develop concepts you want to be moving around very flexibly. So the very first thing is get those integrated.

Then [in NLS] we had it that every object in the document was intrinsically addressable, right from the word go. It didn’t matter what date a document’s development was, you could give somebody a link right into anything, so you could actually have things that point right to a character or a word or something. All that addressability in the links could also be used to pick the objects you’re going to operate on when you’re editing. So that just flowed. With the multiple windows we had from 1970, you could start editing or copying between files that weren’t even on your windows.

Also we believed in multiple classes of user interface. You need to think about how big a set of functional capabilities you want to provide for a given user. And then what kind of interface do you want the user to see? Well, since the Macintosh everyone has been so conditioned to that kind of WIMP environment, and I rejected that, way back in the late 60s. Menus and things take so long to go execute, and besides our vocabulary grew and grew.

And the command-recognition [in the Augment system]. As soon as you type a few characters it recognises, it only takes one or two characters for each term in there and it knows that’s what’s happening.

Fear Of Wasting My Life Online Looking At Pictures

FOWMLOLAP, for short.

You see, there’s this thing called FOMO: The Fear of Missing Out (which I’ve previously talked about on this site). It’s a phenomenon about the feeling you get when you see your peer’s activity online, particularly on social networks, and it gives you that empty feeling because you see all the things you’re not doing.

I like to think I am, to a degree at least, somewhat immune to the FOMO. I’m not totally unaffected by it, but I feel like I’m antisocial enough so that at least it doesn’t bother me too much to see others’ activities online.

What does bother me, more and more, is the fear that I’m wasting my life looking at pictures online. When I get to the heart of things, so much of my non-working life online is spent looking at pictures. There’s Instagram and Flickr and Tumblrs. There’s the stuttery sites like FFFFound (design porn) and Dribbble (design masturbation). Then there’s Twitter, which has its own share of photos or links to click (most of the links have lots of photos). There’s Macrumors and the Verge, and there’s my RSS reader, too. Although some of those sources have “news”, there’s almost always too much for me to read in a day, so I skip most of it. Back to the pictures.

These are my sites. When I’ve finished with them, I’ll start channel-changing back with the first ones all over again.

I’m not saying these websites are all bad, or even bad at all. I’m not saying there aren’t good aspects to them. I’m not saying everyone who uses them is wasting their lives.

I am saying, however, this is what I see myself doing. I have no fear of missing out because so often, WMLOLAP seems to be exactly what I want to do. And that’s why it gives me the Fear, because in reality, I so super very much do not want to do that.

This is my channel-changing-challenge.

More Thoughts. Relatedly, me, about a year ago:

I’ve been ruminating over this in my head for a while now. Why do I have a compulsion and anxiety to read it all? Why do I have to know? Why does it feel so important when I know it isn’t?

Dismantling “iOS Developer Discomfort”. Ash Furrow, on some “new” developer techniques:

When I first saw some of the approaches, which I’ll outline below, I was uncomfortable. Things didn’t feel natural. The abstractions that I was so used to working in were useless to me in the new world.

Smart people like these don’t often propose new solutions to solved problems just because it’s fun. They propose new solutions when they have better solutions. Let’s take a look.

Hold on to your butts, here comes a good ol’ fashioned cross-examination.

On Declarative Programming,

Our first example is declarative programming. I’ve noticed that some experienced developers tend to shy away from mutating instance state and instead rely on immutable objects.

Declarative programming abstracts how computers achieve goals and instead focuses on what needs to be done. It’s the difference between using a for loop and an enumerate function.

On the surface, I agree with this, but for different reasons. The primary reason this is important, and why functional programming languages have similar or better advantages here, is that they eliminate state. How you do that, whether by declarative or functional means, is in some ways irrelevant. The problem is that our current programming languages do a terrible job of representing logic and instead leakily abstract the computer (hey, objects are blocks of memory that seem an awful lot like partitions of RAM…), so the state of an application becomes a hazard.
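
The for-loop-versus-enumerate-function contrast in the quote can be sketched in a few lines. Python here is purely illustrative (the post’s context is Objective-C and Cocoa, but the distinction is language-agnostic): the imperative version spells out how to build the result, mutating intermediate state along the way, while the declarative version just states what the result is.

```python
# Imperative: describe *how* to compute the result, step by step,
# mutating an accumulator as you go.
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# Declarative: describe *what* the result is; no visible loop index,
# no intermediate mutation for the reader to track.
def squares_declarative(numbers):
    return [n * n for n in numbers]

print(squares_imperative([1, 2, 3]))   # [1, 4, 9]
print(squares_declarative([1, 2, 3]))  # [1, 4, 9]
```

Both produce the same output; the difference is how much mutable state the reader has to simulate in their head to believe it.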

But there are also times when eliminating state isn’t an option, and in those cases declarative languages fall short, too. State is sometimes requisite when dealing with systems, and in that case state should be shown. It’s a failure of the development environment to have hidden state. As Bret Victor says in his Learnable Programming, programming environments must either show or eliminate state. A language or environment which does neither is not going to make programming significantly better, and therefore will remain in obscurity.

Objective-C is an abomination (I love it anyway).

I agree. It’s an outdated aberration. We need something that’s much better. Not just a sugar-coating like Go was to C++ (this was completely intentional, mind you, but if we’re going to get a new programming language, it damn well better be leaps and bounds ahead of what we’ve got now).

It’s a message-oriented language masquerading as an object-oriented language built on top of C, an imperative language.

Actually, the concepts were originally supposed to be inseparable. Alan Kay, who coined the term “Object Oriented Programming,” used it to describe a system composed of “smaller computers” whose main strength was its components communicating through messages. Classes and objects just sort of arose from those. Messages are a tremendously misunderstood concept among Object Oriented Programmers. I’d highly suggest everyone do their reading.

It was hard to get into declarative programming because it stripped away everything I was used to and left me with tools I was unfamiliar with. My coding efficiency plummeted in the short term. It seemed like prematurely optimizing my code. It was uncomfortable.

I don’t think it makes me uncomfortable because it’s unfamiliar, but because things like Reactive Cocoa, grafted on to Objective C as they are, create completely bastardized codebases. They fight the tools and conventions every Cocoa developer knows, and naturally have a hard time existing in an otherwise stateful environment.

It’s inherent in what Reactive Cocoa is trying to accomplish, and would be inherent in anyone trying to graft on a new programming concept to the world of Cocoa. What we need is not a framework poised atop a rocky foundation, but a new foundation altogether. Reactive Cocoa tries to solve the problem of unmanageable code in entirely the wrong way. It’s the equivalent of saying “there are too many classes in this file, we should create a better search tool!” (relatedly, I think working with textual source code files in the first place severely constrains software development. But more on that in some future work I’ll publish soon).

Dot-syntax is a message-passing syntax in Objective-C that turns obj.property into [obj property] under the hood. The compiler doesn’t care either way, but people get in flame wars over the difference.

Dot-syntax isn’t involved with message passing, just involved with calling messages. It’s a subtle but important difference.

In the middle of the spectrum, which I like, is the use of dot-syntax for idempotent values. That means things like UIApplication.sharedApplication or array.count but not array.removeLastObject.

The logic is noble but I think still flawed. I think methods should be treated like methods and properties like properties because semantically they represent two different aspects of objects, because dot-syntax was designed specifically for properties, because Apple Developers advise against it and because it just makes them harder to search for. Not only that, but dots present early binding of methods to objects, which goes against the late-binding principles of the original Objective C design.

It’s also hard because Xcode’s autocomplete will not help you with methods like UIColor.clearColor when using the dot-syntax. Boo.

This is almost always a sign!

[Autolayout] promised to be a magical land where programming layouts was easy and springs and struts (auto resizing masks) were a thing of the past. The reality was that it introduced problems of its own, and Autolayout’s interface was non-intuitive for many developers.

This is another place where there’s an obvious flaw in the way things work in our development environment. Constraint-based systems are notoriously difficult because they normally require all variables to be solved simultaneously, leaving no room for flexibility. This flies in the face of what a computer program writer is used to, and thus is hard to rectify. When combined with an interface which invisibly presents (i.e., does not present) these constraints, developers are left with nothing short of a clusterfuck.
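
To make the “all variables solved simultaneously” point concrete, here is a toy sketch, in Python and assumed for illustration only (a real constraint solver, like the Cassowary algorithm Autolayout is based on, is far more general): a constraint such as “center the view vertically in its superview” is a linear equation the engine rearranges and solves alongside every other constraint, not an imperative frame assignment.

```python
# Toy sketch: "center view vertically in superview" as a linear equation.
#
#   view.y + view.height / 2 == superview.height / 2
#
# Solved for the one free variable, view.y:
def solve_vertical_center(superview_height, view_height):
    return (superview_height - view_height) / 2

# A second constraint on the same view ("pin 20pt from the top", say)
# would be solved simultaneously with this one; each equation here
# happens to have a single unknown, which is what keeps the toy trivial.
print(solve_vertical_center(480, 100))  # 190.0
```

The pain the paragraph describes shows up when constraints share unknowns: the whole system succeeds or fails together, and a tool that doesn’t show you the system leaves you debugging equations you can’t see.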

I’ve wanted to believe, and I’ve abandoned Autolayout every year since its introduction because of this. While it does improve year over year, I believe there are fundamental problems it won’t be able to overcome while sticking with the same paradigms we’ve got today.

Springs and struts are familiar, while Autolayout is new and uncomfortable. Doing simple things like vertically centring a view within its superview is difficult in Autolayout because it abstracts so much away.

It’s not because Autolayout is new and unfamiliar, but because it adds more elements to a layout while hiding them. It doesn’t abstract too much away; rather, it makes it impossible to deal with what is presented.

And finally,

Change is hard. Maybe the iOS community will resist declarative programming, as has the web development community for two decades.

Except for the Hypertext Markup Language and Cascading Style Sheets (languages which generate these aren’t solely web development languages any more than Objective C is), of course. HTML is perhaps the most successful declarative system we’ve ever invented.

But actually finally,

We’re in a golden age of tools

I hope at this point it’s clear I disagree with this. Some may say at least today we’ve got the best tools we’ve ever had, but to them I’d suggest looking at any of the tools developed by Xerox PARC in the 70s, 80s, or 90s. And as far as today’s or yesterday’s tools go, I think we’re far from a land of opulence. But there is hope. Today we have at our disposal exceptionally fast, interconnected machines, far outpacing anything previously available. We have networks dedicated to educating about all kinds of toolmaking, from programming to information graphics design, to language design.

We’re in a golden age of opportunity, we just need to take a chance on it.

Stock and Flow. Related to FOWMLOLAP, Robin Sloan:

But I’m not saying you should ignore flow! No: this is no time to hole up and work in isolation, emerging after long months or years with your perfectly-polished opus. Everybody will go: huh? Who are you? And even if they don’t—even if your exquisitely-carved marble statue of Boba Fett is the talk of the tumblrs for two whole days—if you don’t have flow to plug your new fans into, you’re suffering a huge (here it is!) opportunity cost. You’ll have to find them all again next time you emerge from your cave.

(Via Ryan McCuaig)

All the Things You Love to Do

In yesterday’s Apple Keynote, Phil Schiller used almost the exact same phrase while talking about the new Retina MacBook Pros (26:40):

For all the things you love to do: Reading your mail, surfing the Web, doing productivity, and even watching movies that you’ve downloaded from iTunes.

And about the iPad (65:15):

The ability to hold the internet in your hands, as you surf the web, do email, and make FaceTime calls.

It gave me pause to think, “If my computers can already do this, why then should I be interested in these new ones?” Surf the web, read email? My computers do this just fine.

Although Macs, iOS devices, and computers in general are capable of many forms of software, people seem resigned to the idea that “surf the web, check email, etc.” is what computers are for, and I think that’s because it’s the message companies keep pushing our way.

The way Apple talks about it, it almost seems like it’s your duty, some kind of chore, “Well, I need a computer because I need to do those emails, and surf those websites,” instead of an enabling technology to help you think in clearer or more powerful ways. “You’re supposed to do these menial tasks,” they’re telling me, “and you’re supposed to do it on this computer.”

This would be like seeing a car commercial where the narrator said “With this year’s better fuel economy, you can go to all the places you love, like your office and your favourite restaurants.” I may be being a little pedestrian here, but it seems to me like car commercials are often focussing on new places the car will take you to. “You’re supposed to adventure,” they’re telling me, “and you’re supposed to do it in this car.”

What worries me isn’t Apple’s marketing. Apple is trying to sell computers and it does a very good job at it, with handsome returns. What worries me is people believing “computers are for surfing the web, checking email, writing Word documents” and nothing else. What worries me is computers becoming solely commodities, with software following suit.

How do you do something meaningful with software when the world is content to treat it as they would a jug of milk?

Who Wants A Stylus?

The stylus is an overlooked and under-appreciated mode of interaction for computing devices like tablets and desktop computers, with many developers completely dismissing it without even a second thought. Because of that, we’re missing out on an entire class of applications that require the precision of a pencil-like input device which neither a mouse nor our fingers can match.

Whenever the stylus as an input device is brought up, the titular quote from Steve Jobs inevitably rears its head. “You have to get ‘em, put ‘em away, you lose ‘em,” he said in the MacWorld 2007 introduction of the original iPhone. But this quote is almost always taken far out of context (and not to mention, one from a famously myopic man — that which he hated, he hated a lot), along with his later additional quote about other devices, “If you see a stylus, they blew it.”

What most people seem to miss, however, is that Steve was talking about a class of devices whose entire interaction required a stylus and couldn’t be operated with just fingers. If every part of the device needed a stylus, then it’d be difficult to use single-handed, and deadweight were you to misplace the stylus. These devices, like the Palm PDAs of yesteryear, were frustrating to use because of that, but that’s no reason to outlaw the input mechanism altogether.

Thus, Steve’s myopia has spread to many iOS developers. Developers almost unanimously assert the stylus is a bad input device, but again, I believe it’s because those quotes have put in our minds an unfair black and white picture: Either we use a stylus or we use our fingers.

“So let’s not use a stylus.”

Let’s imagine for a moment or two a computing device quite a lot like one you might already own. It could be a computing device you use with a mouse or trackpad and keyboard (like a Mac) or it could be a device you use with your fingers (like an iPad). Whatever the case, imagine you use such a device on a regular basis with solely the main input devices provided with the computer, just as you do now. But this computer has one special property: it can magically make any kind of application you can dream of, instantly. This is your Computer of the Imagination.

One day, you find a package addressed to you has arrived on your doorstep. Opening it up, you discover something you recognize, but are generally unfamiliar with. It looks quite a bit like a pencil without a “lead” tip. It’s a stylus. Beside it in the package is a note that simply says “Use it with your computer.”

You grab your Computer of the Imagination and start to think of applications you can use which could only work with your newly arrived stylus. What do they look like? How do they work?

You think of the things you’d normally do with a pencil. Writing is the most obvious one, so you get your Computer of the Imagination to make you an app that lets you write with the stylus. It looks novel because, “Hey, that’s my handwriting!” on the screen, but you soon grow tired of writing with it. “This is much slower and less accurate than using a keyboard,” you think to yourself.

Next, you try making a drawing application. This works much better, you think to yourself, because the stylus provides accuracy you just couldn’t get with your fingers. You may not be the best at drawing straight lines or perfect circles, but thankfully your computer can compensate for that. You hold the stylus in your dominant hand while issuing commands with the other.

Your Computer of the Imagination grows bored and prompts you to think of another application to use with the stylus.

You think. And think. And think…

If you’re drawing a blank, then you’re in good company. I have a hard time thinking of things I can do with a stylus because I’m thinking in terms of what I can do with a pencil. I’ve grown up drawing and writing with pencils, but doing little else. If the computer is digital paper, then I’ve pretty much exhausted what I can do with analog paper. But of course, the computer is so much more than just digital paper. It’s dynamic, it lets us go back and forth in time. It’s infinite in space. It can cover a whole planet’s worth of area and hold a whole library’s worth of information.

But what could this device do if it offered a different way to interact? I’m not claiming the stylus is new, but to most developers, it’s at least novel. What kind of doors could a stylus open up?

“Nobody wants to use a stylus.”

I thought it’d be a good idea to ask some friends of mine their thoughts on the stylus as an input device, both on how they use one today, and what they think it might be capable of in the future (note these interviews were done in July 2013, I’m just slow at getting things published).

Question: How do you find support in apps for the various styluses you’ve tried?

Joe Cieplinski: I’ve mainly used it in Paper, iDraw, and Procreate, all of which seem to have excellent support for it. At least as good as they can, given that the iPad doesn’t have touch sensitivity. In other apps that aren’t necessarily for art I haven’t tried to use the stylus as much, so can’t say for sure. Never really occurred to me to tap around buttons and such with my stylus as opposed to my finger.

Ryan Nystrom: I use a Cosmonaut stylus with my iPad for drawing in Paper. The Cosmonaut is the only stylus I use, and Paper is the only app I use it in (also the only drawing app I use). I do a lot of prototyping and sketching in it on the go. I have somewhat of an art background (used to draw and paint a lot) so I really like having a stylus over using my fingers.

Dan Leatherman: Support is pretty great for the Cosmonaut, and it’s made to be pretty accurate. I find that different tools (markers, paintbrushes, etc.) adapt pretty well.

Dan Weeks: For non-pressure sensitive stylus support it’s any app and I’ve been known to just use the stylus because I have it in my hand. Not for typing but I’ve played games and other apps besides drawing with a stylus. Those all seem to work really well because of the uniformity of the nib compared to a finger.

Question: Do you feel like there is untapped (pardon the pun) potential for a stylus as an input device on iOS? It seems like most people dismiss the stylus, but it seems to me like a tool more precise than a finger could allow for things a finger just isn’t capable of. Are there new doors you see a stylus opening up?

Joe Cieplinski: I was a Palm user for a very long time. I had various Handspring Visors and the first Treo phones as well. I remember using the stylus quite a bit in all that time. I never lost a stylus, but I did find having to use two hands instead of one for the main user interface cumbersome.

The advantage of using a stylus with a Palm device was that the stylus was always easy to tuck back into the device. One of the downsides to using a stylus with an iPad is that there’s no easy place to store it. Especially for a fat marker style stylus like the Cosmonaut.

While it’s easy to dismiss the stylus, thanks to Steve Jobs’ famous “If you see a stylus, they blew it” quote, I think there are probably certain applications that could benefit more from using a more precise pointing device. I wouldn’t ever want a stylus to be required to use the general OS, but for a particular app that had good reason for small, precise controls, it would be an interesting opportunity. Business-wise, there’s also potential there to partner up between hardware and software manufacturers to cross promote. Or to get into the hardware market yourself. I know Paper is looking into making their own hardware, and Adobe has shown off a couple of interesting devices recently.

Ryan Nystrom: I do, and not just with styli (is that a word?). I think Adobe is on to something here with the Mighty.

I think there are 2 big things holding the iPad back for professionals: touch sensitivity (i.e. how hard you are pressing) and screen thickness.

The screen is honestly too thick to be able to accurately draw. If you use a Jot or any other fine-tip stylus you’ll see what I mean: the point of contact won’t precisely correlate with the pixel that gets drawn if your viewing angle is not 90 degrees perpendicular to the iPad screen. That thickness warps your view and makes drawing difficult once you’ve removed the stylus from the screen and want to tap/draw on a particular point (try connecting two 1px dots with a thin line using a Jot).

There also needs to be some type of pressure sensitivity. If you’re ever drawing or writing with a tool that blends (pencil, marker, paint), quick+light strokes should appear lighter than slow, heavy strokes. Right now this is just impossible.

Oleg Makaed: I believe we will see more support from the developers as styluses and related technology for iPad emerge (as to me, stylus is an early stage in life of input devices for tablets). As of now, developers can focus on solving existing problems: the fluency of stylus detection, palm interference with touch surface, and such.

Tools like the Project Mighty stylus and Napoleon ruler by Adobe can be very helpful for creative minds. Nevertheless, touch screens were invented to make the experience more natural and intuitive, and stylus as a mass product doesn’t seem right. Next stage might bring us wearable devices that extend our limbs and will act in a consistent way. The finger-screen relationships aren’t perfect yet, and there is still room for new possibilities.

Dan Leatherman: I think there’s definite potential here. Having formal training in art, I’m used to using analog tools, and no app (that I’ve seen) can necessarily emulate that as well as I’d like. The analog marks made have inconsistencies, but the digital marks just seem too perfect. I love the idea of a paintbrush stylus (but I can’t remember where I saw it).

Dan Weeks: I think children are learning with fingers but that finger extensions, which any writing implement is, become very accurate tools for most people. That may just have been the way education was and how it focused on writing so much, but I think it’s a natural extension that with something you can use multiple muscles to fine tune the 3D position of you’ll get good results.

I see a huge area for children and information density. With a finger in a child-focused app larger touch targets are always needed to account for clumsiness in pointing (at least so I’ve found). I imagine school children would find it easier to go with a stylus when they’re focused, maybe running math drills or something, but for sure in gesturing without blocking their view of the screen as much with hands and arms. A bit more density on screen resulting from stylus based touch targets would keep things from being too simple and slowing down learning.

Jason: What about the stylus as something for enhancing accessibility?

Doug Russell: I haven’t really explored styluses as an accessibility tool. I could see them being useful for people with physical/motor disabilities. Something like those big ol cosmonaut styluses would probably be well suited for people with gripping strength issues.

Dan Weeks: I’ve also met one older gentleman that has arthritis such that he can stand to hold a pen or stylus but holding his finger out to point with it grows painful over time. He loves his iPad and even uses the stylus to type with.

It seems the potential for the stylus is out there. It’s a precise tool, it enhances writing and drawing skills most of us already have, and it makes for more precise and accessible software than we can get with 44pt fingertips.

Creating a software application that requires a stylus is almost unheard of in the iOS world, where most apps are unfortunately aimed at the lowest common denominator of the mass market. Instead, I see the stylus as an opportunity for a new breed of specialized, powerful applications. As it stands today, the stylus is almost entirely overlooked.


Riccardo Mori on the Stylus. Responding to my “Who Wants a Stylus?” piece from earlier this week:

Now, as someone who handles a lot of text on lots of devices, here’s a stylus-based application I’d love to use: some sort of powerful writing environment in which I could, for example, precisely select parts of a text, highlight them, copy them out of their context and aggregate them in annotations and diagrams which could in turn maintain the link or links to the original source at all times, if needed.

Similarly, it would be wonderful if groups of notes, parts of a text, further thoughts and annotations, could be linked together simply by tracing specific shapes with the stylus, creating live dependences and hierarchies.

This is precisely the sort of thing I hoped to rouse with my essay, and I’m glad to hear the creative gears spinning. What Riccardo proposes sounds like a fantastic use of the stylus, and reminds me about what I’ve read on Doug Engelbart’s NLS system, too.

Marco Arment Representing What’s Wrong and Elitist About Software Development. Marco Arment:

But I’m not a believer that everyone should podcast, or that podcasting should be as easy as blogging. There’s actually a pretty strong benefit to it requiring a lot of effort: fewer bad shows get made, and the work that goes into a good show is so clear and obvious that the effort is almost always rewarded.

It’s fine to not believe everyone should podcast, but the notion that podcasting should not be easy, that it should be inaccessible and that this is a good thing, is incredibly pompous and arrogant. It’s pompous and arrogant because it implies only those who have enough money to buy a good rig and enough time and effort to waste on editing (and yes, it is a waste if a better tool could do it with less time or effort) should be able to express themselves and be heard as podcasters. It says “If you can’t pay your dues, then you don’t deserve to be listened to.”

It would be like saying “blogging shouldn’t be as easy as typing in a web form, and if fewer people were able to do it, it’d make it better for everyone who likes reading blogs” (Marco, by the way, worked at Tumblr for many years), which is as absurd as it is offensive.

Podcasts, blogging, and the Web might not have been founded on meritocratic ideals, but I think it’s safe to say anyone who pays attention sees them as equalizers, that no matter how big or how small you are, you can have just as much say as anyone else. That it doesn’t always end up that way isn’t the point. The point is, these media bring us closer to an equal playing field than anything before.

Making a good podcast will never be as easy as writing text, and if you’re a podcast listener, that’s probably for the best.

Making a good podcast will never be as easy as writing text, except for the fact that podcasts involve speaking, an innate human ability most of us learn around age 1, while writing (a not-innate ability) must be learned later. We spend many of our waking hours speaking, and few people write at any length.

If you’re a podcast listener, it’s probably for the best.

Why do We Step on Broken Glass?

Have you ever stepped barefoot on a piece of broken glass and got it stuck in your foot? It was probably quite painful and you most likely had to go to the hospital. So why did you step on it? Why do we do things that hurt us?

The answer, of course, is we couldn’t see we were stepping on a piece of glass! Perhaps somebody had smashed something the night before and thought they’d swept up all the pieces, but here you are with a piece of glass in your foot. But the leftover pieces are so tiny, you can’t even see them. If you could see them, you certainly would not have stepped on them.

Why do we do harmful things to ourselves? Why do we pollute the planet and waste its resources? Why do we fight? Why do we discriminate and hate? Why do we ignore facts and instead trust mystics (i.e., religion)?

The answer, of course, is we can’t see all the things we’re doing wrong. We can’t see how bombs and drones harm others across the world because theirs is a world different from ours. We can’t see how epidemics spread because germs are invisible, and if we’re sick then we’re too occupied to think about anything else. We can’t see how evolution or global climate change could possibly be real because we only see things on the scale of a human lifetime, not over hundreds or thousands of years.

Humans use inventions to help overcome the limits of our perception. Microscopes and telescopes help us see the immensely small and the immensely large, levers and pulleys help us move the massive. Books help us hear back in time.

Our inventions can help us learn more about time and space, more about ourselves and more about everyone else, if we choose, but so frequently it seems we choose not to do that. We choose to keep stepping on glass, gleefully ignorant of why it happens. “This is how the world is,” we think, “that’s a shame.”

The most flexible and obvious tool we can use to help make new inventions is of course the computer, but it’s not going to solve these problems on its own, and it’s far from the end of the road. We need to resolve to invent better ways of understanding ourselves and each other, better ways of “seeing” with all our senses that which affects the world. We need to take a big step and stop stepping on broken glass.

Understanding Software

Many of us create (and all of us use) software, but few if any of us have examined software as a medium. We bumble in the brambles, blind to the properties of software, how we change it, and most importantly, how it changes us.

In this talk, I examine the medium of software, how it collectively affects us, and demonstrate what it means for new kinds of software we’re capable of making.

I will be presenting “Understanding Software” at Pivotal Labs NYC on February 25, and if you create or use software, I invite you to come.

Objective C is a Bad Language But Not For The Reasons You Think It Is: Probably; Unless You’ve Programmed With it for a While In Which Case You Probably Know Enough To Judge For Yourself Anyway: The Jason Brennan Rant

When most programmers are introduced to Objective C for the first time, they often recoil in some degree of disgust at its syntax: “Square brackets? Colons in statements? Yuck” (this is a close approximation of my first dances with the language). Objective C is a little different, a little other, so naturally programmers are loath to like it at first glance.

I think that reaction, the claim that Objective C is bad because of its syntax, results from two factors which coincided: 1. The syntax looks very alien; 2. Most learned it because they wanted to jump in on iOS development, and Apple more or less said “It’s this syntax or the highway, bub”, which put perceptions precariously between a rock and a hard place. Not only did the language taste bad, developers were also forced to eat it regardless.

But any developer with even a modicum of sense will eventually realize the Objective part of Objective C’s syntax is among its greatest assets. Borrowed largely (or shamelessly copied) from Smalltalk’s message sending syntax by Brad Cox (see Object Oriented Programming: An Evolutionary Approach for a detailed look at the design decisions behind Objective C), Objective C’s message sending syntax has some strong benefits over traditional object.method() method calling syntax. It allows for later binding of messages to objects, and perhaps most practically, it lets code read like a sentence, with each parameter prefaced with its purpose in the message.
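To make that concrete, here is a real NSMutableString method call in the keyword-message style, next to a rough sketch of how the same call would look in a conventional dot-call language (the dot-call version is my own invention for comparison):

```objc
// Each argument is labelled by the piece of the selector that precedes it,
// so the call reads roughly like a sentence:
[string replaceOccurrencesOfString:@"colour"
                        withString:@"color"
                           options:NSCaseInsensitiveSearch
                             range:NSMakeRange(0, string.length)];

// The equivalent in traditional dot syntax is a bare, positional argument list,
// where you must remember what each argument means:
// string.replaceOccurrences("colour", "color", CASE_INSENSITIVE, range(0, string.length))
```

The selector here, `replaceOccurrencesOfString:withString:options:range:`, is the method’s full name; the purpose of every argument travels with the call site.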

Objective C’s objects are pretty flexible when compared to similar languages like C++ (compare the relative fun of extending and overriding parts of classes in Objective C vs C++), and can be extended at runtime via Categories or through the runtime functions themselves (more on those soon), but Objective C’s objects pale in comparison to those of a Smalltalk-like environment, where objects are always live and browsable. Though objects can be extended at runtime, they seldom are, and are instead almost exclusively fully built by compile time (that is to say, yes, lots of objects are allocated during the runtime of an application, and yes, some methods are added via Categories which are loaded in at runtime, but rarely are whole classes added to the application at runtime).
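A Category, for instance, grafts a method onto a class you don’t own, bound in when the category loads rather than when the class was defined (a small sketch; the method name is my own invention):

```objc
#import <Foundation/Foundation.h>

// Adds a -reversedString method to every NSString in the program,
// without subclassing; the method is attached when this category loads.
@interface NSString (SOLReversing)
- (NSString *)reversedString;
@end

@implementation NSString (SOLReversing)
- (NSString *)reversedString {
    NSMutableString *reversed = [NSMutableString stringWithCapacity:self.length];
    for (NSInteger i = (NSInteger)self.length - 1; i >= 0; i--) {
        [reversed appendFormat:@"%C", [self characterAtIndex:(NSUInteger)i]];
    }
    return reversed;
}
@end
```

This is about as dynamic as most shipping Objective C gets: the extension happens at load time, not while the program is live in front of you, which is exactly the gap between Objective C and a true Smalltalk-like environment.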

This compiled nature, along with the runtime functions, points to the real crux of what’s wrong with Objective C: the language still feels quite tacked on to C. The language was in fact originally built as a preprocessor to C (cf. Cox), but over the years it has been built up a little sturdier, all the while remaining atop C. It’s a superset of C, so all C code is considered Objective C code, which includes:

  • Header files
  • The preprocessor
  • Functions
  • Manual Memory Management (ARC automates this, but it’s still “automatic-manual”)
  • Header files
  • Classes are fancy structs (now C++ structs according to Landon Fuller)
  • Methods are fancy function pointers
  • Header files
  • All of this is jury-rigged by runtime C (and/or C++) functions

In addition, Objective C has its own proper warts: it lacks method visibility modifiers (like protected, private, partytime, and public), lacks class namespacing (although curiously protocols exist in their own namespace), requires method declarations for public methods, lacks a proper importing system (yes, I’m aware of @import), leans on C’s library linking because it lacks its own, has header files, has a weak abstraction for dynamically sending messages (selectors are not much more than strings), must be compiled and re-run to see changes, etc.
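The selectors point is easy to demonstrate: the dynamic-dispatch abstraction bottoms out in what is essentially a string lookup, which the compiler can barely check for you (a sketch; `greeting` is assumed to be an NSString):

```objc
// A selector built from a string at runtime. A typo in the string isn't a
// compile error; it's an unrecognized-selector crash when the message is sent.
SEL selector = NSSelectorFromString(@"lowercaseString");
if ([greeting respondsToSelector:selector]) {
    NSString *lowered = [greeting performSelector:selector];
}
```

Compare this with a language whose method references are first-class, typed values: there, the equivalent mistake simply fails to compile.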

John Siracusa has talked at length about what kinds of problems a problem-ridden language like Objective C can cause in his Copland 2010 essays. In short, Objective C is a liability.

I don’t see Apple dramatically improving or replacing Objective C anytime soon. It doesn’t seem to be in their technical culture, which still largely revolves around C. Apple has routinely added language features (e.g., Blocks, modules) and libraries (e.g., libdispatch) at the C level. Revolving around a language like C makes sense when you consider Apple’s performance-driven culture (“it scrolls like butter because it has to”). iOS, Grand Central Dispatch, Core Animation, and WebKit are all written at the C or C++ level, where runtime costs are near-nonexistent and where higher level concepts like a true object system can’t belong, due to the performance goals the company is ruled by.

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive it. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. I work on programming languages professionally at Hopscotch, but I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

Authors of ‘The Federalist Papers’ request Facebook rename ‘Paper’

In a letter received by Speed of Light postmarked February 3rd, 2014, the authors of The Federalist Papers contend Facebook’s latest iPhone app, Paper, should be renamed. The authors, appearing under the pseudonym Publius, write:

It has been frequently remarked, that it seems to have been reserved to the creators of Facebook, by their conduct and example, they might choose to appropriate the name Paper for their own devices. We would like to see that changed.

The authors, predicting the counter-argument that the name “paper” is a common noun, write:

Every story has a name. Despite the fact the word “paper” is indeed a generic term, and despite the fact the original name of our work was simply The Federalist (Papers was later appended by somebody else), we nonetheless feel because our work was published first, we are entitled to the name Paper. The Federalist Papers have been circulating for more than two centuries, so clearly, we have a right to the name.

The polemic towards Facebook seems to be impelled by Facebook’s specific choice of title and location:

It is especially insulting since Facebook has chosen to launch Paper exclusively in America, where each of its citizens is well aware and well versed in the materials of The Federalist Papers. It is as though they believe citizens will be unaware of the source material from which Facebook Paper is inspired. This nation’s citizens are active participants in the nation’s affairs, and this move by Facebook is offensive to the very concept.

Publius provide a simple solution:

We believe it is the right of every citizen of this nation to have creative freedoms and that’s why we kindly ask Facebook to be creative and not use our name.

Code Reading. Peter Seibel on code as literature:

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: “Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs.”

I think it’s true that code is not literature, but I also think it’s kind of a bum steer to approach code like science. We investigate things in science because we have to. Nature has created the world a certain way, and there’s no way to make it understandable but to investigate it.

But code isn’t a natural phenomenon, it’s something made by people, and as such we have the opportunity (and responsibility) to make it accessible without investigation.

If we need to decode something, something that we ourselves make, I think that’s a sign we shouldn’t be encoding it in the first place.

(via Ryan McCuiag)

NSNorth 2014. I can’t believe I haven’t yet written about it, but Ottawa’s very own NSNorth is back this year and it’s looking to be better than ever (that’s saying a lot, considering I spoke at the last one!).

This year’s speaker lineup looks great, including Mattt Thompson, Jessie Char, and Don Melton.

Last year was a total blast, and Ottawa is probably Canada’s second prettiest city. You won’t regret it.

Read Speed of Light in Chronological Order

One of the features I wrote down when designing Speed of Light in 2010 was to be able to read the website in chronological order, the order in which it was originally written.

Now you can.

I wish every website had this feature.

Objective Next

A few weeks ago, I posted my lengthy thoughts about Objective C, what’s wrong with it, and what I think will happen in the future with it:

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive [replacing it]. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. […] I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

There has been lots of talk in the weeks since I posted my article criticizing Objective C, including a post by my friend Ash Furrow, Steve Streza, Guy English, and Brent Simmons. Much of their criticisms are similar or ring true to mine, but the suggestions for fixing the ills of Objective C almost all miss the point:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

I work on programming languages professionally at Hopscotch, which I mention not so I can brag about it but so I can explain this is a subject I care deeply about, something I work on every day. This isn’t just a cursory glance because I’ve had some grumbles with the language. This essay is my way of critically examining and exploring possibilities for future development environments we can all benefit from. That requires stepping a bit beyond what most Objective C developers seem willing to consider, but it’s important nonetheless.

Figure out what we’re trying to make

We think we know what we want from an Objective C replacement. We want Garbage Collection, we want better concurrency, we don’t want pointers. We want better networking, better databases, better syncing functionality. We want a better, real runtime. We don’t want header files or import statements.

These are all really very nice and good, but they’re actually putting the CPU before the horse. If you ask most developers why they want any of those things, they’ll likely tell you it’s because those are the rough spots of Objective C as it exists today. But they’ll say nothing of what they’re actually trying to accomplish with the language (hat tip to Guy English though for being the exception here).

This kind of thinking is what Alan Kay refers to as “instrumental thinking”, where you only think of new inventions in terms of how they can allow you to do your same precise job in a better way. Personal computing software has fallen victim to instrumental thinking routinely since its inception. A word processor’s sole function is to help you lay out your page better, but it does nothing to help your writing (orthography is a technicality).

Such is the same with the thinking that goes around with replacing Objective C. Almost all the wishlists for replacements simply ask for wrinkles to be ironed out.

If you’re wondering what such a sandpapered Objective Next might look like, I’ll point you to one I drew up in early 2013 (while I too was stuck in the instrumental thinking trap, I’ll admit).

It’s easy to get excited about the (non-existent) language if you’re an Objective C programmer, but I’m imploring the Objective C community to try to think beyond a “new old-thing”, to actually think of something that solves the bigger picture.

When thinking about what could really replace Objective C, then, it’s crucial to clear your mind of the minutia and dirt involved in how you program today, and try to think exclusively of what you’re trying to accomplish.

For most Objective C developers, we’re trying to make high quality software that looks and feels great to use. We’re looking to bring a tremendous amount of delight and polish to our products. And hopefully, most importantly, we’re trying to build software that significantly improves the lives of people.

That’s what we want to do. That’s what we want to do better. The problem isn’t about whether or not our programming language has garbage collection, the problem is whether or not we can build higher quality software in a new environment than we could with Objective C’s code-wait-run cycle.

In the Objective C community, “high quality software” usually translates to visually beautiful and fantastically usable interfaces. We care a tremendous amount about how our applications are presented and understood by our users, and this kind of quality takes a mountain of effort to accomplish. Our software is usually developed by a team of programmers and a team of designers, working in concert to deliver on the high quality standards we’ve set for ourselves. More often than not, the programmers become the bottleneck, if only because every other part of the development team must ultimately have their work funnelled through code at some point. This causes long feedback loops in development time, and if it’s frustrating to make and compare improvements to the design, it is often forgone altogether.

This strain trickles down to the rest of the development process. If it’s difficult to experiment, then it’s difficult to imagine new possibilities for what your software could do. This, in part, reinforces our instrumental thinking, because it’s usually just too painful to try to think outside the box. We’d never be able to validate our outside-the-box thinking even if we wanted to! And thus, this too strains our ability to build software that significantly enhances the lives of our customers and users.

With whatever Objective C replacement there may be, whether we demand it or we build it ourselves, isn’t it best to think not of how to improve Objective C but instead of how to make the interaction between programmer and designer more cohesive? Or how to shift some of the power (and responsibility) of the computer out of the hands of the programmer and into the arms of the designer?

Something as simple as changing the colour or size of text should not be the job of the programmer, not because the programmer is lazy (which is mostly certainly probably true anyway) but because these are issues of graphic design, of presentation, which the designer is surely better trained and better equipped to handle. Yet this sort of operation is almost agonizing from a development perspective. It’s not that making these changes is hard, but that it often requires the programmer to switch tasks, when and only when there is time, and then present changes to the designer for approval. This is one loose feedback loop, and there’s no real good reason why it has to be this way. It might work out pretty well the other way.

Can you think of any companies where design is paramount?

When you’re thinking of a replacement for Objective C, remember to think of why you want it in the first place. Think about how we can make software in better ways. Think about how your designers can improve your software if they had more access to it, or how you could improve things if you could only see them.

This is not just about a more graphical environment, and it’s not just about designers playing a bigger role. It’s about trying to seek out what makes software great, and how our development tools could enable software to be better.

How do we do it?

If we’re going to build a replacement programming environment for Objective C, what’s it going to be made of? Most compiled languages can be built with LLVM quite handily these days—


We’ve absolutely got to stop and check more of our assumptions first. Why do we assume we need a compiled language? Why not an interpreted language? Objective C developers are pretty used to this concept, and most developers will assert compiled languages are faster than interpreted or virtual-machine languages (“just look at how slow Android is, this is because it runs Java and Java runs in a VM,” we say). It’s true that compiled apps are almost always going to be faster than interpreted apps, but the difference isn’t substantial enough to close the door on them so firmly, so quickly. Remember, today’s iPhones are just-as-fast-if-not-faster than a pro desktop computer ten years ago, and those ran interpreted apps just fine. While you may be able to point at stats and show me that compiled apps are faster, in practice the differences are often negligible, especially with smart programmers doing the implementation. So let’s keep the door open on interpreted languages.

Whether compiled or interpreted, if you’re going to make a programming language then you definitely need to define a grammar, work on a parser, and—


Again, we’ve got to stop and check another assumption. Why make the assumption that our programming environment of the future must be textual? Lines of pure code, stored in pure text files, compiled or interpreted, it makes little difference. Is that the future we wish to inherit?

We presume code whenever we think programming, probably because it’s all most of us are ever exposed to. We don’t even consider the possibility that we could create programs without typing in code. But with all the abundance and diversity of software, both graphical and not, should it really seem so strange that software itself might be created in a medium other than code?

“Ah, but we’ve tried that and it sucked,” you’ll say. For every sucky coded programming language, there’s probably a sucky graphical programming language too. “We’ve tried UML and we’ve tried Smalltalk,” you’ll say, and I’ll say “Yes we did, 40 years of research and a hundred orders of magnitude ago, we tried, and the programming community at large decided it was a capital Bad Idea.” But as much as times change, computers change more. We live in an era of unprecedented computing power, with rich (for a computer) displays, ubiquitous high speed networking, and decades of research.

For some recent examples of graphical programming environments that actually work, see Bret Victor’s Stop Drawing Dead Fish and Drawing Dynamic Visualizations talks, or Toby Schachman’s (of Recursive Drawing fame) excellent talk on opening up programming to new demographics by going visual. I’m not saying any one of these tools, as is, are a replacement for Objective C, but I am saying these tools demonstrate what’s possible when we open our eyes, if only the tiniest smidge, and try to see what software development might look like beyond coded programming languages.

And of course, just because we should seriously consider non-code-centric languages doesn’t mean that everything must be graphical either. There are of course concepts we can represent linguistically which we can’t map or model graphically, so to completely eschew a linguistic interface to program creation would be just as absurd as completely eschewing a graphical interface to program creation in a coded language.

The benefits for even the linguistic parts of a graphical programming environment are plentiful. Consider the rich typographic language we forego when we code in plain text files. We lose the benefits of type choices, of font sizes and weights, of hierarchical groupings. Even without any pictures, think how much more typographically rich a newspaper is compared to a plain text program file. In code, we’re relegated to fixed-width fonts of a single size and weight. We’re lucky if we get any semblance of context from syntax highlighting, and it’s often a battle to impel programmers to use whitespace to create ersatz typographic hierarchies in code. Without strong typography, nothing looks any more important than anything else as you gawk at a source code file. Experienced code programmers can see what’s important, but they’re not seeing it with their eyes. Why should we, the advanced users of advanced computers, be working in a medium that’s less visually rich than even the first movable type printed books, five centuries old?

And that’s to say nothing of other graphical elements. Would you like to see some of my favourite colours? Here are three: #fed82f, #37deff, #fc222f. Aren’t they lovely? The computer knows how to render those colours better than we know how to read hex, so why doesn’t it do that? Why don’t we demand this of our programming environment?
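The hex values above make the point: the numbers are trivially machine-readable, so an environment could render the swatch for us. As a minimal sketch (in Python, purely for illustration; the function name and the idea of a tooling hook are my own), here is the conversion a renderer would perform behind the scenes:

```python
def hex_to_rgb(hex_colour):
    """Parse a CSS-style hex colour like '#fed82f' into an (r, g, b) tuple."""
    value = hex_colour.lstrip("#")
    # Take the string two characters at a time and read each pair as base-16.
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))

# The three colours from the text, as the numbers a renderer needs:
for colour in ("#fed82f", "#37deff", "#fc222f"):
    print(colour, "->", hex_to_rgb(colour))
```

Ten lines of code, and the hard part (the actual pixels) is something every display already does; the only missing piece is an editor willing to show the colour inline.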

Objective: Next

If we’re ever going to get a programming environment of the future, we should make sure we get one that’s right for us and our users. We should make sure we’re looking far down the road, not at our shoes. We shouldn’t try to build a faster horse, but we should instead look where we’re really trying to go and then find the best way to get there.

We also shouldn’t rely on one company to get us there. There’s still plenty to be discovered by plenty of people. If you’d like to help me discover it, I’d love to hear from you.

Assorted Followup, Explanation, And Afterthoughts Regarding ‘Objective Next’

On Monday, I published an essay exploring some thoughts about a replacement for Objective C, how to really suss out what I think would benefit software developers the most, and how we could go about implementing that. Gingerly though I pranced around certain details, and implore though I did for developers not to get caught up on certain details, alas many were snagged on some of the lesser important parts of the essay. So, I’d like to, briefly if I may, attempt to clear some of those up.

What We Make

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included), leading us to think about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider the new kinds of things it might enable us to do.

Word processors are a prime example of this. When the personal computer revolution began, it was aided largely by the word processor — essentially a way to automatically typeset your document. The document — the content of what you produced — was otherwise identical, but the word processor made your job of typesetting much easier.

Spreadsheets, on the other hand, were something essentially brand new that emerged from the computer. Instead of just doing an old analog task, but better (as was the case with the word processor), spreadsheets allowed users to do something they just couldn’t do otherwise without the computer.

The important lesson of the spreadsheet, the one I’m trying to get at, is that it got to the heart of what people in business wanted to do: it was a truly new, flexible, and better way to approach data, like finances, sales, and other numbers. It wasn’t just paper with the kinks worked out, it wasn’t just a faster horse, it was a real, new thing that solved their problems in better ways.

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

It’s far from being just about a pretty interface, it’s about rethinking what we’re even trying to accomplish. We’re trying to make software that’s understandable, that’s powerful, that’s useful, and that will benefit both our customers and ourselves. And while I think we might eventually get there if we keep trotting along as we’re currently doing, I think we’re also capable of leaping forward. All it takes is some imagination and maybe a pinch of willingness.

Graphical Programming

When “graphical programming” is brought up around programmers, the lament is palpable. To most, graphical programming translates literally into “pretty boxes with lines connecting them,” something akin to UML, where the “graphical” part of programming is actually just a way for the graphics to represent code (but please do see GRAIL or here, a graphical diagramming tool designed in the late 1960s which still spanks the pants off most graphical coding tools today). This is not what I consider graphical programming to be. This is, at best, graphical coding, and there I lament palpably in agreement.

When I mention “graphical programming” I mean creating a graphical program (e.g., a view with colours and text) in a graphical way, like drawing out rectangles, lines, and text as you might do in Illustrator (see this by Bret Victor (I know, you didn’t expect me to link to him, right?) for probably my inspiration for this). When most people hear graphical programming, they think drawing abstract boxes (that probably generate code, yikes), but what I’m talking about is drawing the actual interface, as concretely as possible (and then abstracting the interface for new data).

There are loads of crappy attempts at the former, and very few attempts at all at the latter. There’s a whole world waiting to be attempted.

Interface Builder

Interface Builder is such an attempt at drawing out your actual, honest to science, interface in a graphical way, and it’s been moderately successful, but I think the tool falls tremendously short. Your interfaces unfortunately end up conceptually the same as mockups (“How do I see this with my real application data? How do I interact with it? How come I can only change certain view properties, but not others, without resorting to code?”). These deficiencies arise because IB is a graphical tool in a code-first town. Although it abides, IB is a second-class citizen so far as development tools go. Checkboxes for interface features get added essentially at Apple’s whim.

What we need is a tool where designing an interface means everything interface related must be a first-class citizen.

Compiled vs Interpreted

Oh my goodness, do I really have to go there? After bleating for so many paragraphs about thinking beyond precisely what must be worked on right-here-and-now, so many got caught up on the Compiled-v-Interpreted part.

Just to be clear, I understand the following (and certainly, much more):

  1. Compiled applications execute faster than interpreted ones.
  2. Depending on the size of the runtime or VM, an interpreted language consumes more memory and energy than a compiled language.
  3. Some languages (like Java) are actually compiled to a kind of bytecode, which is then executed in a VM (fun fact: I used to work on the JVM at IBM as a co-op student).

All that being said, I stand by my original assertion that for the vast majority of the kinds of software most of us in the Objective C developer community build, the differences between the two styles of language in terms of performance are negligible, not in terms of absolute difference, but in terms of what’s perceptible to users. And that will only improve over time, as phones, tablets, and desktop computers all amaze our future selves by how handily they run circles around what our today selves even imagined possible.

If I leave you with nothing else, please just humour me about all this.

How Much Programming Language is Enough?. Graham Lee on the “size” of a programming language:

What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences, that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

I wouldn’t argue that about programming languages. Aside from languages which are purposefully limited in scope or in target (Logo and Hopscotch come to mind), I think most programming languages aren’t tremendously different in terms of their syntax or capability.

Compare Scheme with Java. Although Java does have more syntax than Scheme, it’s not really that much more in the grand scheme (sorry) of things. Where languages really do differ in power is in libraries, but then that’s really just a story of “Who’s done the work, me or the vendor?”

I don’t think languages need the kitchen sink, but I do think languages need to be able to build the kitchen sink.

Intuitive is the Enemy of Good. Graham Lee:

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience.

I think Graham is very right here (and it’s not just limited to mobile, either, but it’s definitely most obvious there). It’s so hard to make software that actually, truly, does something useful for a person, to help them understand and make decisions, because we have to focus so much on the lowest common denominator.

We see those awesome numbers of how many iOS devices there are in the wild, and we think “If I could just get N% of those users, I’d have a ton of money” and it’s true, but it means you’ve also got to appeal to a huge population of users. You have to go broad instead of deep. The amount of time someone spends in your software is often measured in seconds. How do you do much of anything meaningful in seconds? 140 characters? Six seconds of video?

And with an audience so broad and an application so generic, you can’t expect to charge very much for it. This is why anything beyond $1.99 is unthinkable in the App Store (most users won’t pay anything at all).

Smalltalk Didn’t Fail

Whenever anybody brings up the subject of creating software in a graphical environment, Smalltalk inevitably comes up. Since I’ve been publishing lots lately about such environments, I’ve been hearing lots of talk about Smalltalk, too. The most common response I hear is something along the lines of

You want a graphical environment? Well kid, we tried that with Smalltalk years ago and it failed, so it’s hopeless.

Outside of some select financial markets, Smalltalk is not used much for professional software development, but Smalltalk didn’t fail. In order to fail, a technology must attempt, but remain unsuccessful at achieving its goals. But when developers grunt that “Smalltalk failed”, they are saying, unaware of it themselves, that Smalltalk has failed for their goals. The goal of Smalltalk, as we’ll see, wasn’t really so much a goal as it was a vision, one that is still successfully being followed to this day.

There is a failure

But the failure is that of the software development community at large to do their research and to understand technologies through the lens of their creators, instead of looking at history in today’s terms.

The common gripes against Smalltalk are that it’s an image-based environment, which doesn’t lend itself well to source control management, and that these images are large and cumbersome for distribution and sharing. It’s true, a large image-based memory dump doesn’t work too well with Git, and on the whole Smalltalk doesn’t fit too well with our professional software development norms.

But it should be plain to anyone who’s done even the slightest amount of research on the topic that Smalltalk was never intended to be a professional software development environment. For a brief history, see Alan Kay’s Early History of Smalltalk, John Markoff’s What the Dormouse Said or Michael Hiltzik’s Dealers of Lightning. Although Xerox may have attempted to push Smalltalk as a professional tool after the release of Smalltalk 80, it’s obvious from the literature this was not the original intent of Smalltalk’s creators in its formative years at PARC.

A Personal Computer for Children of All Ages

The genesis of Smalltalk, its raison d’être, was to be the software running on Alan Kay’s Dynabook vision. In short, Alan saw the computer as a personal, dynamic medium for learning, creativity, and expression, and created the concept of the Dynabook to pursue that vision. He knew the ways the printing press and literacy revolutionized the modern world, and imagined what a book would be like if it had all the brilliance of a dynamic computer behind it.

Smalltalk was not then designed as a way for professional software development to take place, but instead as a general purpose environment in which “children of all ages” could think and reason in a dynamic environment. Smalltalk never started out with an explicit goal, but was instead a vehicle to what’s next on the way to the Dynabook vision.

In this regard, Smalltalk was quite successful. As a general professional development environment, Smalltalk is not the best, but as a language designed to create a dynamic medium of expression, Smalltalk was and is highly successful. See Alan give a demo of a modern, Smalltalk-based system for an idea of how simple it is for a child to work with powerful and meaningful tools.

The Vehicle

Smalltalk and its descendants are far from perfect. They represent but one lineage of tools created with the Dynabook vision in mind, but they of course do not have to be the final say in expressive, dynamic media for a new generation. But whether you’re chasing that vision or just trying to understand Smalltalk as a development tool, it’s crucial to not look at it as how it fails at your job, but how your job isn’t what it’s trying to achieve in the first place.

From Here You Can See Everything. I’ve linked to this before, but I think it’s worth reading periodically. Particularly,

Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them. In the course of writing this I’ve watched all 13 episodes of House of Cards and who knows how many more West Wing episodes, and I’ve spent any number of blurred hours falling down internet rabbit holes. All instead of reading, or writing, or working, or spending real time with people I love.

Much ado about the iPad. Nice survey and commentary on the new ‘iPad is doomed’ meme. Riccardo Mori:

Again, here’s this urge to find the iPad some specific purpose, some thing it can do better than this device category or that other device category otherwise it’ll fade away.

If we want the iPad to be better at something, the answer is in the software, of course. Software truly optimised for the iPad. Software truly specialised for the iPad.

What I wonder is: where are all the apps in which you’d spend at least one whole hour doing the same thing (other than “consuming,” as you would in Safari, Netflix, or Twitter; I mean something real)? Obviously I think Hopscotch is a candidate, but what else?

We need apps daring enough to be measured beyond “minutes of engagement” and we need developers daring enough to build them.

A Sheer Torment of Links. Riccardo Mori:

In other words, people don’t seem to stay or at least willing to explore more when they arrive on a blog they probably never saw before. I’m surprised, and not because I’m so vain to think I’m that charismatic as to retain 90% of new visitors, but by the general lack of curiosity. I can understand that not all the people who followed MacStories’ link to my site had to like it or agree with me. What I don’t understand is the behaviour of who liked what they saw. Why not return, why not decide to keep an eye on my site?

I’ve thought a lot about this sort of thing basically the whole time I’ve been running Speed Of Light (just over four years now, FYI) and although I don’t consider myself to be any kind of great writer, I’ve always been a little surprised by the lack of traffic the site gets, even after some articles got linked from major publications.

On any given day, a typical reader of my site will probably see a ton of links from Twitter, Facebook, an RSS feed, or a link site they read. Even if the content on any of those websites is amazing, a reader probably isn’t going to spend too much time hanging around, because there are forty or fifty other links for them to see today.

This is why nobody sticks around. This is why readers bounce. It’s why we have shorter, more superficial articles instead of deep essays. It’s why we have tl;dr. The torrent of links becomes a torment of links because we won’t and can’t stay on one thing for too long.

And it also poses moral issues for writers (or for me, at least). I know there’s a deluge, and every single thing I publish on this website contributes to that. But the catch is the way to get more avid readers these days is to publish copiously. The more you publish, the more people read, the more links you get, the more people become subscribers. What are we to do?

I don’t have a huge number of readers, but those who do read the site I respect tremendously. I’d rather have fewer, but more thoughtful readers who really care about what I write, than more readers who visit because I post frequent-but-lower-quality articles. I’d rather write long form, well-researched, thoughtful essays than entertaining posts. I know most won’t sit through more than three paragraphs but those aren’t the readers I’m after, anyway.

The Shallows by Nicholas Carr. Related to the previous post, I recently read Nicholas Carr’s “The Shallows” and I can’t recommend it enough. From the publisher:

As we enjoy the Net’s bounties, are we sacrificing our ability to read and think deeply?

Now, Carr expands his argument into the most compelling exploration of the Internet’s intellectual and cultural consequences yet published. As he describes how human thought has been shaped through the centuries by “tools of the mind”—from the alphabet to maps, to the printing press, the clock, and the computer—Carr interweaves a fascinating account of recent discoveries in neuroscience by such pioneers as Michael Merzenich and Eric Kandel. Our brains, the historical and scientific evidence reveals, change in response to our experiences. The technologies we use to find, store, and share information can literally reroute our neural pathways.

It’s a well-researched book about how computers — and the internet in general — physically alter our brains and cause us to think differently. In this case, we think more shallowly because we’re continuously zipping around links and websites, and we can’t focus as well as we could when we were a more literate society. Deep reading goes out the browser window, as it were.

You should read it.

“Amusing Ourselves to Death”. While I’m telling you what to do, I think everyone should read Neil Postman’s “Amusing Ourselves to Death.” From Wikipedia:

The essential premise of the book, which Postman extends to the rest of his argument(s), is that “form excludes the content,” that is, a particular medium can only sustain a particular level of ideas. Thus Rational argument, integral to print typography, is militated against by the medium of television for the aforesaid reason. Owing to this shortcoming, politics and religion are diluted, and “news of the day” becomes a packaged commodity. Television de-emphasises the quality of information in favour of satisfying the far-reaching needs of entertainment, by which information is encumbered and to which it is subordinate.

America was formed as, and made possible by, a literate society, a society of readers, when presidential debates took five hours. But television (and other electronic media) erode many of the modes in which we (i.e., the world, not just America) think.

If you work in media (and software developers, software is very much a medium) then you have a responsibility to read and understand this book. Your local library should have a copy, too.

Legible Mathematics. Absolutely stunning and thought-provoking essay on a new interface for math as a method of experimenting with new interfaces for programming.

I Tell You What I’d Do: Two Apps at the Same Time

Today’s iOS-related rumour is about iOS 8 having some kind of split screen functionality. From 9to5Mac:

In addition to allowing for two iPad apps to be used at the same time, the feature is designed to allow for apps to more easily interact, according to the sources. For example, a user may be able to drag content, such as text, video, or images, from one app to another. Apple is said to be developing capabilities for developers to be able to design their apps to interact with each other. This functionality may mean that Apple is finally ready to enable “XPC” support in iOS (or improved inter-app communication), which means that developers could design App Store apps that could share content.

Although I have no sources of my own, I wouldn’t bet against Mark Gurman for having good intel on this. It seems likely that this is real, but I think it might end up being a misunderstanding of problems users are actually trying to solve.

It’s pretty well-known that most users have struggled with the “windowed-applications” interface paradigm, where there can be multiple, overlapping windows on screen at once. Many users get lost in the windows and end up devoting more time to managing the windows than to actually getting work done. So iOS is mostly a pretty great step forward in this regard. Having two “windows” of apps open at once would be a step back to the difficulties found on the desktop. And even though the windows on iOS 8 might not overlap, there are still two different apps to multitask between — something else pretty well known to cause strife in people.

Having multiple windows seems like a kind of “faster horse,” a way to just repurpose the “old way” of doing something instead of trying to actually solve the problem users are having. In this case, the whole impetus for showing multiple windows or “dragging and dropping between apps” is to share information between applications.

Users writing an email might want details from a website, map, or restaurant app. Users trying to IM somebody might want to share something they’ve just seen or made in another app. Writers might want to refer to links or page contents from a Wikipedia app. These sorts of problems can all be solved by juxtaposing app windows side by side, but to me it seems like a cop-out.

A better solution would be to share the data between applications, through some kind of system service. Instead of drag and drop, or copy and paste (both are essentially the same thing), objects are implicitly shared across the system. If you are looking at a restaurant in one app, then switch to a maps app, that map should show the restaurant (along with any other object you’ve recently seen with a location). When you head to your calendar, it should show potential mealtimes (with the contact you’re emailing with, of course).

This sort of “interaction” requires thinking about the problem a little differently, but it’s advantageous because it ends up skipping most of the interaction users would otherwise have to do in the first place. Users don’t need to drag and drop, they don’t need to copy and paste, and they don’t need to manage windows. They don’t need to be overloaded by seeing too many apps on screen at once.

I’ve previously talked about this, and my work on this problem is largely inspired by a section in Magic Ink. It’s sort of a “take an object; leave an object” kind of system, where applications can send objects to the system service, and others can request objects from the system (and of course, applications can provide feedback as to which objects should be shown and which should be ignored).
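To make the “take an object; leave an object” idea concrete, here is a toy sketch of such a system service (in Python for brevity; every name here — `SharedObject`, `ObjectPool`, `leave`, `take` — is hypothetical, not an API from iOS or from Magic Ink):

```python
from dataclasses import dataclass, field
import time

@dataclass
class SharedObject:
    kind: str        # e.g. "restaurant", "contact", "location"
    payload: dict    # whatever attributes the depositing app knows about
    timestamp: float = field(default_factory=time.time)

class ObjectPool:
    """Hypothetical system service: apps leave objects, other apps take them."""

    def __init__(self):
        self._objects = []

    def leave(self, obj):
        """An app deposits an object the user has recently seen or made."""
        self._objects.append(obj)

    def take(self, kind, limit=5):
        """Another app requests the most recent objects of a given kind."""
        matches = [o for o in self._objects if o.kind == kind]
        return sorted(matches, key=lambda o: o.timestamp, reverse=True)[:limit]

# A restaurant app leaves an object; a maps app later asks what to show.
pool = ObjectPool()
pool.leave(SharedObject("restaurant",
                        {"name": "Taco Place", "lat": 43.65, "lon": -79.38}))
for obj in pool.take("restaurant"):
    print(obj.payload["name"])  # the maps app can now show this pin
```

The interesting design work is all in what this sketch elides: how apps rank which objects are relevant right now, and how the system ages objects out of the pool, which is exactly the feedback mechanism described above.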

I don’t expect Apple to do this in iOS 8, but I do hope somebody will consider it.

MIT Invents A Shapeshifting Display You Can Reach Through And Touch. First just let me say the work done by the group is fantastic and a great step towards a dynamic physical medium, much like how graphical displays are dynamic visual media. This is an important problem.

What I find troubling, however, is the notion that this sort of technology should be used to mimic the wrong things:

But what really interests the Tangible Media Group is the transformable UIs of the future. As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts.

Buttons and knobs! Have we learned nothing from our time with dynamic visuals? Graphical buttons and other “controls” on a computer screen already act like some kind of steampunk interface. We’ve got buttons and sliders and knobs and levers, most of which are not appropriate for computer tasks but which we use because we’re stuck in a mechanical mindset. If we’re lucky enough to be blessed with a dynamic physical interface, why should we similarly squander it?

Hands are super sensitive and super expressive (read John Napier’s book about them and think about how you hold it as you read). They can form powerful or gentle grips and they can switch between them almost instantly. They can manipulate and sense pressure, texture, and temperature. They can write novels and play symphonies and make tacos. Why would we want our dynamic physical medium to focus on anything less?

(via @daveverwer)

Seymour Papert: Situating Constructionism. Seymour Papert and Idit Harel in an introduction to their book, discussing ways of approaching learning:

But the story I really want to tell is not about test scores. It is not even about the math/Logo class. (3) It is about the art room I used to pass on the way. For a while, I dropped in periodically to watch students working on soap sculptures and mused about ways in which this was not like a math class. In the math class students are generally given little problems which they solve or don’t solve pretty well on the fly. In this particular art class they were all carving soap, but what each students carved came from wherever fancy is bred and the project was not done and dropped but continued for many weeks. It allowed time to think, to dream, to gaze, to get a new idea and try it and drop it or persist, time to talk, to see other people’s work and their reaction to yours–not unlike mathematics as it is for the mathematician, but quite unlike math as it is in junior high school. I remember craving some of the students’ work and learning that their art teacher and their families had first choice. I was struck by an incongruous image of the teacher in a regular math class pining to own the products of his students’ work! An ambition was born: I want junior high school math class to be like that. I didn’t know exactly what “that” meant but I knew I wanted it. I didn’t even know what to call the idea. For a long time it existed in my head as “soap-sculpture math.”

It’s beginning to seem to me like constructionist learning is great, but also that we need many different approaches to learning, like atoms oscillating, so that the harmonics of learning can better emerge.

They were using this high-tech and actively computational material as an expressive medium; the content came from their imaginations as freely as what the others expressed in soap. But where a knife was used to shape the soap, mathematics was used here to shape the behavior of the snake and physics to figure out its structure. Fantasy and science and math were coming together, uneasily still, but pointing a way. LEGO/Logo is limited as a build-an-animal-kit; versions under development in our lab will have little computers to put inside the snake and perhaps linear activators which will be more like muscles in their mode of action. Some members of our group have other ideas: Rather than using a tiny computer, using even tinier logic gates and motors with gears may be fine. Well, we have to explore these routes (4). But what is important is the vision being pursued and the questions being asked. Which approach best melds science and fantasy? Which favors dreams and visions and sets off trains of good scientific and mathematical ideas?

I think the biggest problem still faced by Logo is (like Smalltalk) its success. Logo is highly revered as an educational language, so much so that its methods are generally accepted as “good enough” and not readily challenged. The unfortunate truth is twofold:

  1. In order for Logo to be successful as a general creative medium for learning, there are many other factors which must also be worked on, such as teacher/school acceptance (this is of course no easy feat and no fault of Logo’s designers, it’s just an unfortunate truth. Papert discusses it somewhat in The Children’s Machine).

  2. Logo just hasn’t taken the world by storm. Obviously these things take time, but the implicit assumption seems to be “Logo is done, now the world needs to catch up to it.”

“Good enough” tends to lead us down paths prematurely, when instead we should be pushing further. That’s why most programming languages look like Smalltalk and C. Those languages worked marvelously for their original goals, but they’re far from being the pinnacle of possibility. If Logo were invented today, what could it look like today (*future-referencing an ironic project of mine*)?

Computer-aided instruction may seem to refer to method rather than content, but what counts as a change in method depends on what one sees as the essential features of the existing methods. From my perspective, CAI amplifies the rote and authoritarian character that many critics see as manifestations of what is most characteristic of–and most wrong with–traditional school. Computer literacy and CAI, or indeed the use of word-processors, could conceivably set up waves that will change school, but in themselves they constitute very local innovations–fairly described as placing computers in a possibly improved but essentially unchanged school. The presence of computers begins to go beyond first impact when it alters the nature of the learning process; for example, if it shifts the balance between transfer of knowledge to students (whether via book, teacher, or tutorial program is essentially irrelevant) and the production of knowledge by students. It will have really gone beyond it if computers play a part in mediating a change in the criteria that govern what kinds of knowledge are valued in education.

This is perhaps the most damning and troublesome facet of computers for their use in pushing humans forward. Computers are so good at simulating old media that it’s essentially all we do with them. Doing old media is easy, as we don’t have to learn any new skills. We’ve evolved to go with the familiar, but I think it’s time we dip our toes into something a little beyond.

What Do We Save When We Save the Internet?. Ian Bogost in a blistering look at today’s internet and Net Neutrality:

“We believe that a free and open Internet can bring about a better world,” write the authors of the Declaration of Internet Freedom. Its supporters rise up to decry the supposedly imminent demise of this Internet thanks to FCC policies poised to damage Network Neutrality, the notion of common carriage applied to data networks.

Its zealots paint digital Guernicas, lamenting any change in communication policy as atrocity. “If we all want to protect universal access to the communications networks that we all depend on to connect with ideas, information, and each other,” write the admins of Reddit in a blog post patriotically entitled Only YOU Can Protect Net Neutrality, “then we must stand up for our rights to connect and communicate.”


What is the Internet? As Evgeny Morozov argues, it may not exist except as a rhetorical gimmick. But if it does, it’s as much a thing we do as it is an infrastructure through which to do it. And that thing we do that is the Internet, it’s pockmarked with mortal regret:

You boot a browser and it loads the Yahoo! homepage because that’s what it’s done for fifteen years. You blink at it and type a search term into the Google search field in the chrome of the browser window instead.

Sitting in front of the television, you grasp your iPhone tight in your hand instead of your knitting or your whiskey or your rosary or your lover.

The shame of expecting an immediate reply to a text or a Gchat message after just having failed to provide one. The narcissism of urgency.

The pull-snap of a timeline update on a smartphone screen, the spin of its rotary gauge. The feeling of relief at the surge of new data—in Gmail, in Twitter, in Instagram, it doesn’t matter.

The gentle settling of disappointment that follows, like a down duvet sighing into the freshly made bed. This moment is just like the last, and the next.

You close Facebook and then open a new browser tab, in which you immediately navigate back to Facebook without thinking.

The web is a brittle place, corrupted by advertising and tracking (see also “Is the Web Really Free?”). I won’t spoil the ending but I’m at least willing to agree with his conclusion.

Remote Chance

I recently stumbled across an interesting 2004 project called Glancing, whose basic principle is replicating the subtle social cues of personal, IRL office relationships (eye contact, nodding, etc.) for people using computers in different physical locations.

The basic gist (as I understand it) is that people, when in person, don’t merely start talking to one another but first have an initial conversation through body language. We glance at each other and try to make eye contact before actually speaking, hoping for the glance to be reciprocated. In this way, we can determine whether or not we should even proceed with the conversation at all, or if maybe the other person is occupied. Matt Webb’s Glancing exists as a way to bridge that gap with a computer (read through his slide notes; they’re detailed but not long). You can look up at your screen and see who else has recently “looked up” too.
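
As a quick sketch of that mechanic (all names here are invented; this is not Webb’s actual implementation), a glance can be modeled as nothing more than a timestamp per person, with “who’s around” being whoever has glanced recently:

```swift
import Foundation

// Toy model of Glancing: a glance is a timestamp, and anyone whose
// last glance falls inside a window counts as "looking up" too.
struct GlanceBoard {
    private var lastGlance: [String: Date] = [:]

    mutating func glance(from person: String, at time: Date = Date()) {
        lastGlance[person] = time
    }

    // Everyone who has glanced within the window, sorted for stable display.
    func recentGlancers(within window: TimeInterval,
                        now: Date = Date()) -> [String] {
        return lastGlance
            .filter { now.timeIntervalSince($0.value) <= window }
            .map { $0.key }
            .sorted()
    }
}

var board = GlanceBoard()
let now = Date()
board.glance(from: "matt", at: now.addingTimeInterval(-30))    // 30 seconds ago
board.glance(from: "jason", at: now.addingTimeInterval(-600))  // 10 minutes ago
let around = board.recentGlancers(within: 120, now: now)       // ["matt"]
```

The interesting part isn’t the code, of course; it’s that the signal is ambient and low-stakes. Glancing costs nothing, and an unreciprocated glance fails silently, just like in an office.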

Remote work is a tricky problem to solve. We do it occasionally at Hopscotch when working from home, and we’re mostly successful at it, but as a friend of mine recently put it, it’s harder to have a sense of play when experimenting with new features. There is an element of collaboration, of jamming together (in the musical sense) that’s lacking when working over a computer.

Maybe there isn’t really a solution to it and we’re all looking at it the wrong way. Telecommuting has been a topic of research and experimentation for decades and it’s never fully taken off. It’s possible, as Neil Postman suggests in Technopoly, that ours is a generation that can’t think of a solution to a problem outside of technology, and that maybe this kind of collaboration isn’t compatible with technology. I see that as a possibility.

But I also think there’s a remote chance we’re trying to graft on collaboration as an after-the-fact feature to non-collaborative work environments. I work in Xcode and our designer works in Sketch, and when we collaborate, neither of our respective apps are really much involved. Both apps are designed with a single user in mind. Contrast this with Doug Engelbart and SRI’s NLS system, built from the ground up with multi-person collaboration in mind, and you’ll start to see what I mean.

NLS’s collaboration features seem, in today’s world at least, like screen sharing with multiple cursors. But it extends beyond that, because the whole system was designed to support multiple people using it from the get-go.

How do we define play, how do we jam remotely with software?

String Constants. Brent Simmons on string constants:

I know that using a string constant is the accepted best practice. And yet it still bugs me a little bit, since it’s an extra level of indirection when I’m writing and reading code. It’s harder to validate correctness when I have to look up each value — it’s easier when I can see with my eyes that the strings are correct.[…]

But I’m exceptional at spotting typos. And I almost never have cause to change the value of a key. (And if I did, it’s not like it’s difficult. Project search works.)

I’m not going to judge Brent here on his solution, but it seems to me this problem would be much better solved by using string constants, provided Xcode actually showed you the damn values of those constants in auto-complete.

When developers resort to crappy hacks like this, it’s a sign of a deficiency in the tools. If you find yourself doing something like this, you shouldn’t resort to tricks, you should say “I know a computer can do this for me” and you should demand it. (rdar://17668209)
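
A minimal sketch of what I mean (invented names, Swift for brevity): define the key once, and misspelling it anywhere becomes a compile error instead of something your eyes have to catch.

```swift
// The key is defined exactly once, near the data model.
let accountNameKey = "accountName"

func makeRecord(name: String) -> [String: String] {
    // A literal "acountName" typo here would compile fine and fail at
    // runtime; misspelling the constant won't compile at all.
    return [accountNameKey: name]
}

let record = makeRecord(name: "Brent")
```

The indirection Brent objects to is real, but it’s exactly the indirection the tools should be papering over for us.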

Step Away from the Kool-Aid. Ben Howell on startups and compensation:

Don’t, under any circumstances work for less than market rate in order to build other peoples fortunes. Simply don’t do it. Cool product that excites you so in-turn you’ll work for a fraction of the market rate? Call that crap out for what it is. A CEO of a company asking you to help build his fortune while at the same time returning you squat.

Doomed to Repeat It. A mostly great article by Paul Ford about the recycling of ideas in our industry:

Did you ever notice, wrote my friend Finn Smith via chat, how often we (meaning programmers) reinvent the same applications? We came up with a quick list: Email, Todo lists, blogging tools, and others. Do you mind if I write this up for Medium?

I think the overall premise is good but I do have thoughts on some of it. First, he claims:

[…] Doug Engelbart’s NLS system of 1968, which pioneered a ton of things—collaborative software, hypertext, the mouse—but deep, deep down was a to-do list manager.

This is a gross misinterpretation of NLS and of Engelbart’s motivations. While the project did birth some “productivity” tools, it was much more a system for collaboration and about Augmenting Human Intellect. A computer scientist not understanding Engelbart’s work would be like a physicist not understanding Isaac Newton’s work.

On to-do lists, I think he gets closest to the real heart of what’s going on (emphasis mine):

The implications of a to-do list are very similar to the implications of software development. A task can be broken into a sequence, each of those items can be executed in turn. Maybe programmers love to do to-do lists because to-do lists are like programs.

I think this is exactly it. This is “the medium is the message” 101. Of course programmers are going to like sequential lists of instructions, it’s what they work in all day long! (Exercise for the reader: what part of a programmer’s job is like email?)

His conclusion is OK but I think misses the bigger cause:

Very little feels as good as organizing all of your latent tasks into a hierarchical lists with checkboxes associated. Doing the work, responding to the emails—these all suck. But organizing it is sweet anticipatory pleasure.

Working is hard, but thinking about working is pretty fun. The result is the software industry.

The real problem is in those very last words: software industry. That’s what we do; we’re an industry, but we pretend to be, or at least expect, a field [of computer science]. Like Alan Kay says, computing isn’t really a field but a pop culture.

It’s not that email is broken or productivity tools all suck; it’s just that culture changes. People make email clients or to-do list apps in the same way that theater companies perform Shakespeare plays in modern dress. “Email” is our Hamlet. “To-do apps” are our Tempest.

Culture changes but mostly grows with the past, whereas pop culture takes almost nothing from the past and instead demands the present. Hamlet survives in our culture by being repeatedly performed, but more importantly it survives in our culture because it is studied as a work of art. The word “literacy” doesn’t just mean reading and writing, it also implies having a body of work included and studied by a culture.

Email and to-do apps aren’t cultural in this sense because they aren’t treated by anyone as “great works,” they aren’t revered or built-upon. They are regurgitated from one generation to the next without actually being studied and improved upon. Is it any wonder mail apps of yesterday look so much like those of today?

Wil Shipley on Automated Testing. Classic Shipley:

But, seriously, unit testing is teh suck. System testing is teh suck. Structured testing in general is, let’s sing it together, TEH SUCK.

“What?!!” you may ask, incredulously, even though you’re reading this on an LCD screen and it can’t possibly respond to you? “How can I possibly ship a bug-free program and thus make enough money to feed my tribe if I don’t test my shiznit?”

The answer is, you can’t. You should test. Test and test and test. But I’ve NEVER, EVER seen a structured test program that a) didn’t take like 100 man-hours of setup time, b) didn’t suck down a ton of engineering resources, and c) actually found any particularly relevant bugs. Unit testing is a great way to pay a bunch of engineers to be bored out of their minds and find not much of anything. [I know – one of my first jobs was writing unit test code for Lighthouse Design, for the now-president of Sun Microsystems.] You’d be MUCH, MUCH better off hiring beta testers (or, better yet, offering bug bounties to the general public).

Let me be blunt: YOU NEED TO TEST YOUR DAMN PROGRAM. Run it. Use it. Try odd things. Whack keys. Add too many items. Paste in a 2MB text file. FIND OUT HOW IT FAILS. I’M YELLING BECAUSE THIS SHIT IS IMPORTANT.

Most programmers don’t know how to test their own stuff, and so when they approach testing they approach it using their programming minds: “Oh, if I just write a program to do the testing for me, it’ll save me tons of time and effort.”

There’s only three major flaws with this: (1) Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you’re writing ALL THE CODE you wrote for the original program plus some more test stuff, (2) YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it’s going to be used by people, and (3) It’s actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.

Sing it.

Whither Cortex

In my NSNorth 2013 talk, An Educated Guess (which was recorded on video but has not yet been published), I gave a demonstration of a programming tool called Cortex and made the rookie mistake of saying it would be on Github “soon.” Seeing as July 2014 is well past “soon,” I thought I’d explain a bit about Cortex and what has happened since the first demonstration.

Cortex is a framework and environment for application programs to autonomously exchange objects without having to know about each other. This means a Calendar application can ask the Cortex system for objects with a Calendar type and receive a collection of objects with dates. These Calendar objects come from other Cortex-aware applications on the system, like a Movies app, or a restaurant webpage, or a meeting scheduler. The Calendar application knows absolutely nothing about these other applications; all it knows is it wants Calendar objects.

Cortex can be thought of a little bit like Copy and Paste. With Copy and Paste, the user explicitly copies a selected object (like a selection of text, or an image from a drawing application) and then explicitly pastes what they’ve copied into another application (like an email application). In between the copy and paste is the Pasteboard. Cortex is a lot like the Pasteboard, except the user doesn’t explicitly copy or paste anything. Applications themselves either submit objects or request objects.

This, of course, results in quite a lot of objects in the system, so Cortex also has a method of weighing the objects by relevance so nobody is overwhelmed. Applications can also provide their own ways of massaging the objects in the system to create new objects (for example, a “Romantic Date” plugin might lookup objects of the Movie Showing type and the Restaurant type, and return objects of the Romantic Date type to inquiring applications).
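
Since the code itself isn’t public, here’s a toy sketch of the core idea (every type and method name below is invented for illustration; this is not the real Cortex API):

```swift
// A shared broker through which applications exchange typed objects
// without knowing about one another.
struct CortexObject {
    let type: String              // e.g. "Calendar", "Movie Showing"
    let payload: [String: String] // the object's data
}

final class CortexBroker {
    private var objects: [CortexObject] = []

    // A Movies app or meeting scheduler submits what it knows about.
    func submit(_ object: CortexObject) {
        objects.append(object)
    }

    // A Calendar app asks for every "Calendar" object on the system,
    // never learning which application produced each one.
    func request(type: String) -> [CortexObject] {
        return objects.filter { $0.type == type }
    }
}

let broker = CortexBroker()
broker.submit(CortexObject(type: "Calendar",
                           payload: ["title": "Movie at 7pm"]))
broker.submit(CortexObject(type: "Restaurant",
                           payload: ["name": "Chez Panisse"]))
let events = broker.request(type: "Calendar")
```

In this sketch, a “Romantic Date” plugin would just be a function from Movie Showing and Restaurant objects to new Romantic Date objects, registered with the broker, and relevance weighing would order results before handing them to an inquiring application.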

If this sounds familiar, it’s because it was largely inspired by part of a Bret Victor paper with lots of other bits mixed in from my research for the NSNorth talk (especially Bush’s Memex and Engelbart’s NLS).

This is the sort of system I’ve alluded to before on my website (for example, in Two Apps at the Same Time and The Programming Environment is the Programming Language, among others).

Although the system I demonstrated at NSNorth was largely a technical demo, it was nonetheless pretty fully featured and to my delight, was exceptionally well-received by those in the audience. For the rest of the conference, I was approached by many excited developers eager to jump in and get their feet wet. Even those who were skeptical were at least willing to acknowledge, despite its shortcomings, the basic premise of applications sharing data autonomously is a worthwhile endeavour.

And so here I am over a year later with Cortex still locked away in a private repository. I wish I could say I’ve been working on it all this time and it’s ten times more amazing than what I’d originally demoed but that’s just not true. Cortex is largely unchanged and untouched since its original showing last year.

On the plane ride home from NSNorth, I wrote out a to-do list of what needed to be done before I’d feel comfortable releasing Cortex into the wild:

  1. Writing a plugin architecture. The current plan is to have the plugins be normal Cocoa plugins which will be run by an XPC process. That way if they crash they won’t bring down the main part of the system. This will mean the generation of objects is done asynchronously, so care will have to be taken here.

  2. A story for debugging Cortex plugins. It’s going to be really tricky to debug these things, and if it’s too hard then people just aren’t going to develop them. So it has to be easy to visualize what’s going on and easy to modify them. This might mean not using straight compiled bundles but instead using something more dynamic. I have to evaluate what that would mean for people distributing their own plugins, if this means they’d always have to be released in source form.

  3. How are Cortex plugins installed? The current thought is to allow for an install message to be sent over the normal Cortex protocol (currently HTTP) and either send the plugin that way (from a hosting application) or cause Cortex itself to then download and install the plugin from the web.

    How would it handle uninstalls? How would it handle malicious plugins? It seems like the user is going to have to grant permission for these things.

  4. Relatedly, should there be a permissions system for which apps can get/submit which objects for the system? Maybe we want just “read and/or write” permissions per application.

The most important issue then, and today, is #2. How are you going to make a Cortex component (something that can create or massage objects) without losing your mind? Applications are hard to make, but they’re even harder to make when we can’t see our data. Since Cortex revolves around data, in order to make anything useful with it, programmers need to be able to see that data. Programmers are smart, but we’re also great at coping with things, with juuuust squeaking by with the smallest amount of functionality. A programmer will build-run-kill-change-repeat an application a thousand times before stopping and taking the time to write a tool to help visualize.

I do not want to promote this kind of development style with Cortex, and until I can find a good solution (or am convinced otherwise) I don’t think Cortex would do anything but languish in public. If this sounds like an interesting problem to you, please do get in touch.

More Inspiration from Magic Ink and Cortex. Because I’ll never stop linking to Magic Ink:

The future will be context-sensitive. The future will not be interactive.

Are we preparing for this future? I look around, and see a generation of bright, inventive designers wasting their lives shoehorning obsolete interaction models onto crippled, impotent platforms. I see a generation of engineers wasting their lives mastering the carelessly-designed nuances of these dead-end platforms, and carelessly adding more. I see a generation of users wasting their lives pointing, clicking, dragging, typing, as gigahertz processors spin idly and gigabyte memories remember nothing. I see machines, machines, machines.

Apps are Websites

The Apple developer community is atwitter this week about independent developers and whether or not they can earn a good living working independently on the Mac and/or iOS platforms. It’s a great discussion about an unfortunately bleak topic. It’s sad to hear so many great developers, working on so many great products, are doing so poorly from it. And it seems like a lot of it is mostly out of their control (if I thought I knew a better way, I’d be doing it!). David Smith summarizes most of the discussion (with an awesome list of links):

It has never been easy to make a living (whatever that might mean to you) in the App Store. When the Store was young it may have been somewhat more straightforward to try something and see if it would hit. But it was never “easy”. Most of my failed apps were launched in the first 3 years of the Store. As the Store has matured it has also become a much more efficient marketplace (in the economics sense of market). The little tips and tricks that I used to be able to use to gain an ‘unfair’ advantage now are few and far between.

The basic gist seems to be “it’s nearly impossible to make a living off iOS apps and it’s possible but still pretty hard to do off OS X.” Most of us I think would tend to agree you can charge more for OS X software than you can for iOS because OS X apps are usually “bigger” or more fleshed out, but I think that’s only half the story.

The real reason why it’s so hard to sell iOS apps is that iOS apps are really just websites. Implementation details aside, 95 per cent of people think of iOS apps the same way they think about websites. Websites that most people are exposed to are mostly promotional, ad-laden and most importantly, free. Most people do not pay for websites. A website is just something you visit and use, but it isn’t a piece of software, and this is the exact same way they think of and treat iOS apps. That’s why indie developers are having such a hard time making money.

(Just so we’re clear, I’ve been making iOS apps for the whole duration of the App Store and I know damn well iOS apps are not “websites.” I’m well aware they are contained binaries that may-or-may-not use the internet or web services. I’m talking purely about perceptions here.)

For a simple test, ask any of your non-developer friends what the difference between an “app” and an “application” or “program” is and I’d be willing to bet they think of them as distinct concepts. To most people, “apps” are only on your phone or tablet, and programs are bigger and on your computer. “Apps” seem to be a wholly different category of software from programs like Word or Photoshop, and the idea that Mac and iOS apps are basically the same on the inside doesn’t really occur to people (nor does it need to, really). People “know” apps aren’t the same thing as programs.

Apps aren’t really “used” so much as they are “checked” (how often do people “check Twitter” vs “use Twitter”?) which is usually a brief “visit” measured in seconds (of, ugh, “engagement”). Most apps are used briefly and fleetingly, just like most websites. iOS, then, isn’t so much an operating system but a browser and the App Store its crappy search engine. Your app is one of limitless other apps just like your website is one of limitless other websites too. The ones people have heard of are promoted and advertised, or the ones in their own niches.

I don’t know how to make money in the App Store, but if I had to I’d try to learn from financially successful websites. I’d charge a subscription and I’d provide value. I’d make an app that did something other than have a “feed” or a “stream” or “shared moments.” I’d make an app that helps people create or understand. I’d try new things.

I couldn’t charge $50 for an “app” because apps are perceived as not having that kind of value, which I have to agree with (I know firsthand how much work goes into making an app, but that doesn’t make the app valuable), so maybe we need to create a new category of software on iOS, one that breaks out of the “app” shell (and maybe breaks out of the moniker, too). I don’t know what that entails, but I’m pretty sure that’s what we need.

The Colour and the Shape: Custom Fonts in System UI. Dave Wiskus:

When the user is in your app, you own the screen.

…Except for the status bar — that’s Helvetica Neue. And share sheets. And Alerts. And in action sheets. Oh, and in the swipe-the-cell UI in iOS 8. In fact any stock UI with text baked in is pretty much going to use Helvetica Neue in red, black, and blue. Hope you like it.

Maybe this is about consistency of experience. Perhaps Apple thinks that people with bad taste will use an unreadable custom font in a UIAlert and confuse users.

I agree with Dave that the lack of total control is vexing, but I think that’s because with these system features, alert views and the status bar, Apple wants us to treat them more or less like hardware. They’re immutable; they “come from the OS” like they’re appearing on Official iOS Letterhead paper.

This is why I think Apple doesn’t want us customizing these aspects of iOS. They want to keep the “official” bits as untampered with as possible.

Idle Creativity. Andy Matuschak (in 2009):

When work piles up, my brain doesn’t have any idle cycles. It jumps directly from one task to another, so there’s no background processing. No creativity! And it feels like all the color and life has been sucked out of the world.

I don’t mind being stressed or doing lots of work or losing sleep, but I’ve been noticing that I’m a boring person when it happens!

JUMP Math: Multiplying Potential (PDF). John Mighton:

Research in cognitive science suggests that, while it is important to teach to the strengths of the brain (by allowing students to explore and discover concepts on their own), it is also important to take account of the weaknesses of the brain. Our brains are easily overwhelmed by too much new information, we have limited working memories, we need practice to consolidate skills and concepts, and we learn bigger concepts by first mastering smaller component concepts and skills.

Teachers are often criticized for low test scores and failing schools, but I believe that they are not primarily to blame for these problems. For decades teachers have been required to use textbooks and teaching materials that have not been evaluated in rigorous studies. As well, they have been encouraged to follow many practices that cognitive scientists have now shown are counterproductive. For example, teachers will often select textbooks that are dense with illustrations or concrete materials that have appealing features because they think these materials will make math more relevant or interesting to students. But psychologists such as Jennifer Kaminski have shown that the extraneous information and details in these teaching tools can actually impede learning.

(via @mayli)

Transliterature, a Humanist Design. Ted Nelson:

You have been taught to use Microsoft Word and the World Wide Web as if they were some sort of reality dictated by the universe, immutable “technology” requiring submission and obedience.

But technology, here as elsewhere, masks an ocean of possibilities frozen into a few systems of convention.

Inside the software, it’s all completely arbitrary. Such “technologies” as Email, Microsoft Windows and the World Wide Web were designed by people who thought those things were exactly what we needed. So-called “ICTs"– “Information and Communication Technologies,” like these– did not drop from the skies or the brow of Zeus. Pay attention to the man behind the curtain! Today’s electronic documents were explicitly designed according to technical traditions and tekkie mindset. People, not computers, are forcing hierarchy on us, and perhaps other properties you may not want.

Things could be very different.

Off and On

I started working on a side project in January 2014 and, like many of my side projects over the years, after an initial few months of vigorous work, progress over the last little while has been mostly off and on.

The typical list of explanations applies: work gets in the way (work has been a perpetual crunch mode for months now), the project has reached a big enough size that it’s hard to make changes (I’m on an unfamiliar platform), and I’m stuck at a particularly difficult problem (I saved the best for last!).

Since the summer has been more or less fruitless while working on this project, I’m taking a different approach going forward, one I’ve used to some success in the past. It comes down to three main things:

  1. Focus the work to one hour per day, usually in the morning. This causes me to get at least something done once every day, even if it’s just small or infrastructure work. I’ve found limiting myself to a small amount of time (two hours works well too) also forces me not to procrastinate or get distracted while I’m working. The hour of side project becomes precious and not something to waste.

  2. Stop working when you’re in the middle of something so you have somewhere to ramp up from next time you start (I’m pretty sure this one is cribbed directly from Ernest Hemingway).

  3. Keep a diary for your work. I do this with most of my projects by just updating a text file every day after I’m finished working with what my thoughts were for the day. I usually write about what I worked on and what I plan on working on for the next day. This complements step 2 because it lets me see where I left off and what I was planning on doing. It also helps bring any subconscious thoughts about the project into the front of my brain. I’ll usually spend the rest of the day thinking about it, and I’ll be eager to get started again the next day (which helps fuel step 1, because I have lots of ideas and want to stay focused on them — it forces me to work better to get them done).

That, and I’ve set a release date for myself, which should hopefully keep me focused, too.

Here goes.

The Wealth of Applications. Graham Lee:

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

full-stack developer

Oh, this argument isn’t going the way I want. I was kindof hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

On market demand:

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, and they don’t know what we’ll give them and whether it’ll meet their demand, and they don’t know even if it does whether it’ll be better or not. This comic strip demonstrates this situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

I think you can answer this question if you frame most modern software as “entertainment” (or at least, Apps are Websites). It’s certainly not the case that all software is entertainment, but perhaps for the vast majority of people software as they know it is much closer to movies and television than it is to references or mental tools. The only difference is, software has perhaps completed the ultimate wet dream of the entertainment market in that Pop Software doesn’t even really have personalities like music or TV do — the personality is solely that of the brand.

Jef Raskin on “Intuitive Interfaces”. Jef Raskin:

My subject was an intelligent, computer-literate, university-trained teacher visiting from Finland who had not seen a mouse or any advertising or literature about it. With the program running, I pointed to the mouse, said it was “a mouse”, and that one used it to operate the program. Her first act was to lift the mouse and move it about in the air. She discovered the ball on the bottom, held the mouse upside down, and proceeded to turn the ball. However, in this position the ball is not riding on the position pick-offs and it does nothing. After shaking it, and making a number of other attempts at finding a way to use it, she gave up and asked me how it worked. She had never seen anything where you moved the whole object rather than some part of it (like the joysticks she had previously used with computers): it was not intuitive. She also did not intuit that the large raised area on top was a button.

But once I pointed out that the cursor moved when the mouse was moved on the desk’s surface and that the raised area on top was a pressable button, she could immediately use the mouse without another word. The directional mapping of the mouse was “intuitive” because in this regard it operated just like joysticks (to say nothing of pencils) with which she was familiar.

From this and other observations, and a reluctance to accept paranormal claims without repeatable demonstrations thereof, it is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”


The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Nobody knew how to use an iPhone before they saw someone else do it. There’s nothing wrong with more powerful software that requires a user to learn something.

Documentation. Dr. Drang:

There seems to be belief among software developers nowadays that providing instructions indicates a failure of design. It isn’t. Providing instructions is a recognition that your users have different backgrounds and different ways of thinking. A feature that’s immediately obvious to User A may be puzzling to User B, and not because User B is an idiot.

You may not believe this, but when the Macintosh first came out everything about the user interface had to be explained.

Agreed. Of course you have to have a properly labeled interface, but that doesn’t mean you can’t have more powerful features explained in documentation. The idea that everything should be “intuitive” is highly toxic to creating powerful software.

Humans Need Not Apply. Here’s a light topic for the weekend:

This video combines two thoughts to reach an alarming conclusion: “Technology gets better, cheaper, and faster at a rate biology can’t match” + “Economics always wins” = “Automation is inevitable.”

This pairs well with the book I’m currently reading, Nick Bostrom’s Superintelligence and there’s an interesting discussion on reddit, too.

It’s important to remember that even if this speculation is true and humans in the future are largely unemployable, there are other things for a human to do than just work.


The bicycle is a surprisingly versatile metaphor and has been on my mind lately. Here are all the uses I could think of for the bicycle as a metaphor when talking about computing.

Perhaps the most famous use is by Steve Jobs, who apparently wanted to rename the Macintosh to “Bicycle”. Steve explains why in this video:

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

On the other end of the metaphor spectrum, we have Doug Engelbart, who believed powerful tools required powerful training for users to realize their full potential, but that the world is more satisfied with dumbed down tools:

[H]ow do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain. It’s still the orientation of automating what you used to do instead of moving to a whole new domain in which you are obviously going to learn quite a few new skills.

And again from Belinda Barnet’s Memory Machines:

[Engelbart]: ‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options.’

This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult; there is resistance to learning new techniques, especially if they require changes to the human system. But the extra mobility we could gain from particular technical objects and techniques makes it worthwhile.

Finally, perhaps my two favourite analogies come from Alan Kay. Much like Engelbart, Alan uses the bicycle as a metaphor for learning:

I think that if somebody invented a bicycle now, they couldn’t get anybody to buy it because it would take more than five minutes to learn, and that is really pathetic.

But Alan has a more positive metaphor for the bicycle (3:50), which gives me some hope:

The great thing about a bike is that it doesn’t wither your physical attributes. It takes everything you’ve got, and it amplifies that! Whereas an automobile puts you in a position where you have to decide to exercise. We’re bad at that because nature never required us to have to decide to exercise. […]

So the idea was to try to make an amplifier, not a prosthetic. Put a prosthetic on a healthy limb and it withers.


Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist of not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people Javascript will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?

It’s a Coup. Michael Tsai:

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

The Future Programming Manifesto. Jonathan Edwards:

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.


I’ve had this thought stuck in my head for a few months about Beliefs I thought might be useful to share. The thought goes something like this:

A belief about something is scaffolding we should use until we’ve learned more truths about that something.

First, I should point out I don’t think this statement is necessarily entirely true (though it could be), but I do think it’s a useful starting point for a discussion. Second, I also don’t think this view on belief is widely practiced, but I do think it would make for more productive use of beliefs themselves.

We humans tend to be a very belief-based bunch. There are the obvious beliefs like religion and other similar deifications (“What would our forefathers think?”) but we hold strong beliefs all the time without even realizing it.

The public education systems in North America (as I experienced firsthand in Canada and as I’ve read about in America) are based on students believing and internalizing a finite set of “truths” (this is known as a curriculum) and taking precisely those beliefs as granted.

Science presents perhaps the best evidence we’re largely a belief-based species as science exists to seek truths our beliefs are not adapted to explaining. Before the invention of science, we relied on our beliefs to make sense of the world as best we could, but beliefs painted a blurry, monochromatic picture at best. Science is hard because it has to be hard—its job is to adapt parts of the universe which we can’t intuit into something we can place in our concept of reality—but it does a much superior job at explaining reality than our beliefs do.

A friend of mine recently told me, “I have beliefs about the world just like everybody else…I just don’t trust them, is all.” I think that’s a productive way to think about beliefs. It would probably be impossible to rid the world of belief, but I think a better approach is to acknowledge and understand belief as a useful, temporary tool. We should teach people to think about belief as a useful means to an end, as a support system, until more is learned about something. Most importantly, we should teach that beliefs should have a shelf-life, and not be permanently trusted.


After playing with Swift in my spare time for most of the Summer and after now using Swift full time at Hopscotch for about a month now, I thought I’d share some of my thoughts on the language.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift. I would suggest switching to it full time for all new iOS work (I wouldn’t recommend going back and re-writing your old Objective C code, but maybe replace bits and pieces of it as you see fit).

(If you’re a beginner to iOS development, see my thoughts on Swift for Beginners)

Idiomatic Swift

One reason I hear for developers wanting to hold off is “Swift is so new there aren’t really accepted idioms or best practices yet, so I’m going to wait a year or two for those to emerge.” I think that’s a fair argument, but I’d argue it’s better for you to jump in and invent them now instead of waiting for somebody else to do it.

I’m pretty sure when I look back on my Swift code a year from now I’ll cringe from embarrassment, but I’d rather be figuring it all out now, helping to establish what Good Swift looks like, than just seeing what gets handed down. The conventions around the language are malleable right now because nothing has been established as Good Swift yet. It’s going to be a lot harder to influence Good Swift a year or two from now.

And the sooner you become productive in Swift, the sooner you’ll find areas where it can be improved. Like the young Swift conventions, Swift itself is a young language—the earlier in its life you file Radars and suggest improvements to the language, the more likely those improvements will be made. Three years from now Swift The Language is going to be a lot less likely to change compared to today. Your early radars today will have enormous effects on Swift in the future.

Swift Learning Curve

Another reason I hear for not wanting to learn Swift today is not wanting to take a major productivity hit while learning the language. In my experience, if you’re an experienced iOS developer you’ll be up to speed with Swift in a week or two, and then you’ll get all the benefits of Swift (even just not having header files, or not having to import files all over the place, makes programming in Swift so much nicer than Objective C that you might not want to go back).

In that week or two when you’re a little slower at programming than you are with Objective C, you’ll still be pretty productive anyway. You certainly won’t become an expert in Swift right away (because nobody except maybe Chris Lattner is one yet anyway!), but you’ll be writing arguably cleaner code, and you might even have some fun doing it.

I’ve been keeping a list of tips I’ve picked up while learning Swift so far, like how the compatibility with Objective C works, and how to do certain things (like weak capture lists for closures). If you have any suggestions of your own, feel free to send me a Pull Request.
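As a quick taste of one of those tips, here’s a minimal sketch of a weak capture list, written in the Swift 1.0 syntax of the time (the `ImageDownloader` type and its members are invented here for illustration, not a real API):

```swift
import Foundation

class ImageDownloader {
    // Hypothetical example type; names are made up for illustration.
    var didFinish = false
    var completionHandler: (() -> Void)?

    func startDownload() {
        // Without [weak self], this closure would strongly capture self,
        // and since self also owns completionHandler, we'd get a retain
        // cycle: self -> completionHandler -> self.
        completionHandler = { [weak self] in
            // Inside the closure, self is now an Optional.
            if let strongSelf = self {
                strongSelf.didFinish = true
            }
        }
    }
}

let downloader = ImageDownloader()
downloader.startDownload()
downloader.completionHandler?()   // simulate the download finishing
println(downloader.didFinish)     // true
```

The `if let strongSelf = self` dance is how you safely get a strong reference back for the duration of the closure body, in case the owning object was deallocated before the closure ran.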

Grab Bag of Various Caveats

  • I don’t fully understand initializers in Swift yet, but I kind of hate them. I get the theory behind them, that everything must be strictly initialized, but in practice this super sucks. It solves a problem I don’t think anybody really had.

  • Compile times for large projects suck. They’re really slow, because (I think) any change in any Swift file causes all the Swift files to be recompiled on build. My hunch is that breaking your project up into smaller Modules (aka Frameworks) should relieve the slow build times. I haven’t tried this yet.

  • The Swift debugger feels pretty non-functional most of the time. I’m glad we have Playgrounds to test out algorithms and such, but unfortunately I’ve had to mainly resort to pooping out println()s of debug values.

  • What the hell is with the name println()? Would it have killed them to actually spell out the word printLine()? Do the language designers know about autocomplete?

  • The “implicitly unwrapped optional” operator (!) should really be called either the “subversion operator” or the “crash operator.” The major drumbeat we’ve heard about Swift is that it’s not supposed to let you do unsafe things, hence (among other things) we have Optionals. By implicitly unwrapping an optional, we’re telling the compiler “I know better than you right now, so I’m just going to go ahead and subvert the rules and pretend this thing isn’t nil, because I know it’s not.” When you do this, you’re either going to be correct, in which case Swift was wrong to think the value might be nil; or you’re going to be incorrect and crash your application.
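The difference between the safe and the subversive path can be sketched concretely, again in the Swift 1.0 syntax of the time (the variable names are invented for illustration):

```swift
var username: String? = nil   // no value yet

// Safe: optional binding makes the nil case explicit.
if let name = username {
    println("Hello, \(name)")
} else {
    println("No name set")    // this branch runs, since username is nil
}

// The "crash operator": force-unwrapping asserts the value is not nil.
// Because username is nil right now, uncommenting the next line would
// crash at runtime ("unexpectedly found nil while unwrapping an Optional").
// let length = countElements(username!)

username = "Ada"
// Now the force-unwrap succeeds, but only because we happen to know
// it's non-nil; the compiler can no longer protect us.
println(countElements(username!))
```

Every `!` in a codebase is a spot where the programmer has opted back out of the safety the type system was supposed to provide.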

Objective Next

Earlier this year, before Swift was announced, I published an essay, Objective Next which discussed replacing Objective C, both in terms of the language itself and, more importantly, what we should thirst for in a successor:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

In short, a replacement for Objective C that just offers a slimmed down syntax isn’t a real victory at all. It’s simply a new-old thing. It’s a new way to accomplish the exact same kind of software. In a followup essay, I wrote:

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) to thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider new kinds of things it might enable us to do. […]

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

This, unfortunately, is exactly what we got with Swift. Swift is a better way to create the exact same kind of software we’ve been making with Objective C. It may crash a little less, but it’s still going to work exactly the same way. And in fact, because Swift is far more static than Objective C, we might even be a little bit more limited in terms of what we can do. For example, as quoted in a recent link:

The quote in this post’s title [“It’s a Coup”], from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

I still think Swift is a great language and you should use it, but I do find it lamentably not forward-thinking. The intentional lack of a garbage collector really sealed the deal for me. Swift isn’t a new language; it’s C++++. I am glad to get to program in it, and I think the more people using it today, the better it will be tomorrow.

Hopscotch and Mental Models

On May 8, 2014, after many long months of work, we finally shipped Hopscotch 2.0, which was a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible for kids who may have otherwise struggled. Early on, I pushed for a rethinking of the mental model we wanted to present to our programmers so they could better grasp the concepts. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.

What follows is an examination of mental models, and the models used in various versions of Hopscotch.

Mental models

The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.

A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.

The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).
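The Turtle idea is simple enough to sketch in a few lines. Here’s a hedged sketch of a Logo-style Turtle in Swift (the `Turtle` type and its methods are invented for illustration; this is not how Logo itself is implemented), drawing a circle exactly the way a child would describe it:

```swift
import Foundation

// A minimal Logo-style Turtle, sketched for illustration.
struct Turtle {
    var x = 0.0, y = 0.0
    var heading = 0.0   // degrees, 0 pointing along the x axis

    // Step forward in whatever direction the turtle is facing.
    mutating func forward(distance: Double) {
        let radians = heading * M_PI / 180.0
        x += distance * cos(radians)
        y += distance * sin(radians)
    }

    // Rotate in place.
    mutating func turn(degrees: Double) {
        heading += degrees
    }
}

// "Take a step, turn a bit, over and over until you make a whole circle."
var turtle = Turtle()
for _ in 1...360 {
    turtle.forward(1.0)
    turtle.turn(1.0)
}
// After 360 one-unit steps and 360 one-degree turns, the turtle has
// traced a circle and ends up back where it started (within float error).
println("\(turtle.x), \(turtle.y)")
```

The point of the model is that the child never reasons about trigonometry or coordinates; they reason about their own body, and the Turtle translates.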

There are varying degrees of success in a program’s mental model, usually correlating to the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, where a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.

You may know your application’s mental model intimately because you created it, but most people starting out will not be so fortunate. The easiest way for people to understand your application’s mental model is to give them smaller leaps to make—for example, most iPhone apps are far more alike than they are different—so they don’t have to tread too far into the unknown.

One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in terms of a zooming interface, like a Powers of Ten but for information (or “thought vectors in concept space”, to quote Engelbart) (see Je