Speed of Light
More Inspiration from Magic Ink and Cortex  

Because I’ll never stop linking to Magic Ink:

The future will be context-sensitive. The future will not be interactive.

Are we preparing for this future? I look around, and see a generation of bright, inventive designers wasting their lives shoehorning obsolete interaction models onto crippled, impotent platforms. I see a generation of engineers wasting their lives mastering the carelessly-designed nuances of these dead-end platforms, and carelessly adding more. I see a generation of users wasting their lives pointing, clicking, dragging, typing, as gigahertz processors spin idly and gigabyte memories remember nothing. I see machines, machines, machines.

Whither Cortex

In my NSNorth 2013 talk, An Educated Guess (which was recorded on video but has yet to be published), I gave a demonstration of a programming tool called Cortex and made the rookie mistake of saying it would be on GitHub “soon.” Seeing as July 2014 is well past “soon,” I thought I’d explain a bit about Cortex and what has happened since the first demonstration.

Cortex is a framework and environment for application programs to autonomously exchange objects without having to know about each other. This means a Calendar application can ask the Cortex system for objects with a Calendar type and receive a collection of objects with dates. These Calendar objects come from other Cortex-aware applications on the system, like a Movies app, or a restaurant webpage, or a meeting scheduler. The Calendar application knows absolutely nothing about these other applications; all it knows is that it wants Calendar objects.

Cortex can be thought of a little bit like Copy and Paste. With Copy and Paste, the user explicitly copies a selected object (like a selection of text, or an image from a drawing application) and then explicitly pastes what they’ve copied into another application (like an email application). In between the copy and paste is the Pasteboard. Cortex is a lot like the Pasteboard, except the user doesn’t explicitly copy or paste anything. Applications themselves either submit objects or request objects.

This, of course, results in quite a lot of objects in the system, so Cortex also has a method of weighting the objects by relevance so nobody is overwhelmed. Applications can also provide their own ways of massaging the objects in the system to create new objects (for example, a “Romantic Date” plugin might look up objects of the Movie Showing type and the Restaurant type, and return objects of the Romantic Date type to inquiring applications).
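
To make that concrete, here’s roughly the shape an exchange might take from an application’s point of view. Cortex is still locked away in a private repository, so every name below (CTXSession, CTXObject, the method names) is a made-up stand-in rather than the real API:

    // Purely hypothetical sketch: Cortex isn't public, so these classes
    // and methods are invented stand-ins for whatever the real API is.
    #import <Foundation/Foundation.h>

    static void MoviesAppSubmitsAShowing(void) {
        // A Movies app leaves an object for anyone who wants it.
        CTXObject *showing = [CTXObject objectWithType:@"MovieShowing"];
        [showing setValue:@"The Conversation" forProperty:@"title"];
        [showing setValue:[NSDate dateWithTimeIntervalSinceNow:3 * 60 * 60]
              forProperty:@"startDate"];
        [[CTXSession sharedSession] submitObject:showing];
    }

    static void CalendarAppAsksForDates(void) {
        // A Calendar app asks for anything date-like, with no idea
        // which applications produced the objects it gets back.
        [[CTXSession sharedSession] requestObjectsOfType:@"Calendar"
                                              completion:^(NSArray *objects) {
            for (CTXObject *object in objects) {
                NSLog(@"%@ at %@ (relevance %.2f)",
                      [object valueForProperty:@"title"],
                      [object valueForProperty:@"startDate"],
                      [object relevance]);
            }
        }];
    }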

If this sounds familiar, it’s because it was largely inspired by part of a Bret Victor paper with lots of other bits mixed in from my research for the NSNorth talk (especially Bush’s Memex and Engelbart’s NLS).

This is the sort of system I’ve alluded to before on my website (for example, in Two Apps at the Same Time and The Programming Environment is the Programming Language, among others).

Although the system I demonstrated at NSNorth was largely a technical demo, it was nonetheless pretty fully featured and, to my delight, was exceptionally well received by those in the audience. For the rest of the conference, I was approached by many excited developers eager to jump in and get their feet wet. Even those who were skeptical were at least willing to acknowledge that, despite its shortcomings, the basic premise of applications sharing data autonomously is a worthwhile endeavour.

And so here I am over a year later with Cortex still locked away in a private repository. I wish I could say I’ve been working on it all this time and it’s ten times more amazing than what I’d originally demoed but that’s just not true. Cortex is largely unchanged and untouched since its original showing last year.

On the plane ride home from NSNorth, I wrote out a to-do list of what needed to be done before I’d feel comfortable releasing Cortex into the wild:

  1. Writing a plugin architecture. The current plan is to have the plugins be normal Cocoa plugins which will be run by an XPC process (see the sketch after this list). That way if they crash they won’t bring down the main part of the system. This will mean the generation of objects is done asynchronously, so care will have to be taken here.

  2. A story for debugging Cortex plugins. It’s going to be really tricky to debug these things, and if it’s too hard then people just aren’t going to develop them. So it has to be easy to visualize what’s going on and easy to modify them. This might mean not using straight compiled bundles but instead using something more dynamic. I have to evaluate what that would mean for people distributing their own plugins, and whether it means they’d always have to be released in source form.

  3. How are Cortex plugins installed? The current thought is to allow for an install message to be sent over the normal Cortex protocol (currently HTTP) and either send the plugin that way (from a hosting application) or have Cortex itself download and install the plugin from the web.

    How would it handle uninstalls? How would it handle malicious plugins? It seems like the user is going to have to grant permission for these things.

  4. Relatedly, should there be a permissions system for which apps can get/submit which objects from the system? Maybe we just want “read and/or write” permissions per application.
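
For what it’s worth, item 1 maps pretty directly onto NSXPCConnection. Here’s a rough sketch of what the host side could look like; the CortexPluginHosting protocol and the service name are made up, since none of this is public:

    #import <Foundation/Foundation.h>

    // Hypothetical plugin interface; the real one isn't public.
    @protocol CortexPluginHosting
    - (void)generateObjectsOfType:(NSString *)type
                        withReply:(void (^)(NSArray *objects))reply;
    @end

    static void CortexRunPlugin(void) {
        // Run the plugin in its own XPC service so a crash only takes
        // down that process, not the rest of Cortex.
        NSXPCConnection *connection = [[NSXPCConnection alloc]
            initWithServiceName:@"com.example.cortex.plugin-host"];
        connection.remoteObjectInterface =
            [NSXPCInterface interfaceWithProtocol:@protocol(CortexPluginHosting)];
        connection.interruptionHandler = ^{
            NSLog(@"Plugin process died; Cortex keeps running.");
        };
        [connection resume];

        // Object generation is asynchronous: the reply block arrives later,
        // so callers have to be written with that in mind.
        id<CortexPluginHosting> plugin =
            [connection remoteObjectProxyWithErrorHandler:^(NSError *error) {
                NSLog(@"XPC error: %@", error);
            }];
        [plugin generateObjectsOfType:@"RomanticDate"
                            withReply:^(NSArray *objects) {
            NSLog(@"Plugin produced %lu objects", (unsigned long)objects.count);
        }];
    }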

The most important issue then, and today, is #2. How are you going to make a Cortex component (something that can create or massage objects) without losing your mind? Applications are hard to make, but they’re even harder to make when we can’t see our data. Since Cortex revolves around data, in order to make anything useful with it, programmers need to be able to see that data. Programmers are smart, but we’re also great at coping with things, with juuuust squeaking by with the smallest amount of functionality. A programmer will build-run-kill-change-repeat an application a thousand times before stopping and taking the time to write a tool to help visualize it.

I do not want to promote this kind of development style with Cortex, and until I can find a good solution (or be convinced otherwise) I don’t think Cortex would do anything but languish in public. If this sounds like an interesting problem to you, please do get in touch.

Wil Shipley on Automated Testing  

Classic Shipley:

But, seriously, unit testing is teh suck. System testing is teh suck. Structured testing in general is, let’s sing it together, TEH SUCK.

“What?!!” you may ask, incredulously, even though you’re reading this on an LCD screen and it can’t possibly respond to you? “How can I possibly ship a bug-free program and thus make enough money to feed my tribe if I don’t test my shiznit?”

The answer is, you can’t. You should test. Test and test and test. But I’ve NEVER, EVER seen a structured test program that a) didn’t take like 100 man-hours of setup time, b) didn’t suck down a ton of engineering resources, and c) actually found any particularly relevant bugs. Unit testing is a great way to pay a bunch of engineers to be bored out of their minds and find not much of anything. [I know – one of my first jobs was writing unit test code for Lighthouse Design, for the now-president of Sun Microsystems.] You’d be MUCH, MUCH better off hiring beta testers (or, better yet, offering bug bounties to the general public).

Let me be blunt: YOU NEED TO TEST YOUR DAMN PROGRAM. Run it. Use it. Try odd things. Whack keys. Add too many items. Paste in a 2MB text file. FIND OUT HOW IT FAILS. I’M YELLING BECAUSE THIS SHIT IS IMPORTANT.

Most programmers don’t know how to test their own stuff, and so when they approach testing they approach it using their programming minds: “Oh, if I just write a program to do the testing for me, it’ll save me tons of time and effort.”

There’s only three major flaws with this: (1) Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you’re writing ALL THE CODE you wrote for the original program plus some more test stuff, (2) YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it’s going to be used by people, and (3) It’s actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.

Sing it.

Doomed to Repeat It  

A mostly great article by Paul Ford about the recycling of ideas in our industry:

Did you ever notice, wrote my friend Finn Smith via chat, how often we (meaning programmers) reinvent the same applications? We came up with a quick list: Email, Todo lists, blogging tools, and others. Do you mind if I write this up for Medium?

I think the overall premise is good but I do have thoughts on some of it. First, he claims:

[…] Doug Engelbart’s NLS system of 1968, which pioneered a ton of things—collaborative software, hypertext, the mouse—but deep, deep down was a to-do list manager.

This is a gross misinterpretation of NLS and of Engelbart’s motivations. While the project did birth some “productivity” tools, it was much more a system for collaboration and about Augmenting Human Intellect. A computer scientist not understanding Engelbart’s work would be like a physicist not understanding Isaac Newton’s work.

On to-do lists, I think he gets closest to the real heart of what’s going on (emphasis mine):

The implications of a to-do list are very similar to the implications of software development. A task can be broken into a sequence, each of those items can be executed in turn. Maybe programmers love to do to-do lists because to-do lists are like programs.

I think this is exactly it. This is “the medium is the message” 101. Of course programmers are going to like sequential lists of instructions, it’s what they work in all day long! (Exercise for the reader: what part of a programmer’s job is like email?)

His conclusion is OK but I think misses the bigger cause:

Very little feels as good as organizing all of your latent tasks into a hierarchical lists with checkboxes associated. Doing the work, responding to the emails—these all suck. But organizing it is sweet anticipatory pleasure.

Working is hard, but thinking about working is pretty fun. The result is the software industry.

The real problem is in those very last words, software industry. That’s what we do: we’re an industry, but we pretend to be, or at least expect, a field [of computer science]. Like Alan Kay says, computing isn’t really a field but a pop culture.

It’s not that email is broken or productivity tools all suck; it’s just that culture changes. People make email clients or to-do list apps in the same way that theater companies perform Shakespeare plays in modern dress. “Email” is our Hamlet. “To-do apps” are our Tempest.

Culture changes but mostly grows with the past, whereas pop culture takes almost nothing from the past and instead demands the present. Hamlet survives in our culture by being repeatedly performed, but more importantly it survives in our culture because it is studied as a work of art. The word “literacy” doesn’t just mean reading and writing, it also implies having a body of work included and studied by a culture.

Email and to-do apps aren’t cultural in this sense because they aren’t treated by anyone as “great works,” they aren’t revered or built-upon. They are regurgitated from one generation to the next without actually being studied and improved upon. Is it any wonder mail apps of yesterday look so much like those of today?

Step Away from the Kool-Aid  

Ben Howell on startups and compensation:

Don’t, under any circumstances work for less than market rate in order to build other peoples fortunes. Simply don’t do it. Cool product that excites you so in-turn you’ll work for a fraction of the market rate? Call that crap out for what it is. A CEO of a company asking you to help build his fortune while at the same time returning you squat.

String Constants  

Brent Simmons on string constants:

I know that using a string constant is the accepted best practice. And yet it still bugs me a little bit, since it’s an extra level of indirection when I’m writing and reading code. It’s harder to validate correctness when I have to look up each value — it’s easier when I can see with my eyes that the strings are correct.[…]

But I’m exceptional at spotting typos. And I almost never have cause to change the value of a key. (And if I did, it’s not like it’s difficult. Project search works.)

I’m not going to judge Brent here on his solution, but it seems to me like this problem would be much better solved by using string constants, provided Xcode actually showed you the damn values of those constants in auto-complete.

When developers resort to crappy hacks like this, it’s a sign of a deficiency in the tools. If you find yourself doing something like this, you shouldn’t resort to tricks, you should say “I know a computer can do this for me” and you should demand it. (rdar://17668209)
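
For reference, the “accepted best practice” Brent is talking about looks something like this (the constant name is made up for illustration). The gripe is that Xcode’s auto-complete shows you the constant’s name but not its underlying value:

    // SLNotifications.h — declare the constant once, next to its meaning.
    extern NSString * const SLAccountDidChangeNotification;

    // SLNotifications.m — define the actual value in exactly one place.
    NSString * const SLAccountDidChangeNotification = @"SLAccountDidChangeNotification";

    // Call site: misspell the constant and the compiler complains;
    // misspell a bare @"SLAcountDidChangeNotification" literal and it
    // fails silently at runtime.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(accountDidChange:)
                                                 name:SLAccountDidChangeNotification
                                               object:nil];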

Remote Chance

I recently stumbled across an interesting 2004 project called Glancing, whose basic principle is replicating the subtle social cues of personal, IRL office relationships (eye contact, nodding, etc.) for people using computers who aren’t in the same physical location.

The basic gist (as I understand it) is that people, when in person, don’t merely start talking to one another but first have an initial conversation through body language. We glance at each other and try to make eye contact before actually speaking, hoping for the glance to be reciprocated. In this way, we can determine whether or not we should even proceed with the conversation at all, or if maybe the other person is occupied. Matt Webb’s Glancing exists as a way to bridge that gap with a computer (read through his slide notes; they’re detailed but aren’t long). You can look up at your screen and see who else has recently “looked up” too.

Remote work is a tricky problem to solve. We do it occasionally at Hopscotch when working from home, and we’re mostly successful at it, but as a friend of mine recently put it, it’s harder to have a sense of play when experimenting with new features. There is an element of collaboration, of jamming together (in the musical sense) that’s lacking when working over a computer.

Maybe there isn’t really a solution to it and we’re all looking at it the wrong way. Telecommuting has been a topic of research and experimentation for decades and it’s never fully taken off. It’s possible, as Neil Postman suggests in Technopoly, that ours is a generation that can’t think of a solution to a problem outside of technology, and that maybe this kind of collaboration isn’t compatible with technology. I see that as a possibility.

But I also think there’s a remote chance we’re trying to graft on collaboration as an after-the-fact feature to non-collaborative work environments. I work in Xcode and our designer works in Sketch, and when we collaborate, neither of our respective apps is really much involved. Both apps are designed with a single user in mind. Contrast this with Doug Engelbart and SRI’s NLS system, built from the ground up with multi-person collaboration in mind, and you’ll start to see what I mean.

NLS’s collaboration features seem, in today’s world at least, like screen sharing with multiple cursors. But it extends beyond that, because the whole system was designed to support multiple people using it from the get-go.

How do we define play, how do we jam remotely with software?

What Do We Save When We Save the Internet?  

Ian Bogost in a blistering look at today’s internet and Net Neutrality:

“We believe that a free and open Internet can bring about a better world,” write the authors of the Declaration of Internet Freedom. Its supporters rise up to decry the supposedly imminent demise of this Internet thanks to FCC policies poised to damage Network Neutrality, the notion of common carriage applied to data networks.

Its zealots paint digital Guernicas, lamenting any change in communication policy as atrocity. “If we all want to protect universal access to the communications networks that we all depend on to connect with ideas, information, and each other,” write the admins of Reddit in a blog post patriotically entitled Only YOU Can Protect Net Neutrality, “then we must stand up for our rights to connect and communicate.”

[…]

What is the Internet? As Evgeny Morozov argues, it may not exist except as a rhetorical gimmick. But if it does, it’s as much a thing we do as it is an infrastructure through which to do it. And that thing we do that is the Internet, it’s pockmarked with mortal regret:

You boot a browser and it loads the Yahoo! homepage because that’s what it’s done for fifteen years. You blink at it and type a search term into the Google search field in the chrome of the browser window instead.

Sitting in front of the television, you grasp your iPhone tight in your hand instead of your knitting or your whiskey or your rosary or your lover.

The shame of expecting an immediate reply to a text or a Gchat message after just having failed to provide one. The narcissism of urgency.

The pull-snap of a timeline update on a smartphone screen, the spin of its rotary gauge. The feeling of relief at the surge of new data—in Gmail, in Twitter, in Instagram, it doesn’t matter.

The gentle settling of disappointment that follows, like a down duvet sighing into the freshly made bed. This moment is just like the last, and the next.

You close Facebook and then open a new browser tab, in which you immediately navigate back to Facebook without thinking.

The web is a brittle place, corrupted by advertising and tracking (see also “Is the Web Really Free?”). I won’t spoil the ending but I’m at least willing to agree with his conclusion.

Seymour Papert: Situating Constructionism  

Seymour Papert and Idit Harel in an introduction to their book, discussing ways of approaching learning:

But the story I really want to tell is not about test scores. It is not even about the math/Logo class. (3) It is about the art room I used to pass on the way. For a while, I dropped in periodically to watch students working on soap sculptures and mused about ways in which this was not like a math class. In the math class students are generally given little problems which they solve or don’t solve pretty well on the fly. In this particular art class they were all carving soap, but what each students carved came from wherever fancy is bred and the project was not done and dropped but continued for many weeks. It allowed time to think, to dream, to gaze, to get a new idea and try it and drop it or persist, time to talk, to see other people’s work and their reaction to yours–not unlike mathematics as it is for the mathematician, but quite unlike math as it is in junior high school. I remember craving some of the students’ work and learning that their art teacher and their families had first choice. I was struck by an incongruous image of the teacher in a regular math class pining to own the products of his students’ work! An ambition was born: I want junior high school math class to be like that. I didn’t know exactly what “that” meant but I knew I wanted it. I didn’t even know what to call the idea. For a long time it existed in my head as “soap-sculpture math.”

It’s beginning to seem to me like constructionist learning is great, but also that we need many different approaches to learning, like atoms oscillating, so that the harmonics of learning can better emerge.

They were using this high-tech and actively computational material as an expressive medium; the content came from their imaginations as freely as what the others expressed in soap. But where a knife was used to shape the soap, mathematics was used here to shape the behavior of the snake and physics to figure out its structure. Fantasy and science and math were coming together, uneasily still, but pointing a way. LEGO/Logo is limited as a build-an-animal-kit; versions under development in our lab will have little computers to put inside the snake and perhaps linear activators which will be more like muscles in their mode of action. Some members of our group have other ideas: Rather than using a tiny computer, using even tinier logic gates and motors with gears may be fine. Well, we have to explore these routes (4). But what is important is the vision being pursued and the questions being asked. Which approach best melds science and fantasy? Which favors dreams and visions and sets off trains of good scientific and mathematical ideas?

I think the biggest problem still faced by Logo is (like Smalltalk) its success. Logo is highly revered as an educational language, so much so that its methods are generally accepted as “good enough” and not readily challenged. The unfortunate truth is twofold:

  1. In order for Logo to be successful as a general creative medium for learning, there are many other factors which must also be worked on, such as teacher/school acceptance (this is of course no easy feat and no fault of Logo’s designers; it’s just an unfortunate truth. Papert discusses it somewhat in The Children’s Machine).

  2. Logo just hasn’t taken the world by storm. Obviously these things take time, but the implicit assumption seems to be “Logo is done, now the world needs to catch up to it.”

“Good enough” tends to lead us down paths prematurely, when instead we should be pushing further. That’s why most programming languages look like Smalltalk and C. Those languages worked marvelously for their original goals, but they’re far from being the pinnacle of possibility. If Logo were invented today, what could it look like (*future-referencing an ironic project of mine*)?

Computer-aided instruction may seem to refer to method rather than content, but what counts as a change in method depends on what one sees as the essential features of the existing methods. From my perspective, CAI amplifies the rote and authoritarian character that many critics see as manifestations of what is most characteristic of–and most wrong with–traditional school. Computer literacy and CAI, or indeed the use of word-processors, could conceivably set up waves that will change school, but in themselves they constitute very local innovations–fairly described as placing computers in a possibly improved but essentially unchanged school. The presence of computers begins to go beyond first impact when it alters the nature of the learning process; for example, if it shifts the balance between transfer of knowledge to students (whether via book, teacher, or tutorial program is essentially irrelevant) and the production of knowledge by students. It will have really gone beyond it if computers play a part in mediating a change in the criteria that govern what kinds of knowledge are valued in education.

This is perhaps the most damning and troublesome facet of computers for their use in pushing humans forward. Computers are so good at simulating old media that it’s essentially all we do with them. Doing old media is easy, as we don’t have to learn any new skills. We’ve evolved to go with the familiar, but I think it’s time we dip our toes into something a little beyond.

MIT Invents A Shapeshifting Display You Can Reach Through And Touch  

First just let me say the work done by the group is fantastic and a great step towards a dynamic physical medium, much like how graphical displays are dynamic visual media. This is an important problem.

What I find troubling, however, is the notion that this sort of technology should be used to mimic the wrong things:

But what really interests the Tangible Media Group is the transformable UIs of the future. As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts.

Buttons and knobs! Have we learned nothing from our time with dynamic visuals? Graphical buttons and other “controls” on a computer screen already act like some kind of steampunk interface. We’ve got buttons and sliders and knobs and levers, most of which are not appropriate for computer tasks but which we use because we’re stuck in a mechanical mindset. If we’re lucky enough to be blessed with a dynamic physical interface, why should we similarly squander it?

Hands are super sensitive and super expressive (read John Napier’s book about them and think about how you hold it as you read). They can form powerful or gentle grips and they can switch between them almost instantly. They can manipulate and sense pressure, texture, and temperature. They can write novels and play symphonies and make tacos. Why would we want our dynamic physical medium to focus on anything less?

(via @daveverwer)

I Tell You What I’d Do: Two Apps at the Same Time

Today’s iOS-related rumour is about iOS 8 having some kind of split screen functionality. From 9to5Mac:

In addition to allowing for two iPad apps to be used at the same time, the feature is designed to allow for apps to more easily interact, according to the sources. For example, a user may be able to drag content, such as text, video, or images, from one app to another. Apple is said to be developing capabilities for developers to be able to design their apps to interact with each other. This functionality may mean that Apple is finally ready to enable “XPC” support in iOS (or improved inter-app communication), which means that developers could design App Store apps that could share content.

Although I have no sources of my own, I wouldn’t bet against Mark Gurman having good intel on this. It seems likely that this is real, but I think it might end up being a misunderstanding of the problems users are actually trying to solve.

It’s pretty well known that most users have struggled with the “windowed applications” interface paradigm, where there can be multiple, overlapping windows on screen at once. Many users get lost in the windows and end up devoting more time to managing the windows than actually getting to work. So iOS is mostly a pretty great step forward in this regard. Having two “windows” of apps open at once would be a step back to the difficulties found on the desktop. And even if the windows on iOS 8 don’t overlap, there are still two different apps to multitask with — something else pretty well known to cause strife in people.

Having multiple windows seems like a kind of “faster horse,” a way to just repurpose the “old way” of doing something instead of trying to actually solve the problem users are having. In this case, the whole impetus for showing multiple windows or “dragging and dropping between apps” is to share information between applications.

Users writing an email might want details from a website, map, or restaurant app. Users trying to IM somebody might want to share something they’ve just seen or made in another app. Writers might want to refer to links or page contents from a Wikipedia app. These sorts of problems can all be solved by juxtaposing app windows side by side, but to me it seems like a cop-out.

A better solution would be to share the data between applications, through some kind of system service. Instead of drag and drop, or copy and paste (both are essentially the same thing), objects are implicitly shared across the system. If you are looking at a restaurant in one app, then switch to a maps app, that map should show the restaurant (along with any other object you’ve recently seen with a location). When you head to your calendar, it should show potential mealtimes (with the contact you’re emailing with, of course).

This sort of “interaction” requires thinking about the problem a little differently, but it’s advantageous because it ends up skipping most of the interaction users actually have to do in the first place. Users don’t need to drag and drop, they don’t need to copy and paste, and they don’t need to manage windows. They don’t need to be overloaded by seeing too many apps on screen at once.

I’ve previously talked about this, and my work on this problem is largely inspired by a section in Magic Ink. It’s sort of a “take an object; leave an object” kind of system, where applications can send objects to the system service, and others can request objects from the system (and of course, applications can provide feedback as to which objects should be shown and which should be ignored).
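
To sketch the consuming side: a maps app could ask such a (for now entirely hypothetical) system service for anything location-like the user has recently seen, and report back which objects were actually useful. Every service name here is invented:

    #import <MapKit/MapKit.h>

    // Assumed to live in a map view controller with a mapView outlet;
    // SLObjectService and SLSharedObject are hypothetical stand-ins
    // for the system service.
    - (void)refreshSharedPins {
        [[SLObjectService sharedService] requestObjectsOfType:@"Location"
                                                   completion:^(NSArray *objects) {
            for (SLSharedObject *object in objects) {
                // The service, not the user, decides what's relevant enough to show.
                MKPointAnnotation *pin = [[MKPointAnnotation alloc] init];
                pin.title = [object valueForProperty:@"name"];
                pin.coordinate = [[object valueForProperty:@"coordinate"] MKCoordinateValue];
                [self.mapView addAnnotation:pin];
            }
        }];
    }

    - (void)userDidSelectPinForObject:(SLSharedObject *)object {
        // Feedback: telling the service which objects were useful lets it
        // weight future results instead of showing every app everything.
        [[SLObjectService sharedService] markObjectAsUsed:object];
    }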

I don’t expect Apple to do this in iOS 8, but I do hope somebody will consider it.

Legible Mathematics  

Absolutely stunning and thought-provoking essay on a new interface for math as a method of experimenting with new interfaces for programming.

“Amusing Ourselves to Death”  

While I’m telling you what to do, I think everyone should read Neil Postman’s “Amusing Ourselves to Death.” From Wikipedia:

The essential premise of the book, which Postman extends to the rest of his argument(s), is that “form excludes the content,” that is, a particular medium can only sustain a particular level of ideas. Thus Rational argument, integral to print typography, is militated against by the medium of television for the aforesaid reason. Owing to this shortcoming, politics and religion are diluted, and “news of the day” becomes a packaged commodity. Television de-emphasises the quality of information in favour of satisfying the far-reaching needs of entertainment, by which information is encumbered and to which it is subordinate.

America was formed as, and made possible by, a literate society, a society of readers, when presidential debates took five hours. But television (and other electronic media) erode many of the modes in which we (i.e., the world, not just America) think.

If you work in media (and software developers: software is very much a medium) then you have a responsibility to read and understand this book. Your local library should have a copy, too.

The Shallows by Nicholas Carr  

Related to the previous post, I recently read Nicholas Carr’s “The Shallows” and I can’t recommend it enough. From the publisher:

As we enjoy the Net’s bounties, are we sacrificing our ability to read and think deeply?

Now, Carr expands his argument into the most compelling exploration of the Internet’s intellectual and cultural consequences yet published. As he describes how human thought has been shaped through the centuries by “tools of the mind”—from the alphabet to maps, to the printing press, the clock, and the computer—Carr interweaves a fascinating account of recent discoveries in neuroscience by such pioneers as Michael Merzenich and Eric Kandel. Our brains, the historical and scientific evidence reveals, change in response to our experiences. The technologies we use to find, store, and share information can literally reroute our neural pathways.

It’s a well-researched book about how computers — and the internet in general — physically alter our brains and cause us to think differently. In this case, we think more shallowly because we’re continuously zipping around links and websites, and we can’t focus as well as we could when we were a more literate society. Deep reading goes out the browser window, as it were.

You should read it.

A Sheer Torment of Links  

Riccardo Mori:

In other words, people don’t seem to stay or at least willing to explore more when they arrive on a blog they probably never saw before. I’m surprised, and not because I’m so vain to think I’m that charismatic as to retain 90% of new visitors, but by the general lack of curiosity. I can understand that not all the people who followed MacStories’ link to my site had to like it or agree with me. What I don’t understand is the behaviour of who liked what they saw. Why not return, why not decide to keep an eye on my site?

I’ve thought a lot about this sort of thing basically the whole time I’ve been running Speed Of Light (just over four years now, FYI) and although I don’t consider myself to be any kind of great writer, I’ve always been a little surprised by the lack of traffic the site gets, even after some articles have been linked from major publications.

On any given day, a typical reader of my site will probably see a ton of links from Twitter, Facebook, an RSS feed, or a link site they read. Even if the content on any of those websites is amazing, a reader probably isn’t going to spend too much time hanging around, because there are forty or fifty other links for them to see today.

This is why nobody sticks around. This is why readers bounce. It’s why we have shorter, more superficial articles instead of deep essays. It’s why we have tl;dr. The torrent of links becomes a torment of links because we won’t and can’t stay on one thing for too long.

And it also poses moral issues for writers (or for me, at least). I know there’s a deluge, and every single thing I publish on this website contributes to that. But the catch is the way to get more avid readers these days is to publish copiously. The more you publish, the more people read, the more links you get, the more people become subscribers. What are we to do?

I don’t have a huge number of readers, but those who do read the site I respect tremendously. I’d rather have fewer, but more thoughtful readers who really care about what I write, than more readers who visit because I post frequent-but-lower-quality articles. I’d rather write long form, well-researched, thoughtful essays than entertaining posts. I know most won’t sit through more than three paragraphs but those aren’t the readers I’m after, anyway.

Much ado about the iPad  

Nice survey and commentary on the new ‘iPad is doomed’ meme. Riccardo Mori:

Again, here’s this urge to find the iPad some specific purpose, some thing it can do better than this device category or that other device category otherwise it’ll fade away.

If we want the iPad to be better at something, the answer is in the software, of course. Software truly optimised for the iPad. Software truly specialised for the iPad.

What I wonder is: where are all the apps where you spend at least one whole hour doing the same thing (other than “consuming,” like you would in Safari, Netflix, or Twitter; I mean something real)? Obviously I think Hopscotch is a candidate, but what else?

We need apps daring enough to be measured beyond “minutes of engagement” and we need developers daring enough to build them.

From Here You Can See Everything  

I’ve linked to this before, but I think it’s worth reading periodically. Particularly,

Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them. In the course of writing this I’ve watched all 13 episodes of House of Cards and who knows how many more West Wing episodes, and I’ve spent any number of blurred hours falling down internet rabbit holes. All instead of reading, or writing, or working, or spending real time with people I love.

Smalltalk Didn’t Fail

Whenever anybody brings up the subject of creating software in a graphical environment, Smalltalk inevitably comes up. Since I’ve been publishing lots lately about such environments, I’ve been hearing lots of talk about Smalltalk, too. The most common response I hear is something along the lines of

You want a graphical environment? Well kid, we tried that with Smalltalk years ago and it failed, so it’s hopeless.

Outside of some select financial markets, Smalltalk is not used much for professional software development, but Smalltalk didn’t fail. In order to fail, a technology must attempt, but remain unsuccessful at achieving, its goals. But when developers grunt that “Smalltalk failed”, they are saying, unaware of it themselves, that Smalltalk has failed for their goals. The goal of Smalltalk, as we’ll see, wasn’t really so much a goal as it was a vision, one that is still successfully being followed to this day.

There is a failure

But the failure is that of the software development community at large to do their research and to understand technologies through the lens of their creators instead of trying to look at history in today’s terms.

The common gripes against Smalltalk are that it’s an image-based environment, which doesn’t play well with source control management, and that these images are large and cumbersome for distribution and sharing. It’s true, a large image-based memory dump doesn’t work too well with Git, and on the whole Smalltalk doesn’t fit too well with our professional software development norms.

But it should be plain to anyone who’s done even the slightest amount of research on the topic that Smalltalk was never intended to be a professional software development environment. For a brief history, see Alan Kay’s Early History of Smalltalk, John Markoff’s What the Dormouse Said or Michael Hiltzik’s Dealers of Lightning. Although Xerox may have attempted to push Smalltalk as a professional tool after the release of Smalltalk-80, it’s obvious from the literature this was not the original intent of Smalltalk’s creators during its formative years at PARC.

A Personal Computer for Children of All Ages

The genesis of Smalltalk, its raison d’être, was to be the software running on Alan Kay’s Dynabook vision. In short, Alan saw the computer as a personal, dynamic medium for learning, creativity, and expression, and created the concept of the Dynabook to pursue that vision. He knew the ways the printing press and literacy revolutionized the modern world, and imagined what a book would be like if it had all the brilliance of a dynamic computer behind it.

Smalltalk was not then designed as a way for professional software development to take place, but instead as a general purpose environment in which “children of all ages” could think and reason in a dynamic environment. Smalltalk never started out with an explicit goal, but was instead a vehicle to what’s next on the way to the Dynabook vision.

In this regard, Smalltalk was quite successful. As a general professional development environment, Smalltalk is not the best, but as a language designed to create a dynamic medium of expression, Smalltalk was and is highly successful. See Alan give a demo of a modern, Smalltalk-based system for an idea how simple it is for a child to work with powerful and meaningful tools.

The Vehicle

Smalltalk and its descendants are far from perfect. They represent but one lineage of tools created with the Dynabook vision in mind, but they of course do not have to be the final say in expressive, dynamic media for a new generation. But whether you’re chasing that vision or just trying to understand Smalltalk as a development tool, it’s crucial to not look at it as how it fails at your job, but how your job isn’t what it’s trying to achieve in the first place.

Intuitive is the Enemy of Good  

Graham Lee:

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience.

I think Graham is very right here (and it’s not just limited to mobile, either, but it’s definitely most obvious there). It’s so hard to make software that actually, truly, does something useful for a person, to help them understand and make decisions, because we have to focus so much on the lowest common denominator.

We see those awesome numbers of how many iOS devices there are in the wild, and we think “If I could just get N% of those users, I’d have a ton of money” and it’s true, but it means you’ve also got to appeal to a huge population of users. You have to go broad instead of deep. The amount of time someone spends in your software is often measured in seconds. How do you do much of anything meaningful in seconds? 140 characters? Six seconds of video?

And with an audience so broad and an application so generic, you can’t expect to charge very much for it. This is why anything beyond $1.99 is unthinkable in the App Store (most users won’t pay anything at all).

How Much Programming Language is Enough?  

Graham Lee on the “size” of a programming language:

What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences, that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

I wouldn’t argue that about programming languages. Aside from languages which are purposefully limited in scope or in target (Logo and Hopscotch come to mind), I think most programming languages aren’t tremendously different in terms of their syntax or capability.

Compare Scheme with Java. Although Java does have more syntax than Scheme, it’s not really that much more in the grand scheme (sorry) of things. Where languages really do differ in power is in libraries, but then that’s really just a story of “Who’s done the work, me or the vendor?”

I don’t think languages need the kitchen sink, but I do think languages need to be able to build the kitchen sink.

Assorted Followup, Explanation, And Afterthoughts Regarding ‘Objective Next’

On Monday, I published an essay exploring some thoughts about a replacement for Objective C, how to really suss out what I think would benefit software developers the most, and how we could go about implementing that. Gingerly though I pranced around certain details, and implore though I did for developers not to get caught up on them, alas many were snagged on some of the less important parts of the essay. So, I’d like to, briefly if I may, attempt to clear some of those up.

What We Make

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) into thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider new kinds of things it might enable us to do.

Word processors are a prime example of this. When the personal computer revolution began, it was aided largely by the word processor — essentially a way to automatically typeset your document. The document — the content of what you produced — was otherwise identical, but the word processor made your job of typesetting much easier.

Spreadsheets, on the other hand, were something essentially brand new that emerged from the computer. Instead of just doing an old analog task, but better (as was the case with the word processor), spreadsheets allowed users to do something they just couldn’t do otherwise without the computer.

The important lesson of the spreadsheet, the one I’m trying to get at, is that it got to the heart of what people in business wanted to do: it was a truly new, flexible, and better way to approach data, like finances, sales, and other numbers. It wasn’t just paper with the kinks worked out, it wasn’t just a faster horse, it was a real, new thing that solved their problems in better ways.

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

It’s far from being just about a pretty interface, it’s about rethinking what we’re even trying to accomplish. We’re trying to make software that’s understandable, that’s powerful, that’s useful, and that will benefit both our customers and ourselves. And while I think we might eventually get there if we keep trotting along as we’re currently doing, I think we’re also capable of leaping forward. All it takes is some imagination and maybe a pinch of willingness.

Graphical Programming

When “graphical programming” is brought up around programmers the lament is palpable. To most, graphical programming translates literally into “pretty boxes with lines connecting them,” something akin to UML, where the “graphical” part of programming is actually just a way for the graphics to represent code (but please do see GRAIL or here, a graphical diagramming tool designed in the late 1960s which still spanks the pants off most graphical coding tools today). This is not what I consider graphical programming to be. This is, at best, graphical coding, and I lament it right along with them.

When I mention “graphical programming” I mean creating a graphical program (e.g., a view with colours and text) in a graphical way, like drawing out rectangles, lines, and text as you might do in Illustrator (see this by Bret Victor (I know, you didn’t expect me to link to him, right?), which is probably my inspiration for this). When most people hear graphical programming, they think drawing abstract boxes (that probably generate code, yikes), but what I’m talking about is drawing the actual interface, as concretely as possible (and then abstracting the interface for new data).

There are loads of crappy attempts at the former, and very few attempts at all at the latter. There’s a whole world waiting to be attempted.

Interface Builder

Interface Builder is such an attempt at drawing out your actual, honest to science, interface in a graphical way, and it’s been moderately successful, but I think the tool falls tremendously short. Your interfaces unfortunately end up conceptually the same as mockups (“How do I see this with my real application data? How do I interact with it? How come I can only change certain view properties, but not others, without resorting to code?”). These deficiencies arise because IB is a graphical tool in a code-first town. Although it abides, IB is a second-class citizen so far as development tools go. Checkboxes for interface features get added essentially at Apple’s whim.

What we need is a tool where designing an interface means everything interface related must be a first-class citizen.

Compiled vs Interpreted

Oh my goodness do I really have to go there? After bleating for so many paragraphs about considering thinking beyond precisely what must be worked on right-here-and-now, so many get caught up on the Compiled-v-Interpreted part.

Just to be clear, I understand the following (and certainly, much more):

  1. Compiled applications execute faster than interpreted ones.
  2. Depending on the size of the runtime or VM, an interpreted language consumes more memory and energy than a compiled language.
  3. Some languages (like Java) are actually compiled to a kind of bytecode, which is then executed in a VM (fun fact: I used to work on the JVM at IBM as a co-op student).

All that being said, I stand by my original assertion that for the vast majority of the kinds of software most of us in the Objective C developer community build, the differences between the two styles of language in terms of performance are negligible, not in terms of absolute difference, but in terms of what’s perceptible to users. And that will only improve over time, as phones, tablets, and desktop computers all amaze our future selves by how handily they run circles around what our today selves even imagined possible.

If I leave you with nothing else, please just humour me about all this.

Objective Next

A few weeks ago, I posted my lengthy thoughts about Objective C, what’s wrong with it, and what I think will happen in the future with it:

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive [replacing it]. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. […] I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

There has been lots of talk in the weeks since I posted my article criticizing Objective C, including posts by my friend Ash Furrow, Steve Streza, Guy English, and Brent Simmons. Many of their criticisms are similar to or ring true with mine, but the suggestions for fixing the ills of Objective C almost all miss the point:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

I work on programming languages professionally at Hopscotch, which I mention not so I can brag about it but so I can explain this is a subject I care deeply about, something I work on every day. This isn’t just a cursory glance because I’ve had some grumbles with the language. This essay is my way of critically examining and exploring possibilities for future development environments we can all benefit from. That requires stepping a bit beyond what most Objective C developers seem willing to consider, but it’s important nonetheless.

Figure out what we’re trying to make

We think we know what we want from an Objective C replacement. We want Garbage Collection, we want better concurrency, we don’t want pointers. We want better networking, better databases, better syncing functionality. We want a better, real runtime. We don’t want header files or import statements.

These are all really very nice and good, but they’re actually putting the CPU before the horse. If you ask most developers why they want any of those things, they’ll likely tell you it’s because those are the rough spots of Objective C as it exists today. But they’ll say nothing of what they’re actually trying to accomplish with the language (hat tip to Guy English though for being the exception here).

This kind of thinking is what Alan Kay refers to as “Instrumental thinking”, where you only think of new inventions in terms of how they can allow you to do your same precise job in a better way. Personal computing software has fallen victim to instrumental thinking routinely since its inception. A word processor’s sole function is to help you layout your page better, but it does nothing to help your writing (orthography is a technicality).

Such is the same with the thinking that goes around with replacing Objective C. Almost all the wishlists for replacements simply ask for wrinkles to be ironed out.

If you’re wondering what such a sandpapered Objective Next might look like, I’ll point you to one I drew up in early 2013 (while I too was stuck in the instrumental thinking trap, I’ll admit).

It’s easy to get excited about the (non-existent) language if you’re an Objective C programmer, but I’m imploring the Objective C community to try and think beyond a “new old thing”, to try to actually think of something that solves the bigger picture.

When thinking about what could really replace Objective C, then, it’s crucial to clear your mind of the minutia and dirt involved in how you program today, and try to think exclusively of what you’re trying to accomplish.

For most Objective C developers, we’re trying to make high quality software that looks and feels great to use. We’re looking to bring a tremendous amount of delight and polish to our products. And hopefully, most importantly, we’re trying to build software that significantly improves the lives of people.

That’s what we want to do. That’s what we want to do better. The problem isn’t about whether or not our programming language has garbage collection, the problem is whether or not we can build higher quality software in a new environment than we could with Objective C’s code-wait-run cycle.

In the Objective C community, “high quality software” usually translates to visually beautiful and fantastically usable interfaces. We care a tremendous amount about how our applications are presented and understood by our users, and this kind of quality takes a mountain of effort to accomplish. Our software is usually developed by a team of programmers and a team of designers, working in concert to deliver on the high quality standards we’ve set for ourselves. More often than not, the programmers become the bottleneck, if only because every other part of the development team must ultimately have their work funnelled through code at some point. This causes long feedback loops in development time, and if it’s frustrating to make and compare improvements to the design, it is often forgone altogether.

This strain trickles down to the rest of the development process. If it’s difficult to experiment, then it’s difficult to imagine new possibilities for what your software could do. This, in part, reinforces our instrumental thinking, because it’s usually just too painful to try and think outside the box. We’d never be able to validate our outside box thinking even if we wanted to! And thus, this too strains our ability to build software that significantly enhances the lives of our customers and users.

With whatever Objective C replacement there may be, whether we demand it or we build it ourselves, isn’t it best to think not about how to improve Objective C but instead about how to make the interaction between programmer and designer more cohesive? Or how to shift some of the power (and responsibility) of the computer out of the hands of the programmer and into the arms of the designer?

Something as simple as changing the colour or size of text should not be the job of the programmer, not because the programmer is lazy (which is most certainly probably true anyway) but because these are issues of graphic design, of presentation, which the designer is surely better trained and more equipped to handle. Yet this sort of operation is almost agonizing from a development perspective. It’s not that making these changes is hard, but that it often requires the programmer to switch tasks, when and only when there is time, and then present changes to the designer for approval. This is one loose feedback loop and there’s no real good reason why it has to be this way. It might work out pretty well the other way.

Can you think of any companies where design is paramount?

When you’re thinking of a replacement for Objective C, remember to think of why you want it in the first place. Think about how we can make software in better ways. Think about how your designers can improve your software if they had more access to it, or how you could improve things if you could only see them.

This is not just about a more graphical environment, and it’s not just about designers playing a bigger role. It’s about trying to seek out what makes software great, and how our development tools could enable software to be better.

How do we do it?

If we’re going to build a replacement programming environment for Objective C, what’s it going to be made of? Most compiled languages can be built with LLVM quite handily these days—

STOP

We’ve absolutely got to stop and check more of our assumptions first. Why do we assume we need a compiled language? Why not an interpreted language? Objective C developers are pretty used to this concept, and most developers will assert compiled languages are faster than interpreted or virtual machine languages (“just look at how slow Android is, this is because it runs Java and Java runs in a VM,” we say). It’s true that compiled apps are almost always going to be faster than interpreted apps, but the difference isn’t substantial enough to close the door on them so firmly, so quickly. Remember, today’s iPhones are just as fast, if not faster, than a pro desktop computer from ten years ago, and those ran interpreted apps just fine. While you may be able to point at stats and show me that compiled apps are faster, in practice the differences are often negligible, especially with smart programmers doing the implementation. So let’s keep the door open on interpreted languages.

Whether compiled or interpreted, if you’re going to make a programming language then you definitely need to define a grammar, work on a parser, and—

STOP

Again, we’ve got to stop and check another assumption. Why make the assumption that our programming environment of the future must be textual? Lines of pure code, stored in pure text files, compiled or interpreted, it makes little difference. Is that the future we wish to inherit?

We presume code whenever we think programming, probably because it’s all most of us are ever exposed to. We don’t even consider the possibility that we could create programs without typing in code. But with all the abundance and diversity of software, both graphical and not, should it really seem so strange that software itself might be created in a medium other than code?

“Ah, but we’ve tried that and it sucked,” you’ll say. For every sucky coded programming language, there’s probably a sucky graphical programming language too. “We’ve tried UML and we’ve tried Smalltalk,” you’ll say, and I’ll say “Yes we did, 40 years of research and a hundred orders of magnitude ago, we tried, and the programming community at large decided it was a capital Bad Idea.” But as much as times change, computers change more. We live in an era of unprecedented computing power, with rich (for a computer) displays, ubiquitous high speed networking, and decades of research.

For some recent examples of graphical programming environments that actually work, see Bret Victor’s Stop Drawing Dead Fish and Drawing Dynamic Visualizations talks, or Toby Schachman’s (of Recursive Drawing fame) excellent talk on opening up programming to new demographics by going visual. I’m not saying any one of these tools, as is, is a replacement for Objective C, but I am saying these tools demonstrate what’s possible when we open our eyes, if only the tiniest smidge, and try to see what software development might look like beyond coded programming languages.

And of course, just because we should seriously consider non-code-centric languages doesn’t mean that everything must be graphical either. There are of course concepts we can represent linguistically which we can’t map or model graphically, so to completely eschew a linguistic interface to program creation would be just as absurd as completely eschewing a graphical interface to program creation in a coded language.

The benefits of even the linguistic parts of a graphical programming environment are plentiful. Consider the rich typographic language we forgo when we code in plain text files. We lose the benefits of type choices, of font sizes and weights, of hierarchical groupings. Even without any pictures, think how much more typographically rich a newspaper is compared to a plain text program file. In code, we’re relegated to fixed-width fonts of a single size and weight. We’re lucky if we get any semblance of context from syntax highlighting, and it’s often a battle to impel programmers to use whitespace to create ersatz typographic hierarchies in code. Without strong typography, nothing looks any more important than anything else as you gawk at a source code file. Experienced programmers can see what’s important, but they’re not seeing it with their eyes. Why should we, the advanced users of advanced computers, be working in a medium that’s less visually rich than even the first movable type printed books, five centuries old?
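For instance, about the only “typographic” tools a plain Objective C source file offers are blank lines, comment banners, and #pragma mark. A small sketch (the names are made up) of the workarounds we lean on:

    #import <Foundation/Foundation.h>

    #pragma mark - Constants
    // #pragma mark is the closest thing to a heading plain text allows; some
    // editors surface it in a jump bar, but in the file it's just more text.

    static NSString * const APIBaseURLString = @"https://api.example.com";

    #pragma mark - Entry Point

    // ========================================================================
    // A comment banner standing in for the larger, bolder type a richer
    // medium would give us for free.
    // ========================================================================
    int main(void) {
        @autoreleasepool {
            NSLog(@"Base URL: %@", APIBaseURLString);
        }
        return 0;
    }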

And that’s to say nothing of other graphical elements. Would you like to see some of my favourite colours? Here are three: #fed82f, #37deff, #fc222f. Aren’t they lovely? The computer knows how to render those colours better than we know how to read hex, so why doesn’t it do that? Why don’t we demand this of our programming environment?
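Here’s a tiny sketch of that gap: to use the first of those colours in Objective C today, we translate the hex into arithmetic by hand, and the environment never once shows us the swatch we’re describing (the helper function is hypothetical):

    #import <UIKit/UIKit.h>

    // #fed82f, spelled out by hand: fe is the red, d8 the green, 2f the blue.
    // The environment could simply show the colour; instead we read numbers.
    UIColor *LovelyYellow(void) {
        return [UIColor colorWithRed:0xfe / 255.0
                               green:0xd8 / 255.0
                                blue:0x2f / 255.0
                               alpha:1.0];
    }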

Objective: Next

If we’re ever going to get a programming environment of the future, we should make sure we get one that’s right for us and our users. We should make sure we’re looking far down the road, not at our shoes. We shouldn’t try to build a faster horse, but we should instead look where we’re really trying to go and then find the best way to get there.

We also shouldn’t rely on one company to get us there. There’s still plenty to be discovered by plenty of people. If you’d like to help me discover it, I’d love to hear from you.

Read Speed of Light in Chronological Order

One of the features I wrote down when designing Speed of Light in 2010 was to be able to read the website in chronological order, the order in which it was originally written.

Now you can.

I wish every website had this feature.

NSNorth 2014  

I can’t believe I haven’t yet written about it, but Ottawa’s very own NSNorth is back this year and it’s looking to be better than ever (that’s saying a lot, considering I spoke at the last one!).

This year’s speaker lineup looks great, including Mattt Thompson, Jessie Char, and Don Melton.

Last year was a total blast, and Ottawa is probably Canada’s second prettiest city. You won’t regret it.

Code Reading  

Peter Seibel on code as literature:

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: “Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs.”

I think it’s true that code is not literature, but I also think it’s kind of a bum steer to approach code like science. We investigate things in science because we have to. Nature has created the world a certain way, and there’s no way to make it understandable but to investigate it.

But code isn’t a natural phenomenon, it’s something made by people, and as such we have the opportunity (and responsibility) to make it accessible without investigation.

If we need to decode something, something that we ourselves make, I think that’s a sign we shouldn’t be encoding it in the first place.

(via Ryan McCuiag)

Authors of ‘The Federalist Papers’ request Facebook rename ‘Paper’

In a letter received by Speed of Light postmarked February 3rd, 2014, the authors of The Federalist Papers contend Facebook’s latest iPhone app, Paper, should be renamed. The authors, appearing under the pseudonym Publius, write:

It has been frequently remarked, that it seems to have been reserved to the creators of Facebook, by their conduct and example, they might choose to appropriate the name Paper for their own devices. We would like to see that changed.

The authors, predicting the counter-argument that the name “paper” is a common noun, write:

Every story has a name. Despite the fact the word “paper” is indeed a generic term, and despite the fact the original name of our work was simply The Federalist (Papers was later appended by somebody else), we nonetheless feel because our work was published first, we are entitled to the name Paper. The Federalist Papers have been circulating for more than two centuries, so clearly, we have a right to the name.

The polemic towards Facebook seems to be impelled by Facebook’s specific choice of title and location:

It is especially insulting since Facebook has chosen to launch Paper exclusively in America, where each of its citizens is well aware and well versed in the materials of The Federalist Papers. It is as though they believe citizens will be unaware of the source material from which Facebook Paper is inspired. This nation’s citizens are active participants in the nation’s affairs, and this move by Facebook is offensive to the very concept.

Publius provide a simple solution:

We believe it is the right of every citizen of this nation to have creative freedoms and that’s why we kindly ask Facebook to be creative and not use our name.

Objective C is a Bad Language But Not For The Reasons You Think It Is: Probably; Unless You’ve Programmed With it for a While In Which Case You Probably Know Enough To Judge For Yourself Anyway: The Jason Brennan Rant

When most programmers are introduced to Objective C for the first time, they often recoil in some degree of disgust at its syntax: “Square brackets? Colons in statements? Yuck” (this is a close approximation of my first dances with the language). Objective C is a little different, a little other, so naturally programmers are loath to like it at first glance.

I think that reaction, the claim that Objective C is bad because of its syntax, results from two factors which coincided: 1. The syntax looks very alien; 2. Most learned it because they wanted to jump in on iOS development, and Apple more or less said “It’s this syntax or the highway, bub,” which put perceptions precariously between a rock and a hard place. Not only did the language taste bad, developers were also forced to eat it regardless.

But any developer with even a modicum of sense will eventually realize the Objective part of Objective C’s syntax is among its greatest assets. Borrowed largely (or shamelessly copied) from Smalltalk’s message sending syntax by Brad Cox (see Object Oriented Programming: An Evolutionary Approach for a detailed look at the design decisions behind Objective C), Objective C’s message sending syntax has some strong benefits over traditional object.method() method calling syntax. It allows for later binding of messages to objects, and, perhaps most practically, code reads like a sentence, with parameters prefaced with their purpose in the message.
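As a rough illustration (the Calendar class and its method are invented here, not any real API), compare how the two styles read at the call site:

    #import <Foundation/Foundation.h>

    // A made-up class, purely to contrast the two calling styles.
    @interface Calendar : NSObject
    - (void)addEvent:(NSDate *)date notifyInvitees:(BOOL)notify note:(NSString *)note;
    @end

    @implementation Calendar
    - (void)addEvent:(NSDate *)date notifyInvitees:(BOOL)notify note:(NSString *)note {
        NSLog(@"Added event at %@ (notify: %d, note: %@)", date, notify, note);
    }
    @end

    int main(void) {
        @autoreleasepool {
            Calendar *calendar = [Calendar new];

            // In a dot-and-parentheses language this might read:
            //     calendar.addEvent(date, true, "Lunch");
            // leaving the reader to recall what each argument means.

            // The message send names each argument's purpose at the call site:
            [calendar addEvent:[NSDate date]
                notifyInvitees:YES
                          note:@"Lunch"];
        }
        return 0;
    }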

Objective C’s objects are pretty flexible when compared to those of similar languages like C++ (compare the relative fun of extending and overriding parts of classes in Objective C vs C++), and they can be extended at runtime via Categories or through the runtime functions themselves (more on those soon), but Objective C’s objects pale in comparison to those of a Smalltalk-like environment, where objects are always live and browsable. Though objects can be extended at runtime, they seldom are, and are instead almost exclusively fully built by compile time (that is to say, yes, lots of objects are allocated during the runtime of an application, and yes, some methods are added via Categories which are loaded in at runtime, but rarely are whole classes added to the application at runtime).
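A minimal sketch of both extension mechanisms, using NSString as the guinea pig (the added methods are invented for illustration):

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    // A Category bolts a method onto an existing class without subclassing it.
    @interface NSString (Shouting)
    - (NSString *)shouted;
    @end

    @implementation NSString (Shouting)
    - (NSString *)shouted {
        return [[self uppercaseString] stringByAppendingString:@"!"];
    }
    @end

    // Declared here, but implemented at runtime below via the C runtime functions.
    @interface NSString (Whispering)
    - (NSString *)whispered;
    @end

    static id Whispered(id self, SEL _cmd) {
        return [[self lowercaseString] stringByAppendingString:@"..."];
    }

    int main(void) {
        @autoreleasepool {
            NSLog(@"%@", [@"hello" shouted]); // HELLO!

            // "@@:" encodes: returns an object, takes self and _cmd.
            class_addMethod([NSString class], @selector(whispered), (IMP)Whispered, "@@:");
            NSLog(@"%@", [@"HELLO" whispered]); // hello...
        }
        return 0;
    }

Both work, but as noted above, in practice almost nobody reshapes classes this way once the app is running; the shape of the program is essentially fixed at compile time.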

This compiled nature, along with the runtime functions, points to the real crux of what’s wrong with Objective C: the language still feels quite tacked on to C. The language was in fact originally built as a preprocessor to C (c.f.: Cox), but over the years it has been built up a little sturdier, all the while still sitting atop C. It’s a superset of C, so all C code is considered Objective C code, which includes (the sketch after this list shows how literal some of these are):

  • Header files
  • The preprocessor
  • Functions
  • Manual Memory Management (ARC automates this, but it’s still “automatic-manual”)
  • Header files
  • Classes are fancy structs (now C++ structs according to Landon Fuller)
  • Methods are fancy function pointers
  • Header files
  • All of this is jury-rigged by runtime C (and/or C++) functions
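The “methods are fancy function pointers” item, in particular, can be demonstrated directly. A minimal sketch, assuming nothing beyond Foundation and the Objective C runtime:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    int main(void) {
        @autoreleasepool {
            NSString *greeting = @"hello";

            // Look the method up by its selector...
            Method method = class_getInstanceMethod([greeting class],
                                                    @selector(uppercaseString));

            // ...and what comes back is, underneath, a plain C function pointer.
            NSString *(*imp)(id, SEL) =
                (NSString *(*)(id, SEL))method_getImplementation(method);

            NSLog(@"%@", imp(greeting, @selector(uppercaseString))); // HELLO
        }
        return 0;
    }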

In addition, Objective C has its own proper warts, including a lack of method visibility modifiers (like protected, private, partytime, and public), a lack of class namespacing (although curiously protocols exist in their own namespace), required method declarations for public methods, the lack of a proper importing system (yes, I’m aware of @import), reliance on C’s library linking because it has none of its own, header files, a weak abstraction for dynamically sending messages (selectors are not much more than strings), the need to compile and re-run to see changes, etc.
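On the selector point, a minimal sketch of how weak the abstraction is (the misspelled selector is deliberate): any string becomes a selector, and you only find out at runtime whether anything answers to it.

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            NSString *target = @"hello";

            SEL probablyFine = NSSelectorFromString(@"uppercaseString");
            SEL probablyNot  = NSSelectorFromString(@"uppercaseStrnig"); // typo

            // No compiler ever checked either of these names.
            NSLog(@"%d", [target respondsToSelector:probablyFine]); // 1
            NSLog(@"%d", [target respondsToSelector:probablyNot]);  // 0
        }
        return 0;
    }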

John Siracusa has talked at length about the kinds of problems a problem-riddled language like Objective C can cause in his Copland 2010 essays. In short, Objective C is a liability.

I don’t see Apple dramatically improving or replacing Objective C anytime soon. It doesn’t seem to be in their technical culture, which still largely revolves around C. Apple has routinely added language features (e.g., Blocks, modules) and libraries (e.g., libdispatch) at the C level. Revolving around a language like C makes sense when you consider Apple’s performance-driven culture (“it scrolls like butter because it has to”). iOS, Grand Central Dispatch, Core Animation, and WebKit are all written at the C or C++ level, where runtime costs are nearly nonexistent and where higher level concepts like a true object system can’t belong, due to the performance goals the company is ruled by.

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive it. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. I work on programming languages professionally at Hopscotch, but I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.