Speed of Light
Intuitive is the Enemy of Good  

Graham Lee:

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience.

I think Graham is very right here (and it’s not just limited to mobile, either, but it’s definitely most obvious there). It’s so hard to make software that actually, truly, does something useful for a person, to help them understand and make decisions, because we have to focus so much on the lowest common denominator.

We see those awesome numbers of how many iOS devices there are in the wild, and we think “If I could just get N% of those users, I’d have a ton of money” and it’s true, but it means you’ve also got to appeal to a huge population of users. You have to go broad instead of deep. The amount of time someone spends in your software is often measured in seconds. How do you do much of anything meaningful in seconds? 140 characters? Six seconds of video?

And with an audience so broad and an application so generic, you can’t expect to charge very much for it. This is why anything beyond $1.99 is unthinkable in the App Store (most users won’t pay anything at all).

How Much Programming Language is Enough?  

Graham Lee on the “size” of a programming language:

What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences, that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

I wouldn’t argue that about programming languages. Aside from languages which are purposefully limited in scope or in target (Logo and Hopscotch come to mind), I think most programming languages aren’t tremendously different in terms of their syntax or capability.

Compare Scheme with Java. Although Java does have more syntax than Scheme, it’s not really that much more in the grand scheme (sorry) of things. Where languages really do differ in power is in libraries, but then that’s really just a story of “Who’s done the work, me or the vendor?”

I don’t think languages need the kitchen sink, but I do think languages need to be able to build the kitchen sink.

Assorted Followup, Explanation, And Afterthoughts Regarding ‘Objective Next’

On Monday, I published an essay exploring some thoughts about a replacement for Objective C, how to really suss out what I think would benefit software developers the most, and how we could go about implementing that. Gingerly though I pranced around certain details, and implore though I did for developers not to get caught up on them, alas many readers were snagged on some of the less important parts of the essay. So, I’d like to, briefly if I may, attempt to clear some of those up.

What We Make

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) into thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, when instead we should consider new kinds of things it might enable us to do.

Word processors are a prime example of this. When the personal computer revolution began, it was aided largely by the word processor — essentially a way to automatically typeset your document. The document — the content of what you produced — was otherwise identical, but the word processor made your job of typesetting much easier.

Spreadsheets, on the other hand, were something essentially brand new that emerged from the computer. Instead of just doing an old analog task, but better (as was the case with the word processor), spreadsheets allowed users to do something they just couldn’t do otherwise without the computer.

The important lesson of the spreadsheet, the one I’m trying to get at, is that it got to the heart of what people in business wanted to do: it was a truly new, flexible, and better way to approach data, like finances, sales, and other numbers. It wasn’t just paper with the kinks worked out, it wasn’t just a faster horse, it was a real, new thing that solved their problems in better ways.

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

It’s far from being just about a pretty interface, it’s about rethinking what we’re even trying to accomplish. We’re trying to make software that’s understandable, that’s powerful, that’s useful, and that will benefit both our customers and ourselves. And while I think we might eventually get there if we keep trotting along as we’re currently doing, I think we’re also capable of leaping forward. All it takes is some imagination and maybe a pinch of willingness.

Graphical Programming

When “graphical programming” is brought up around programmers, the lament is palpable. To most, graphical programming translates literally into “pretty boxes with lines connecting them,” something akin to UML, where the “graphical” part of programming is actually just a way for the graphics to represent code (but please do see GRAIL, a graphical diagramming tool designed in the late 1960s which still spanks the pants off most graphical coding tools today). This is not what I consider graphical programming to be. This is, at best, graphical coding, and I lament it right along with everyone else.

When I mention “graphical programming” I mean creating a graphical program (e.g., a view with colours and text) in a graphical way, like drawing out rectangles, lines, and text as you might do in Illustrator (see Bret Victor’s work (I know, you didn’t expect me to link to him, right?) for probably my inspiration here). When most people hear graphical programming, they think drawing abstract boxes (that probably generate code, yikes), but what I’m talking about is drawing the actual interface, as concretely as possible (and then abstracting the interface for new data).

There are loads of crappy attempts at the former, and very few attempts at all at the latter. There’s a whole world waiting to be attempted.

Interface Builder

Interface Builder is such an attempt at drawing out your actual, honest to science, interface in a graphical way, and it’s been moderately successful, but I think the tool falls tremendously short. Your interfaces unfortunately end up conceptually the same as mockups (“How do I see this with my real application data? How do I interact with it? How come I can only change certain view properties, but not others, without resorting to code?”). These deficiencies arise because IB is a graphical tool in a code-first town. Although it abides, IB is a second-class citizen so far as development tools go. Checkboxes for interface features get added essentially at Apple’s whim.

What we need is a tool where designing an interface means everything interface related must be a first-class citizen.

Compiled vs Interpreted

Oh my goodness do I really have to go there? After bleating for so many paragraphs about considering thinking beyond precisely what must be worked on right-here-and-now, so many get caught up on the Compiled-v-Interpreted part.

Just to be clear, I understand the following (and certainly, much more):

  1. Compiled applications execute faster than interpreted ones.
  2. Depending on the size of the runtime or VM, an interpreted language consumes more memory and energy than a compiled language.
  3. Some languages (like Java) are actually compiled to a kind of bytecode, which is then executed in a VM (fun fact: I used to work on the JVM at IBM as a co-op student).

All that being said, I stand by my original assertion that for the vast majority of the kinds of software most of us in the Objective C developer community build, the differences between the two styles of language in terms of performance are negligible, not in terms of absolute difference, but in terms of what’s perceptible to users. And that will only improve over time, as phones, tablets, and desktop computers all amaze our future selves by how handily they run circles around what our today selves even imagined possible.

If I leave you with nothing else, please just humour me about all this.

Objective Next

A few weeks ago, I posted my lengthy thoughts about Objective C, what’s wrong with it, and what I think will happen in the future with it:

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive [replacing it]. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. […] I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

There has been lots of talk in the weeks since I posted my article criticizing Objective C, including posts by my friend Ash Furrow, Steve Streza, Guy English, and Brent Simmons. Many of their criticisms are similar to mine or ring true, but the suggestions for fixing the ills of Objective C almost all miss the point:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

I work on programming languages professionally at Hopscotch, which I mention not so I can brag about it but so I can explain this is a subject I care deeply about, something I work on every day. This isn’t just a cursory glance because I’ve had some grumbles with the language. This essay is my way of critically examining and exploring possibilities for future development environments we can all benefit from. That requires stepping a bit beyond what most Objective C developers seem willing to consider, but it’s important nonetheless.

Figure out what we’re trying to make

We think we know what we want from an Objective C replacement. We want Garbage Collection, we want better concurrency, we don’t want pointers. We want better networking, better databases, better syncing functionality. We want a better, real runtime. We don’t want header files or import statements.

These are all really very nice and good, but they’re actually putting the CPU before the horse. If you ask most developers why they want any of those things, they’ll likely tell you it’s because those are the rough spots of Objective C as it exists today. But they’ll say nothing of what they’re actually trying to accomplish with the language (hat tip to Guy English though for being the exception here).

This kind of thinking is what Alan Kay refers to as “Instrumental thinking”, where you only think of new inventions in terms of how they can allow you to do your same precise job in a better way. Personal computing software has fallen victim to instrumental thinking routinely since its inception. A word processor’s sole function is to help you lay out your page better, but it does nothing to help your writing (orthography is a technicality).

The same goes for the thinking around replacing Objective C. Almost all the wishlists for replacements simply ask for wrinkles to be ironed out.

If you’re wondering what such a sandpapered Objective Next might look like, I’ll point you to one I drew up in early 2013 (while I too was stuck in the instrumental thinking trap, I’ll admit).

It’s easy to get excited about the (non-existent) language if you’re an Objective C programmer, but I’m imploring the Objective C community to try and think beyond a “new old thing”, to actually think of something that solves the bigger picture.

When thinking about what could really replace Objective C, then, it’s crucial to clear your mind of the minutia and dirt involved in how you program today, and try to think exclusively of what you’re trying to accomplish.

For most Objective C developers, we’re trying to make high quality software that looks and feels great to use. We’re looking to bring a tremendous amount of delight and polish to our products. And hopefully, most importantly, we’re trying to build software to significantly improve the lives of people.

That’s what we want to do. That’s what we want to do better. The problem isn’t about whether or not our programming language has garbage collection, the problem is whether or not we can build higher quality software in a new environment than we could with Objective C’s code-wait-run cycle.

In the Objective C community, “high quality software” usually translates to visually beautiful and fantastically usable interfaces. We care a tremendous amount about how our applications are presented to and understood by our users, and this kind of quality takes a mountain of effort to accomplish. Our software is usually developed by a team of programmers and a team of designers, working in concert to deliver on the high quality standards we’ve set for ourselves. More often than not, the programmers become the bottleneck, if only because every other part of the development team must ultimately have their work funnelled through code at some point. This causes long feedback loops in development, and if it’s frustrating to make and compare improvements to the design, it is often forgone altogether.

This strain trickles down to the rest of the development process. If it’s difficult to experiment, then it’s difficult to imagine new possibilities for what your software could do. This, in part, reinforces our instrumental thinking, because it’s usually just too painful to try and think outside the box. We’d never be able to validate our outside-the-box thinking even if we wanted to! And thus, this too strains our ability to build software that significantly enhances the lives of our customers and users.

With whatever Objective C replacement there may be, whether we demand it or we build it ourselves, isn’t it best to think not about how to improve Objective C but instead about how to make the interaction between programmer and designer more cohesive? Or how to shift some of the power (and responsibility) of the computer out of the hands of the programmer and into the arms of the designer?

Something as simple as changing the colour or size of text should not be the job of the programmer, not because the programmer is lazy (which is most certainly probably true anyway) but because these are issues of graphic design, of presentation, which the designer is surely better trained and better equipped to handle. Yet this sort of operation is almost agonizing from a development perspective. It’s not that making these changes is hard, but that it often requires the programmer to switch tasks, when and only when there is time, and then present the changes to the designer for approval. This is one loose feedback loop, and there’s no real good reason why it has to be this way. It might work out pretty well the other way.

Can you think of any companies where design is paramount?

When you’re thinking of a replacement for Objective C, remember to think of why you want it in the first place. Think about how we can make software in better ways. Think about how your designers can improve your software if they had more access to it, or how you could improve things if you could only see them.

This is not just about a more graphical environment, and it’s not just about designers playing a bigger role. It’s about trying to seek out what makes software great, and how our development tools could enable software to be better.

How do we do it?

If we’re going to build a replacement programming environment for Objective C, what’s it going to be made of? Most compiled languages can be built with LLVM quite handily these days—

STOP

We’ve absolutely got to stop and check more of our assumptions first. Why do we assume we need a compiled language? Why not an interpreted language? Objective C developers are pretty used to this concept, and most developers will assert compiled languages are faster than interpreted or virtual machine languages (“just look at how slow Android is, this is because it runs Java and Java runs in a VM,” we say). It’s true that compiled apps are almost always going to be faster than interpreted apps, but the difference isn’t substantial enough to close the door on them so firmly, so quickly. Remember, today’s iPhones are just as fast as, if not faster than, a pro desktop computer from ten years ago, and those ran interpreted apps just fine. While you may be able to point at stats and show me that compiled apps are faster, in practice the differences are often negligible, especially with smart programmers doing the implementation. So let’s keep the door open on interpreted languages.

Whether compiled or interpreted, if you’re going to make a programming language then you definitely need to define a grammar, work on a parser, and—

STOP

Again, we’ve got to stop and check another assumption. Why make the assumption that our programming environment of the future must be textual? Lines of pure code, stored in pure text files, compiled or interpreted, it makes little difference. Is that the future we wish to inherit?

We presume code whenever we think programming, probably because it’s all most of us are ever exposed to. We don’t even consider the possibility that we could create programs without typing in code. But with all the abundance and diversity of software, both graphical and not, should it really seem so strange that software itself might be created in a medium other than code?

“Ah, but we’ve tried that and it sucked,” you’ll say. For every sucky coded programming language, there’s probably a sucky graphical programming language too. “We’ve tried UML and we’ve tried Smalltalk,” you’ll say, and I’ll say “Yes we did, 40 years of research and a hundred orders of magnitude ago, we tried, and the programming community at large decided it was a capital Bad Idea.” But as much as times change, computers change more. We live in an era of unprecedented computing power, with rich (for a computer) displays, ubiquitous high speed networking, and decades of research.

For some recent examples of graphical programming environments that actually work, see Bret Victor’s Stop Drawing Dead Fish and Drawing Dynamic Visualizations talks, or Toby Schachman’s (of Recursive Drawing fame) excellent talk on opening up programming to new demographics by going visual. I’m not saying any one of these tools, as is, is a replacement for Objective C, but I am saying these tools demonstrate what’s possible when we open our eyes, if only the tiniest smidge, and try to see what software development might look like beyond coded programming languages.

And of course, just because we should seriously consider non-code-centric languages doesn’t mean that everything must be graphical either. There are of course concepts we can represent linguistically which we can’t map or model graphically, so to completely eschew a linguistic interface to program creation would be just as absurd as completely eschewing a graphical interface to program creation in a coded language.

The benefits for even the linguistic parts of a graphical programming environment are plentiful. Consider the rich typographic language we forego when we code in plain text files. We lose the benefits of type choices, of font sizes and weight, hierarchical groupings. Even without any pictures, think how much more typographically rich a newspaper is compared to a plain text program file. In code, we’re relegated to fixed-width, same size and weight fonts. We’re lucky if we get any semblance of context from syntax highlighting, and it’s often a battle to impel programmers to use whitespace to create ersatz typographic hierarchies in code. Without strong typography, nothing looks any more important than anything else as you gawk at a source code file. Experienced code programmers can see what’s important, but they’re not seeing it with their eyes. Why should we, the advanced users of advanced computers, be working in a medium that’s less visually rich than even the first movable type printed books, five centuries old?

And that’s to say nothing of other graphical elements. Would you like to see some of my favourite colours? Here are three: #fed82f, #37deff, #fc222f. Aren’t they lovely? The computer knows how to render those colours better than we know how to read hex, so why doesn’t it do that? Why don’t we demand this of our programming environment?
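
If you want to see how little it takes, here’s a rough sketch in Objective C (the helper name is mine, purely for illustration) of the gap between the hex the programmer squints at and the colour the computer could simply show:

    #import <UIKit/UIKit.h>

    // Hypothetical helper: turn a hex value like 0xfed82f into a colour the
    // environment could render inline, instead of making humans decode digits.
    static UIColor *SOLColorFromHex(NSUInteger hex) {
        CGFloat red   = ((hex >> 16) & 0xFF) / 255.0;
        CGFloat green = ((hex >> 8)  & 0xFF) / 255.0;
        CGFloat blue  = ( hex        & 0xFF) / 255.0;
        return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
    }

    // Somewhere in a view controller:
    //   self.view.backgroundColor = SOLColorFromHex(0xfed82f);  // a warm yellow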

Objective: Next

If we’re ever going to get a programming environment of the future, we should make sure we get one that’s right for us and our users. We should make sure we’re looking far down the road, not at our shoes. We shouldn’t try to build a faster horse, but we should instead look where we’re really trying to go and then find the best way to get there.

We also shouldn’t rely on one company to get us there. There’s still plenty to be discovered by plenty of people. If you’d like to help me discover it, I’d love to hear from you.

Read Speed of Light in Chronological Order

One of the features I wrote down when designing Speed of Light in 2010 was to be able to read the website in chronological order, the order in which it was originally written.

Now you can.

I wish every website had this feature.

NSNorth 2014  

I can’t believe I haven’t yet written about it, but Ottawa’s very own NSNorth is back this year and it’s looking to be better than ever (that’s saying a lot, considering I spoke at the last one!).

This year’s speaker lineup looks great, including Mattt Thompson, Jessie Char, and Don Melton.

Last year was a total blast, and Ottawa is probably Canada’s second prettiest city. You won’t regret it.

Code Reading  

Peter Seibel on code as literature:

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: “Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs.”

I think it’s true that code is not literature, but I also think it’s kind of a bum steer to approach code like science. We investigate things in science because we have to. Nature has created the world a certain way, and there’s no way to make it understandable but to investigate it.

But code isn’t a natural phenomenon, it’s something made by people, and as such we have the opportunity (and responsibility) to make it accessible without investigation.

If we need to decode something, something that we ourselves make, I think that’s a sign we shouldn’t be encoding it in the first place.

(via Ryan McCuaig)

Authors of ‘The Federalist Papers’ request Facebook rename ‘Paper’

In a letter received by Speed of Light postmarked February 3rd, 2014, the authors of The Federalist Papers contend Facebook’s latest iPhone app, Paper, should be renamed. The authors, appearing under the pseudonym Publius, write:

It has been frequently remarked, that it seems to have been reserved to the creators of Facebook, by their conduct and example, they might choose to appropriate the name Paper for their own devices. We would like to see that changed.

The authors, predicting the counter-argument that the name “paper” is a common noun, write:

Every story has a name. Despite the fact the word “paper” is indeed a generic term, and despite the fact the original name of our work was simply The Federalist (Papers was later appended by somebody else), we nonetheless feel because our work was published first, we are entitled to the name Paper. The Federalist Papers have been circulating for more than two centuries, so clearly, we have a right to the name.

The polemic towards Facebook seems to be impelled by Facebook’s specific choice of title and location:

It is especially insulting since Facebook has chosen to launch Paper exclusively in America, where each of its citizens is well aware and well versed in the materials of The Federalist Papers. It is as though they believe citizens will be unaware of the source material from which Facebook Paper is inspired. This nation’s citizens are active participants in the nation’s affairs, and this move by Facebook is offensive to the very concept.

Publius provide a simple solution:

We believe it is the right of every citizen of this nation to have creative freedoms and that’s why we kindly ask Facebook to be creative and not use our name.

Objective C is a Bad Language But Not For The Reasons You Think It Is: Probably; Unless You’ve Programmed With it for a While In Which Case You Probably Know Enough To Judge For Yourself Anyway: The Jason Brennan Rant

When most programmers are introduced to Objective C for the first time, they often recoil in some degree of disgust at its syntax: “Square brackets? Colons in statements? Yuck” (this is a close approximation of my first dances with the language). Objective C is a little different, a little other, so naturally programmers are loath to like it at first glance.

I think that reaction, the claim that Objective C is bad because of its syntax, results from two factors which coincided: 1. The syntax looks very alien; 2. Most learned it because they wanted to jump in on iOS development, and Apple more or less said “It’s this syntax or the highway, bub,” which put perceptions precariously between a rock and a hard place. Not only did the language taste bad, developers were also forced to eat it regardless.

But any developer with even a modicum of sense will eventually realize the Objective part of Objective C’s syntax is among its greatest assets. Borrowed largely (or shamelessly copied) from Smalltalk’s message sending syntax by Brad Cox (see Object Oriented Programming: An Evolutionary Approach for a detailed look at the design decisions behind Objective C), Objective C’s message sending syntax has some strong benefits over traditional object.method() method calling syntax. It allows for later binding of messages to objects, and perhaps most practically, code reads like a sentence, with parameters prefaced with their purpose in the message.
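
To see what that reads like in practice, here’s a plain Foundation example (standard API, nothing invented; the function is just a wrapper for illustration):

    #import <Foundation/Foundation.h>

    static void SOLSentenceExample(void) {
        // The keyword syntax labels each argument with its purpose, so the call
        // reads roughly like a sentence:
        NSString *greeting = [@"Hello, world" stringByReplacingOccurrencesOfString:@"world"
                                                                        withString:@"reader"];
        NSLog(@"%@", greeting); // "Hello, reader"

        // The equivalent object.method(arg1, arg2) style leaves you guessing
        // which argument is which:
        //
        //     greeting = str.replace("world", "reader");
    }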

Objective C’s objects are pretty flexible when compared to similar languages like C++ (compare the relative fun of extending and overriding parts of classes in Objective C vs C++), and can be extended at runtime via Categories or through the runtime functions themselves (more on those soon), but Objective C’s objects pale in comparison to those of a Smalltalk-like environment, where objects are always live and browsable. Though objects can be extended at runtime, they seldom are, and are instead almost exclusively fully built by compile time (that is to say, yes, lots of objects are allocated during the runtime of an application, and yes, some methods are added via Categories which are loaded in at runtime, but rarely are whole classes added to the application at runtime).
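
For anyone who hasn’t poked at this corner of the language, here’s a hedged sketch of both mechanisms (the category, method, and function names are made up for illustration):

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    // Extending an existing class with a Category, resolved when the binary loads:
    @interface NSString (SOLCounting)
    - (NSUInteger)sol_vowelCount;
    @end

    @implementation NSString (SOLCounting)
    - (NSUInteger)sol_vowelCount {
        NSCharacterSet *vowels = [NSCharacterSet characterSetWithCharactersInString:@"aeiouAEIOU"];
        return [[self componentsSeparatedByCharactersInSet:vowels] count] - 1;
    }
    @end

    // Adding a whole new method at runtime is also possible, though in practice rare:
    static void SOLGreet(id self, SEL _cmd) {
        NSLog(@"Hello from %@", self);
    }
    // class_addMethod([NSString class], @selector(sol_greet), (IMP)SOLGreet, "v@:");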

This compiled nature, along with the runtime functions, points to the real crux of what’s wrong with Objective C: the language still feels quite tacked on to C. The language was in fact originally built as a preprocessor to C (cf. Cox), but over the years it has been built up a little sturdier, all the while still remaining atop C. It’s a superset of C, so all C code is considered Objective C code, which includes:

  • Header files
  • The preprocessor
  • Functions
  • Manual Memory Management (ARC automates this, but it’s still “automatic-manual”)
  • Header files
  • Classes are fancy structs (now C++ structs according to Landon Fuller)
  • Methods are fancy function pointers
  • Header files
  • All of this is jury-rigged by runtime C (and/or C++) functions

In addition, Objective C has its own proper warts: it lacks method visibility modifiers (like protected, private, partytime, and public), lacks class namespacing (although curiously protocols exist in their own namespace), requires method declarations for public methods, lacks a proper importing system (yes, I’m aware of @import), suffers from C’s approach to library linking because it lacks its own, has header files, has a weak abstraction for dynamically sending messages (selectors are not much more than strings), must be compiled and re-run to see changes, etc.
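
None of that is hidden, either. A few lines against the runtime (a sketch, not something you’d ship) show how thin the abstraction is, and how literally selectors are just names and methods just function pointers:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    static void SOLPeekAtTheRuntime(void) {
        // A selector is little more than an interned string:
        SEL sel = @selector(uppercaseString);
        const char *name = sel_getName(sel);   // "uppercaseString"
        SEL sameSel = sel_registerName(name);  // round-trips to the same selector

        // And a method is, underneath, a C function pointer you can pull out and call:
        IMP imp = class_getMethodImplementation([NSString class], sel);
        NSString *shouted = ((NSString *(*)(id, SEL))imp)(@"hello", sel);

        NSLog(@"%@, selectors match: %d", shouted, sel == sameSel); // "HELLO, selectors match: 1"
    }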

John Siracusa has talked at length about what kinds of problems a problem-ridden language like Objective C can cause in his Copland 2010 essays. In short, Objective C is a liability.

I don’t see Apple dramatically improving or replacing Objective C anytime soon. It doesn’t seem to be in their technical culture, which still largely revolves around C. Apple has routinely added language features (e.g., Blocks, modules) and libraries (e.g., libdispatch) at the C level. Revolving around a language like C makes sense when you consider Apple’s performance-driven culture (“it scrolls like butter because it has to”). iOS, Grand Central Dispatch, Core Animation, and WebKit are all written at the C or C++ level, where runtime costs are near-nonexistent and where higher level concepts like a true object system can’t belong, due to the performance goals the company is ruled by.

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive it. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. I work on programming languages professionally at Hopscotch, but I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

Understanding Software

Many of us create (and all of us use) software, but few if any of us have examined software as a medium. We bumble in the brambles blind to the properties of software, how we change it, and most importantly, how it changes us.

In this talk, I examine the medium of software, how it collectively affects us, and demonstrate what it means for new kinds of software we’re capable of making.

I will be presenting “Understanding Software” at Pivotal Labs NYC on February 25, and if you create or use software, I invite you to come.

Why do We Step on Broken Glass?

Have you ever stepped barefoot on a piece of broken glass and got it stuck in your foot? It was probably quite painful and you most likely had to go to the hospital. So why did you step on it? Why do we do things that hurt us?

The answer, of course, is we couldn’t see we were stepping on a piece of glass! Perhaps somebody had smashed something the night before and thought they’d swept up all the pieces, but here you are with a piece of glass in your foot. But the leftover pieces are so tiny, you can’t even see them. If you could see them, you certainly would not have stepped on them.

Why do we do harmful things to ourselves? Why do we pollute the planet and waste its resources? Why do we fight? Why do we discriminate and hate? Why do we ignore facts and instead trust mystics (i.e., religion)?

The answer, of course, is we can’t see all the things we’re doing wrong. We can’t see how bombs and drones harm others across the world because theirs is a world different from ours. We can’t see how epidemics spread because germs are invisible, and if we’re sick then we’re too occupied to think about anything else. We can’t see how evolution or global climate change could possibly be real because we only see things on the scale of a human lifetime, not over hundreds or thousands of years.

Humans use inventions to help overcome the limits of our perception. Microscopes and telescopes help us see the immensely small and the immensely large, levers and pulleys help us move the massive. Books help us hear back in time.

Our inventions can help us learn more about time and space, more about ourselves and more about everyone else, if we choose, but so frequently it seems we choose not to do that. We choose to keep stepping on glass, gleefully ignorant of why it happens. “This is how the world is,” we think, “that’s a shame.”

The most flexible and obvious tool we can use to help make new inventions is of course the computer, but it’s not going to solve these problems on its own, and it’s far from the end of the road. We need to resolve to invent better ways of understanding ourselves and each other, better ways of “seeing” with all our senses that which affects the world. We need to take a big step and stop stepping on broken glass.

Marco Arment Representing What’s Wrong and Elitist About Software Development  

Marco Arment:

But I’m not a believer that everyone should podcast, or that podcasting should be as easy as blogging. There’s actually a pretty strong benefit to it requiring a lot of effort: fewer bad shows get made, and the work that goes into a good show is so clear and obvious that the effort is almost always rewarded.

It’s fine to not believe everyone should podcast, but the concept that podcasting should not be easy, that it should be inaccessible and that it’s a good thing, is incredibly pompous and arrogant. It’s pompous and arrogant because it implies only those who have enough money to buy a good rig and enough time and effort to waste on editing (and yes, it is a waste if a better tool could do it with less time or effort) should be able to express themselves and be heard by podcasters. It says “If you can’t pay your dues, then you don’t deserve to be listened to.”

It would be like saying “blogging shouldn’t be as easy as typing in a web form, and if fewer people were able to do it, it’d make it better for everyone who likes reading blogs” (Marco, by the way, worked at Tumblr for many years), which is as absurd as it is offensive.

Podcasts, blogging, and the Web might not have been founded on meritocratic ideals, but I think it’s safe to say anyone who pays attention sees them as equalizers, that no matter how big or how small you are, you can have just as much say as anyone else. That it doesn’t always end up that way isn’t the point. The point is, these media bring us closer to an equal playing field than anything before.

Making a good podcast will never be as easy as writing text, and if you’re a podcast listener, that’s probably for the best.

Making a good podcast will never be as easy as writing text, except for the fact podcasts involve speaking, an innate human ability most of us learn around age 1, and we learn writing (a not-innate ability) at a later age. We spend many of our waking hours speaking, and few people write at any length.

If you’re a podcast listener, it’s probably for the best.

Riccardo Mori on the Stylus  

Responding to my “Who Wants a Stylus?” piece from earlier this week:

Now, as someone who handles a lot of text on lots of devices, here’s a stylus-based application I’d love to use: some sort of powerful writing environment in which I could, for example, precisely select parts of a text, highlight them, copy them out of their context and aggregate them in annotations and diagrams which could in turn maintain the link or links to the original source at all times, if needed.

Similarly, it would be wonderful if groups of notes, parts of a text, further thoughts and annotations, could be linked together simply by tracing specific shapes with the stylus, creating live dependences and hierarchies.

This is precisely the sort of thing I hoped to rouse with my essay, and I’m glad to hear the creative gears spinning. What Riccardo proposes sounds like a fantastic use of the stylus, and reminds me about what I’ve read on Doug Engelbart’s NLS system, too.

Who Wants A Stylus?

The stylus is an overlooked and under-appreciated mode of interaction for computing devices like tablets and desktop computers, with many developers completely dismissing it without even a second thought. Because of that, we’re missing out on an entire class of applications that require the precision of a pencil-like input device which neither a mouse nor our fingers can match.

Whenever the stylus as an input device is brought up, the titular quote from Steve Jobs inevitably rears its head. “You have to get ‘em, put ‘em away, you lose ‘em,” he said in the MacWorld 2007 introduction of the original iPhone. But this quote is almost always taken far out of context (and not to mention, one from a famously myopic man — that which he hated, he hated a lot), along with his later additional quote about other devices, “If you see a stylus, they blew it.”

What most people seem to miss, however, is Steve was talking about a class of devices whose entire interaction required a stylus and couldn’t be operated with just fingers. If every part of the device needed a stylus, then it’d be difficult to use single-handedly, and deadweight were you to misplace the stylus. These devices, like the Palm PDAs of yesteryear, were frustrating to use because of that, but it’s no reason to outlaw the input mechanism altogether.

Thus, Steve’s myopia has spread to many iOS developers. Developers almost unanimously assert the stylus is a bad input device, but again, I believe it’s because those quotes have put in our minds an unfair black and white picture: Either we use a stylus or we use our fingers.

“So let’s not use a stylus.”

Let’s imagine for a moment or two a computing device quite a lot like one you might already own. It could be a computing device you use with a mouse or trackpad and keyboard (like a Mac) or it could be a device you use with your fingers (like an iPad). Whatever the case, imagine you use such a device on a regular basis, solely with the main input devices provided with the computer, as you do now. But this computer has one special property: it can magically make any kind of application you can dream of, instantly. This is your Computer of the Imagination.

One day, you find a package addressed to you has arrived on your doorstep. Opening it up, you discover something you recognize, but are generally unfamiliar with. It looks quite a bit like a pencil without a “lead” tip. It’s a stylus. Beside it in the package is a note that simply says “Use it with your computer.”

You grab your Computer of the Imagination and start to think of applications you can use which could only work with your newly arrived stylus. What do they look like? How do they work?

You think of the things you’d normally do with a pencil. Writing is the most obvious one, so you get your Computer of the Imagination to make you an app that lets you write with the stylus. It looks novel because, “Hey, that’s my handwriting!” on the screen, but you soon grow tired of writing with it. “This is much slower and less accurate than using a keyboard,” you think to yourself.

Next, you try making a drawing application. This works much better, you think to yourself, because the stylus provides accuracy you just couldn’t get with your fingers. You may not be the best at drawing straight lines or perfect circles, but thankfully your computer can compensate for that. You hold the stylus in your dominant hand while issuing commands with the other.

Your Computer of the Imagination grows bored and prompts you to think of another application to use with the stylus.

You think. And think. And think…


If you’re drawing a blank, then you’re in good company. I have a hard time thinking of things I can do with a stylus because I’m thinking in terms of what I can do with a pencil. I’ve grown up drawing and writing with pencils, but doing little else. If the computer is digital paper, then I’ve pretty much exhausted what I can do with analog paper. But of course, the computer is so much more than just digital paper. It’s dynamic, it lets us go back and forth in time. It’s infinite in space. It can cover a whole planet’s worth of area and hold a whole library’s worth of information.

But what could this device do if it had a different way to interact with it? I’m not claiming the stylus is new, but to most developers, it’s at least novel. What kind of doors could a stylus open up?

“Nobody wants to use a stylus.”

I thought it’d be a good idea to ask some friends of mine their thoughts on the stylus as an input device, both on how they use one today, and what they think it might be capable of in the future (note these interviews were done in July 2013, I’m just slow at getting things published).

Question: How do you find support in apps for the various styluses you’ve tried?

Joe Cieplinski: I’ve mainly used it in Paper, iDraw, and Procreate, all of which seem to have excellent support for it. At least as good as they can, given that the iPad doesn’t have touch sensitivity. In other apps that aren’t necessarily for art I haven’t tried to use the stylus as much, so can’t say for sure. Never really occurred to me to tap around buttons and such with my stylus as opposed to my finger.

Ryan Nystrom: I use a Cosmonaut stylus with my iPad for drawing in Paper. The Cosmonaut is the only stylus I use, and Paper is the only app I use it in (also the only drawing app I use). I do a lot of prototyping and sketching in it on the go. I have somewhat of an art background (used to draw and paint a lot) so I really like having a stylus over using my fingers.

Dan Leatherman: Support is pretty great for the Cosmonaut, and it’s made to be pretty accurate. I find that different tools (markers, paintbrushes, etc.) adapt pretty well.

Dan Weeks: For non-pressure sensitive stylus support it’s any app and I’ve been known to just use the stylus because I have it in my hand. Not for typing but I’ve played games and other apps besides drawing with a stylus. Those all seem to work really well because of the uniformity of the nib compared to a finger.

Question: Do you feel like there is untapped (pardon the pun) potential for a stylus as an input device on iOS? It seems like most people dismiss the stylus, but it seems to me like a tool more precise than a finger could allow for things a finger just isn’t capable of. Are there new doors you see a stylus opening up?

Joe Cieplinski: I was a Palm user for a very long time. I had various Handspring Visors and the first Treo phones as well. I remember using the stylus quite a bit in all that time. I never lost a stylus, but I did find having to use two hands instead of one for the main user interface cumbersome.

The advantage of using a stylus with a Palm device was that the stylus was always easy to tuck back into the device. One of the downsides to using a stylus with an iPad is that there’s no easy place to store it. Especially for a fat marker style stylus like the Cosmonaut.

While it’s easy to dismiss the stylus, thanks to Steve Jobs’ famous “If you see a stylus, they blew it” quote, I think there are probably certain applications that could benefit more from using a more precise pointing device. I wouldn’t ever want a stylus to be required to use the general OS, but for a particular app that had good reason for small, precise controls, it would be an interesting opportunity. Business-wise, there’s also potential there to partner up between hardware and software manufacturers to cross promote. Or to get into the hardware market yourself. I know Paper is looking into making their own hardware, and Adobe has shown off a couple of interesting devices recently.

Ryan Nystrom: I do, and not just with styli (is that a word?). I think Adobe is on to something here with the Mighty.

I think there are 2 big things holding the iPad back for professionals: touch sensitivity (i.e. how hard you are pressing) and screen thickness.

The screen is honestly too thick to be able to accurately draw. If you use a Jot or any other fine-tip stylus you’ll see what I mean: the point of contact won’t precisely correlate with the pixel that gets drawn if your viewing angle is not 90 degrees perpendicular to the iPad screen. That thickness warps your view and makes drawing difficult once you’ve removed the stylus from the screen and want to tap/draw on a particular point (try connecting two 1px dots with a thin line using a Jot).

There also needs to be some type of pressure sensitivity. If you’re ever drawing or writing with a tool that blends (pencil, marker, paint), quick+light strokes should appear lighter than slow, heavy strokes. Right now this is just impossible.

Oleg Makaed: I believe we will see more support from the developers as styluses and related technology for iPad emerge (as to me, stylus is an early stage in life of input devices for tablets). As of now, developers can focus on solving existing problems: the fluency of stylus detection, palm interference with touch surface, and such.

Tools like the Project Mighty stylus and Napoleon ruler by Adobe can be very helpful for creative minds. Nevertheless, touch screens were invented to make the experience more natural and intuitive, and stylus as a mass product doesn’t seem right. Next stage might bring us wearable devices that extend our limbs and will act in a consistent way. The finger-screen relationships aren’t perfect yet, and there is still room for new possibilities.

Dan Leatherman: I think there’s definite potential here. Having formal training in art, I’m used to using analog tools, and no app (that I’ve seen) can necessarily emulate that as well as I’d like. The analog marks made have inconsistencies, but the digital marks just seem too perfect. I love the idea of a paintbrush stylus (but I can’t remember where I saw it).

Dan Weeks: I think children are learning with fingers but that finger extensions, which any writing implement is, become very accurate tools for most people. That may just have been the way education was and how it focused on writing so much, but I think it’s a natural extension that with something you can use multiple muscles to fine tune the 3D position of you’ll get good results.

I see a huge area for children and information density. With a finger in a child-focused app larger touch targets are always needed to account for clumsiness in pointing (at least so I’ve found). I imagine school children would find it easier to go with a stylus when they’re focused, maybe running math drills or something, but for sure in gesturing without blocking their view of the screen as much with hands and arms. A bit more density on screen resulting from stylus based touch targets would keep things from being too simple and slowing down learning.

Jason: What about the stylus as something for enhancing accessibility?

Doug Russell: I haven’t really explored styluses as an accessibility tool. I could see them being useful for people with physical/motor disabilities. Something like those big ol cosmonaut styluses would probably be well suited for people with gripping strength issues.

Dan Weeks: I’ve also met one older gentleman that has arthritis such that he can stand to hold a pen or stylus but holding his finger out to point with it grows painful over time. He loves his iPad and even uses the stylus to type with.


It seems the potential for the stylus is out there. It’s a precise tool, it enhances writing and drawing skills most of us already have, and it makes for more precise and accessible software than we can get with 44pt fingertips.

Creating a software application requiring a stylus is almost unheard of in the iOS world, where most apps are unfortunately poised for the lowest common denominator of the mass market. Instead, I see the stylus as an opportunity for a new breed of specialized, powerful applications. As it stands today, I see the stylus as almost entirely overlooked.

Yuck!

All the Things You Love to Do

In yesterday’s Apple Keynote, Phil Schiller used almost the exact same phrase while talking about the new Retina MacBook Pros (26:40):

For all the things you love to do: Reading your mail, surfing the Web, doing productivity, and even watching movies that you’ve downloaded from iTunes.

And about the iPad (65:15):

The ability to hold the internet in your hands, as you surf the web, do email, and make FaceTime calls.

It gave me pause to think, “If my computers can already do this, why then should I be interested in these new ones?” Surf the web, read email? My computers do this just fine.

Although Macs, iOS devices, and computers in general are capable of many forms of software, people seem resigned to the fact this sort of thing, “surf the web, check email, etc” is what computers are for, and I think people are resigned to this fact because it’s the message companies keep pushing our way.

The way Apple talks about it, it almost seems like it’s your duty, some kind of chore, “Well, I need a computer because I need to do those emails, and surf those websites,” instead of an enabling technology to help you think in clearer or more powerful ways. “You’re supposed to do these menial tasks,” they’re telling me, “and you’re supposed to do it on this computer.”

This would be like seeing a car commercial where the narrator said “With this year’s better fuel economy, you can go to all the places you love, like your office and your favourite restaurants.” I may be being a little pedestrian here, but it seems to me like car commercials are often focussing on new places the car will take you to. “You’re supposed to adventure,” they’re telling me, “and you’re supposed to do it in this car.”

What worries me isn’t Apple’s marketing. Apple is trying to sell computers and it does a very good job at it, with handsome returns. What worries me is people believing “computers are for surfing the web, checking email, writing Word documents” and nothing else. What worries me is computers becoming solely commodities, with software following suit.

How do you do something meaningful with software when the world is content to treat it as they would a jug of milk?

Stock and Flow  

Related to FOWMLOLAP, Robin Sloan:

But I’m not saying you should ignore flow! No: this is no time to hole up and work in isolation, emerging after long months or years with your perfectly-polished opus. Everybody will go: huh? Who are you? And even if they don’t—even if your exquisitely-carved marble statue of Boba Fett is the talk of the tumblrs for two whole days—if you don’t have flow to plug your new fans into, you’re suffering a huge (here it is!) opportunity cost. You’ll have to find them all again next time you emerge from your cave.

(Via Ryan McCuaig)

Dismantling “iOS Developer Discomfort”  

Ash Furrow, on some “new” developer techniques:

When I first saw some of the approaches, which I’ll outline below, I was uncomfortable. Things didn’t feel natural. The abstractions that I was so used to working in were useless to me in the new world.

Smart people like these don’t often propose new solutions to solved problems just because it’s fun. They propose new solutions when they have better solutions. Let’s take a look.

Hold on to your butts, here comes a good ol’ fashioned cross-examination.

On Declarative Programming,

Our first example is declarative programming. I’ve noticed that some experienced developers tend to shy away from mutating instance state and instead rely on immutable objects.

Declarative programming abstracts how computers achieve goals and instead focuses on what needs to be done, instead. It’s the difference between using a for loop and an enumerate function.
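
(For concreteness, the difference Ash means looks roughly like this in everyday Foundation code; a trivial sketch:)

    #import <Foundation/Foundation.h>

    static void SOLLoopVersusEnumerate(void) {
        NSArray *names = @[@"Ash", @"Guy", @"Brent"];

        // The imperative version spells out *how*: an index, a bound, a mutation.
        NSMutableArray *shouted = [NSMutableArray array];
        for (NSUInteger i = 0; i < names.count; i++) {
            [shouted addObject:[names[i] uppercaseString]];
        }

        // The enumeration version says *what* to do with each element and leaves
        // the mechanics to the collection.
        [names enumerateObjectsUsingBlock:^(NSString *name, NSUInteger idx, BOOL *stop) {
            NSLog(@"%@", [name uppercaseString]);
        }];
    }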

On the surface, I agree with this, but for different reasons. The primary reason why this is important, and why functional programming languages have similar or better advantages here, is that they eliminate state. How you do that, whether by being declarative or by being functional, is in some ways irrelevant. The problem is, our current programming languages do a terrible job of representing logic and instead leakily abstract the computer (hey, objects are blocks of memory that seem an awful lot like partitions of RAM…), thus the state of an application becomes a hazard.

But there are also times when eliminating state isn’t an option, and in those cases Declarative languages fall short, too. State is sometimes requisite when dealing with systems, and in that case state should be shown. It’s a failure of the development environment to have hidden state. As Bret Victor says in his Learnable Programming, programming environments must either show or eliminate state. A language or environment which does neither is not going to make programming significantly better, and therefore will remain in obscurity.

Objective-C is an abomination (I love it anyway).

I agree. It’s an outdated aberration. We need something that’s much better. Not just a sugar-coating like Go was to C++ (this was completely intentional, mind you, but if we’re going to get a new programming language, it damn well better be leaps and bounds ahead of what we’ve got now).

It’s a message-oriented language masquerading as an object-oriented language built on top of C, an imperative language.

Actually, originally, the concepts are supposed to be inseparable. Alan Kay, who coined the term “Object Oriented Programming”, used it to describe a system composed of “smaller computers” whose main strength was its components communicating through messages. Classes and Objects just sort of arose from those. Messages are a tremendously misunderstood concept among Object Oriented Programmers. I’d highly suggest everyone do their reading.

It was hard to get into declarative programming because it stripped away everything I was used to and left me with tools I was unfamiliar with. My coding efficiency plummeted in the short term. It seemed like prematurely optimizing my code. It was uncomfortable.

I don’t think it makes me uncomfortable because it’s unfamiliar, but because things like Reactive Cocoa, grafted on to Objective C as they are, create completely bastardized codebases. They fight the tools and conventions every Cocoa developer knows, and naturally have a hard time existing in an otherwise stateful environment.

It’s inherent in what Reactive Cocoa is trying to accomplish, and would be inherent in anyone trying to graft on a new programming concept to the world of Cocoa. What we need is not a framework poised atop a rocky foundation, but a new foundation altogether. Reactive Cocoa tries to solve the problem of unmanageable code in entirely the wrong way. It’s the equivalent of saying “there are too many classes in this file, we should create a better search tool!” (relatedly, I think working with textual source code files in the first place severely constrains software development. But more on that in some future work I’ll publish soon).

Dot-syntax is a message-passing syntax in Objective-C that turns obj.property into [obj property] under the hood. The compiler doesn’t care either way, but people get in flame wars over the difference.

Dot-syntax isn’t a message-passing mechanism; it’s just another syntax for calling messages. It’s a subtle but important difference.

In the middle of the spectrum, which I like, is the use of dot-syntax for idempotent values. That means things like UIApplication.sharedApplication or array.count but not array.removeLastObject.

The logic is noble, but I think it’s still flawed. I think methods should be treated like methods and properties like properties: semantically they represent two different aspects of an object, dot-syntax was designed specifically for properties, Apple developers advise against using dots for plain method calls, and it just makes those methods harder to search for. Not only that, but dots suggest early binding of methods to objects, which goes against the late-binding principles of the original Objective C design.
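
To illustrate the convention I’m arguing for, here’s a small, self-contained sketch (the variable names are just for illustration):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSArray *letters = @[@"a", @"b", @"c"];

        // Both lines compile to the very same message send, [letters count]:
        NSUInteger viaDot      = letters.count;
        NSUInteger viaBrackets = [letters count];
        NSLog(@"%lu %lu", (unsigned long)viaDot, (unsigned long)viaBrackets);

        // The convention argued for above: dots for state (properties),
        // brackets for behaviour (messages that do work).
        NSMutableArray *names = [letters mutableCopy];
        [names removeLastObject];            // behaviour, so brackets
        NSUInteger remaining = names.count;  // state, so a dot
        NSLog(@"%lu", (unsigned long)remaining);
    }
    return 0;
}
```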

It’s also hard because Xcode’s autocomplete will not help you with methods like UIColor.clearColor when using the dot-syntax. Boo.

When even the tools won’t help you, it’s almost always a sign!

[Autolayout] promised to be a magical land where programming layouts was easy and springs and struts (auto resizing masks) were a thing of the past. The reality was that it introduced problems of its own, and Autolayout’s interface was non-intuitive for many developers.

This is another place where there’s an obvious flaw in the way things work in our development environment. Constraint-based systems are notoriously difficult because they normally require all variables to be solved simultaneously, leaving no room for flexibility. This flies in the face of the step-by-step execution a programmer is used to, and is thus hard to reconcile. When combined with an interface which invisibly presents (i.e., does not present) these constraints, developers are left with nothing short of a clusterfuck.

I’ve wanted to believe, and I’ve abandoned Autolayout every year since its introduction because of this. While it does improve year over year, I believe there are fundamental problems it won’t be able to overcome while sticking with the same paradigms we’ve got today.

Springs and struts are familiar, while Autolayout is new and uncomfortable. Doing simple things like vertically centring a view within its superview is difficult in Autolayout because it abstracts so much away.

It’s not that Autolayout is new and unfamiliar, but that it adds more elements to a layout while keeping them hidden. It doesn’t abstract too much away, but it makes it impossible to deal with what is presented.
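
For reference, here’s roughly what the vertically-centring case looks like when the constraint is written out in code (a minimal sketch; the CentreVertically helper name is mine, and a real layout would also need horizontal and size constraints). The constraint is an ordinary, inspectable object, which is why I say the trouble is hidden elements rather than over-abstraction:

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper: vertically centre `view` inside `superview` in code.
static void CentreVertically(UIView *view, UIView *superview) {
    view.translatesAutoresizingMaskIntoConstraints = NO;
    [superview addSubview:view];
    [superview addConstraint:
        [NSLayoutConstraint constraintWithItem:view
                                     attribute:NSLayoutAttributeCenterY
                                     relatedBy:NSLayoutRelationEqual
                                        toItem:superview
                                     attribute:NSLayoutAttributeCenterY
                                    multiplier:1.0
                                      constant:0.0]];
}
```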

And finally,

Change is hard. Maybe the iOS community will resist declarative programming, as has the web development community for two decades.

Except, of course, for the Hypertext Markup Language and Cascading Style Sheets (and the languages which generate these aren’t solely web development languages any more than Objective C is). HTML is perhaps the most successful declarative system we’ve ever invented.

But actually finally,

We’re in a golden age of tools

I hope at this point it’s clear that I disagree with this. Some may say that today, at least, we’ve got the best tools we’ve ever had, but to them I’d suggest looking at any of the tools developed at Xerox PARC in the 70s, 80s, or 90s. And as far as today’s or yesterday’s tools go, I think we’re far from a land of opulence. But there is hope. Today we have at our disposal exceptionally fast, interconnected machines, far outpacing anything previously available. We have networks dedicated to teaching all kinds of toolmaking, from programming to information graphics design to language design.

We’re in a golden age of opportunity; we just need to take a chance on it.

More Thoughts  

Relatedly, me, about a year ago:

I’ve been ruminating over this in my head for a while now. Why do I have a compulsion and anxiety to read it all? Why do I have to know? Why does it feel so important when I know it isn’t?

Fear Of Wasting My Life Online Looking At Pictures

FOWMLOLAP, for short.

You see, there’s this thing called FOMO: the Fear of Missing Out (which I’ve previously talked about on this site). It’s the empty feeling you get when you see your peers’ activity online, particularly on social networks, and see all the things you’re not doing.

I like to think I am, to a degree at least, somewhat immune to the FOMO. I’m not totally unaffected by it, but I feel like I’m antisocial enough so that at least it doesn’t bother me too much to see others’ activities online.

What does bother me, more and more, is the fear that I’m wasting my life looking at pictures online. When I get to the heart of things, so much of my non-working life online is spent looking at pictures. There’s Instagram and Flickr and Tumblrs. There’s the stuttery sites like FFFFound (design porn) and Dribbble (design masturbation). Then there’s Twitter, which has its own share of photos or links to click (most of the links have lots of photos). There’s Macrumors and the Verge, and there’s my RSS reader, too. Although some of those sources have “news”, there’s almost always too much for me to read in a day, so I skip most of it. Back to the pictures.

These are my sites. When I’ve finished with them, I’ll start channel-changing back through the first ones all over again.

I’m not saying these websites are all bad or even any bad. I’m not saying there aren’t good aspects to them. I’m not saying everyone who uses them is wasting their lives.

I am saying, however, this is what I see myself doing. I have no fear of missing out because so often, WMLOLAP seems to be exactly what I want to do. And that’s why it gives me the Fear, because in reality, I so super very much do not want to do that.

This is my channel-changing-challenge.

Interview Between Doug Engelbart & Belinda Barnet, 10th Nov, 1999  

Engelbart:

[…] So for instance, in our environment, we would never have thought of having a separate browser and editor. Just everyone would have laughed, because whenever you’re working on trying to edit and develop concepts you want to be moving around very flexibly. So the very first thing is get those integrated.

Then [in NLS] we had it that every object in the document was intrinsically addressable, right from the word go. It didn’t matter what date a document’s development was, you could give somebody a link right into anything, so you could actually have things that point right to a character or a word or something. All that addressability in the links could also be used to pick the objects you’re going to operate on when you’re editing. So that just flowed. With the multiple windows we had from 1970, you could start editing or copying between files that weren’t even on your windows.

Also we believed in multiple classes of user interface. You need to think about how big a set of functional capabilities you want to provide for a given user. And then what kind of interface do you want the user to see? Well, since the Macintosh everyone has been so conditioned to that kind of WIMP environment, and I rejected that, way back in the late 60s. Menus and things take so long to go execute, and besides our vocabulary grew and grew.

And the command-recognition [in the Augment system]. As soon as you type a few characters it recognises, it only takes one or two characters for each term in there and it knows that’s what’s happening.

Some Weekend Inspiration

I was moved by this bit from John Markoff’s “What the Dormouse Said”, a tale of 1960s counterculture and how it helped create the personal computer:

Getting engaged precipitated a deep crisis for Doug Engelbart. The day he proposed, he was driving to work, feeling excited, when it suddenly struck him that he really had no idea what he was going to do with the rest of his life. He stopped the car and pulled over and thought for a while.

He was dumbstruck to realize that there was nothing that he was working on that was even vaguely exciting. He liked his colleagues, and Ames was in general a good place to work, but nothing there captured his spirit.

It was December 1950, and he was twenty-five years old. By the time he arrived at work, he realized that he was on the verge of accomplishing everything that he had set out to accomplish in his life, and it embarrassed him. “My God, this is ridiculous, no goals,” he said to himself.

That night, when he went home, he began thinking systematically about finding an idea that would enable him to make a significant contribution in the world. He considered general approaches, from medicine to studying sociology or economics, but nothing resonated. Then, within an hour, he was struck in a series of connected flashes of insight by a vision of how people could cope with the challenges of complexity and urgency that faced all human endeavors. He decided that if he could create something to improve the human capability to deal with those challenges, he would have accomplished something fundamental.

In a single stroke, Engelbart experienced a complete vision of the information age. He saw himself sitting in front of a large computer screen full of different symbols. (Later, it occurred to him that the idea of the screen probably came into his mind as a result of his experience with the radar consoles he had worked on in the navy.) He would create a workstation for organizing all of the information and communications needed for any given project. In his mind, he saw streams of characters moving on the display. Although nothing of the sort existed, it seemed the engineering should be easy to do and that the machine could be harnessed with levers, knobs, or switches. It was nothing less than Vannevar Bush’s Memex, translated into the world of electronic computing.


This bit resonated with me for several reasons, one of which will become clear in the coming weeks. But the really important thing isn’t just that Engelbart recognized a dissatisfaction with his life and how to fix it. It’s not that he had a stroke of vision to invent so much of what modern personal computers would (mostly incorrectly) be based on. What’s really important is that he then went on to see his vision through.

Remember, this epiphany happened to him in 1950, and his groundbreaking “Mother of all Demos” presentation didn’t come until 1968. It might seem like something so grand had to arrive all at once (especially considering how long ago it was), but it took nearly two decades to be realized.

iPads for students halted after devices hacked  

Howard Blume:

Soon they were sending tweets, socializing on Facebook and streaming music through Pandora, they said.

L.A. Unified School District Police Chief Steven Zipperman suggested, in a confidential memo to senior staff obtained by The Times, that the district might want to delay distribution of the devices.

“I’m guessing this is just a sample of what will likely occur on other campuses once this hits Twitter, YouTube or other social media sites explaining to our students how to breach or compromise the security of these devices,” Zipperman wrote. “I want to prevent a ‘runaway train’ scenario when we may have the ability to put a hold on the roll-out.”

How dare kids enjoy technology. They’re supposed to be learning, not enjoying!

NSA Foils Much Internet Encryption  

Nicole Perlroth for my employer, the New York Times:

Many users assume — or have been assured by Internet companies — that their data is safe from prying eyes, including those of the government, and the N.S.A. wants to keep it that way. The agency treats its recent successes in deciphering protected information as among its most closely guarded secrets, restricted to those cleared for a highly classified program code-named Bullrun, according to the documents, provided by Edward J. Snowden, the former N.S.A. contractor.

Beginning in 2000, as encryption tools were gradually blanketing the Web, the N.S.A. invested billions of dollars in a clandestine campaign to preserve its ability to eavesdrop. Having lost a public battle in the 1990s to insert its own “back door” in all encryption, it set out to accomplish the same goal by stealth.

Don’t worry about it though. What can be done, right?

Jeffrey Bezos, Washington Post’s next owner, aims for a new ‘golden era’ at the newspaper  

Paul Farhi and Craig Timberg for the Washington Post:

But Bezos suggested that the current model for newspapers in the Internet era is deeply flawed: “The Post is famous for its investigative journalism,” he said. “It pours energy and investment and sweat and dollars into uncovering important stories. And then a bunch of Web sites summarize that [work] in about four minutes and readers can access that news for free. One question is, how do you make a living in that kind of environment? If you can’t, it’s difficult to put the right resources behind it. . . . Even behind a paywall [digital subscription], Web sites can summarize your work and make it available for free. From a reader point of view, the reader has to ask, ‘Why should I pay you for all that journalistic effort when I can get it for free’ from another site?”

Why indeed.

Whatever the mission, he said, The Post will have “readers at its centerpiece. I’m skeptical of any mission that has advertisers at its centerpiece. Whatever the mission is, it has news at its heart.”

There you have it. All the major newspaper companies are shrinking, but now the Washington Post has outside investment, allowing it to experiment with new models and discover its future.

If the Web is eating your business from the low end, and your competitor has newfound deep pockets, where does that leave your business?

Autoworkers of Our Generation  

Greg Baugues:

Fifty years ago, an autoworker could provide a middle-class existence for his family. Bought a house. Put kids through college. Wife stayed home. He didn’t even need a degree.

That shit’s over. Detroit just went bankrupt.

No one’s got it better than developers right now. When the most frequent complaint you hear is “I wish recruiters would stop spamming me with six-figure job offers,” life’s gotten pretty good.[…]

No profession stays on top forever… just ask your recently graduated lawyer friends.

Although the autoworkers analogy works, I think there’s a better one for current software developers: We’re like those who were capable of writing long before that ability was shared with the masses.

We have forms and means to express ourselves which are superior (in their own ways) to static writing. For instance, I can write an essay and you can read exactly the thoughts I decided you should read. But I can also write a piece of software to express those points — and other arguments as well — and you, the “reader”, get to explore my thoughts and, in a sense, ask my “thoughts” questions. This is an advantage over the plain written word.

Since we software developers can express thoughts in ways people who can “only” read and write cannot, we are quite like the privileged folk of centuries past, who could express thoughts in written word which exceeded what could be spoken.

The question is, should we milk it for what it’s worth or should we embrace it as a moral responsibility to give everybody this form of expression?

Everything You See Is Future Trash  

The article on Garbage I linked to earlier reminded me of this 2009 interview with Robin Nagle of the Department of Sanitation of New York. It’s a thought-provoking interview about our treatment of garbage, how we ignore it, and how we stigmatize it. I think bringing these things to the surface might be wise.

Garbage is generally overlooked because we create so much of it so casually and so constantly that it’s a little bit like paying attention to, I don’t know, to your spit, or something else you just don’t think about. You—we—get to take it for granted that, yeah, we’re going to create it, and, yeah, somebody’s going to take care of it, take it away. It’s also very intimate. There’s very little we do in twenty-four hours except sleeping, and not always even sleeping, when we don’t create some form of trash. Even just now, waiting for you, I pulled out a Kleenex and I blew my nose and I threw it out, in not even fifteen seconds. There’s a little intimate gesture that I don’t think about, you don’t think about, and yet there’s a remnant, there’s a piece of debris, there’s a trace.[…]

Well, it’s cognitive in that exact way: that it is quite highly visible, and constant, and invisibilized. So from the perspective of an anthropologist, or a psychologist, or someone trying to understand humanness: What is that thing? What is that mental process where we invisibilize something that’s present all the time?

The other cognitive problem is: Why have we developed, or, rather, why have we found ourselves implicated in a system that not only generates so much trash, but relies upon the accelerating production of waste for its own perpetuation? Why is that OK?

And a third cognitive problem is: Every single thing you see is future trash. Everything. So we are surrounded by ephemera, but we can’t acknowledge that, because it’s kind of scary, because I think ultimately it points to our own temporariness, to thoughts that we’re all going to die.[…]

It’s an avoidance of addressing mortality, ephemerality, the deeper cost of the way we live. We generate as much trash as we do in part because we move at a speed that requires it. I don’t have time to take care of the stuff that surrounds me every day that is disposable, like coffee cups and diapers and tea bags and things that if I slowed down and paid attention to and shepherded, husbanded, nurtured, would last a lot longer. I wouldn’t have to replace them as often as I do. But who has time for that? We keep it cognitively and physically on the edges as much as we possibly can, and when we look at it head-on, it betrays the illusion that everything is clean and fine and humming along without any kind of hidden cost. And that’s just not true.

And:

That sort of embarrassment is directed at people on the job every day on the street, driving the truck and picking up the trash.

People assume they have low IQs; people assume they’re fake mafiosi, wannabe gangsters; people assume they’re disrespectable. Unlike, say, a cop or a firefighter. And I do believe very strongly it’s the most important uniformed force on the street, because New York City couldn’t be what we are if sanitation wasn’t out there every day doing the job pretty well.

And the health problems that sanitation’s solved by being out there are very, very real, and we get to forget about them. We don’t live with dysentery and yellow fever and scarlet fever and smallpox and cholera, those horrific diseases that came through in waves. People were out of their minds with terror when these things came through. And one of the ways that the problem was solved—there were several—but one of the most important was to clean the streets. Instances of communicable and preventable diseases dropped precipitously once the streets were cleaned. Childhood diseases that didn’t need to kill children, but did. New York had the highest infant mortality rates in the world for a long time in the middle of the nineteenth century. Those rates dropped. Life expectancy rose. When we cleaned the streets! It seems so simple, but it was never well done until the 1890s, when there was this very dramatic transformation.

You should just read the interview.