Speed of Light
The Fantasy and Abuse of the Manipulable User  

Some quotes from Betsy Haibel’s must-read essay. If you make or use software, you should read it.

Deceptive linking practices – from big flashing “download now” buttons hovering above actual download links, to disguising links to advertising by making them indistinguishable from content links – may not initially seem like violations of user consent. However, consent must be informed to be meaningful – and “consent” obtained by deception is not consent.

Consent-challenging approaches offer potential competitive benefits. Deceptive links capture clicks – so the linking site gets paid. Harvesting of emails through automatic opt-in aids in marketing and lead generation. While the actual corporate gain from not allowing unsubscribes is likely minimal – users who want to opt out are generally not good conversion targets – individuals and departments with quotas to meet will cheer the artificial boost to their mailing list size.

These perceived and actual competitive advantages have led to violations of consent being codified as best practices, rendering them nigh-invisible to most tech workers. It’s understandable – it seems almost hyperbolic to characterize “unwanted email” as a moral issue. Still, challenges to boundaries are challenges to boundaries. If we treat unwanted emails, or accidentally clicked advertising links, as too small a deal to bother, then we’re asserting that we know better than our users what their boundaries are. In other words, we’re placing ourselves in the arbiter-of-boundaries role which abuse culture assigns to “society as a whole.”[…]

The industry’s widespread individual challenges to user boundaries become a collective assertion of the right to challenge – that is, to perform actions which are known to transgress people’s internally set or externally stated boundaries. The competitive advantage, perceived or actual, of boundary violation turns from an “advantage” over the competition into a requirement for keeping up with them.

Individual choices to not fall behind in the arms race of user mistreatment collectively become the deliberate and disingenuous cover story of “but everyone’s doing it.”[…]

The hacker mythos has long been driven by a narrow notion of “meritocracy.” Hacker meritocracy, like all “meritocracies,” reinscribes systems of oppression by victim-blaming those who aren’t allowed to succeed within it, or gain the skills it values. Hacker meritocracy casts non-technical skills as irrelevant, and punishes those who lack technical skills. Having “technical merit” becomes a requirement to defend oneself online. […]

It’s easy to bash Zynga and other manufacturers of cow clickers and Bejeweled clones. However, the mainstream tech industry has baked similar compulsion-generating practices into its largest platforms. There’s very little psychological difference between the positive-reinforcement rat pellet of a Candy Crush win and that of new content in one’s Facebook stream.[…]

I call on my fellow users of technology to actively resist this pervasive boundary violation. Social platforms are not fully responsive to user protest, but they do respond, and the existence of actual or potential user outcry gives ethical tech workers a lever in internal fights about user abuse.

Facebook Rooms  

Inspired by both the ethos of these early web communities and the capabilities of modern smartphones, today we’re announcing Rooms, the latest app from Facebook Creative Labs. Rooms lets you create places for the things you’re into, and invite others who are into them too.[…]

Not only are rooms dedicated to whatever you want, room creators can also control almost everything else about them. Rooms is designed to be a flexible, creative tool. You can change the text and emoji on your like button, add a cover photo and dominant colors, create custom “pinned” messages, customize member permissions, and even set whether or not people can link to your content on the web. In the future, we’ll continue to add more customizable features and ways to tweak your room. The Rooms team is committed to building tools that let you create your perfect place. Our job is to empower you.

My guess is that Rooms is more of a strategic move to attract teens who seek privacy in apps like Snapchat. Seems like a good place for a clique.

When Women Stopped Coding  

Steve Henn for NPR:

A lot of computing pioneers — the people who programmed the first digital computers — were women. And for decades, the number of women studying computer science was growing faster than the number of men. But in 1984, something changed. The percentage of women in computer science flattened, and then plunged, even as the share of women in other technical and professional fields kept rising.

What happened?

This is something Hopscotch is trying to change.

Swift for Beginners

Last week I published a little essay about Swift, imploring iOS developers to start learning Swift today:

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift.

In last week’s iOS Dev Weekly, however, Dave Verwer astutely pointed out that this advice is really aimed at experienced iOS developers:

Jason talks mainly about experienced iOS developers in this article but I believe it’s a whole different argument for those who are just getting started.

The intersection of programming languages and learning to program happens to be precisely my line of work, so I thought I’d offer some advice on Swift for beginner iOS developers.

Should you learn Swift or Objective C?

For newcomers to the iOS development platform, I reckon the number one question asked is “Should I learn with Objective C or with Swift?” Contrary to what some may say (e.g., Big Nerd Ranch), I suggest you start learning with Swift first.

The main reason for this argument is cruft: Swift doesn’t have much and Objective C has a whole bunch. Swift’s syntax is much cleaner than Objective C’s, which means a beginner won’t get bogged down with unnecessary details that would otherwise trip them up (header files, pointer syntax, and so on).

I used to teach a beginners’ iOS development course, and while most learners could grasp the core concepts easily, they were often tripped up by the implementation details of Objective C oozing out of the seams. “Do I need a semicolon here? Why do I need to copy and paste this method declaration? Why don’t ints need a pointer star? Why do strings need an @ sign?” The list goes on.

When you’re learning a new platform and a new language, you have enough of an uphill battle without having to deal with the problems of a 1980s programming language.

In place of Objective C’s header files, importing, and method declarations, Swift has just one file, where methods are declared and implemented in the same place, with no need to import files within the same module. There goes all that complexity right out the window. And in place of Objective C’s pointer syntax, Swift uses the same syntax for both reference and value types.
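
To make that concrete, here’s a minimal sketch of what that looks like in Swift (the names are made up for illustration):

    // Bear.swift: one file, one definition. No header to keep in sync,
    // no imports needed for code in the same module, no pointer syntax.
    class Bear {                  // a reference type
        var name = "Bear"

        func greeting() -> String {
            return "Hi, I'm \(name)"
        }
    }

    struct Point {                // a value type
        var x = 0.0
        var y = 0.0
    }

    // Reference and value types are declared and used with the same syntax.
    let bear = Bear()
    let hello = bear.greeting()

    var point = Point(x: 1.0, y: 2.0)
    point.x = 3.0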

Learning Xcode and Cocoa and iOS development all at once is a monumental task, but if you learn it with Swift first you’ll have a much easier time taking it all in.

Swift is ultimately a bigger language than Objective C, with features like advanced enums, generics/templates, tuples, and operator overloading. There is more Swift to learn, but Cocoa was written in Objective C and doesn’t make use of these features, so they’re not as essential for doing iOS development today. It’s likely that in the coming years Cocoa will adopt more Swift language features, so it’s still good to be familiar with them, but the fact is learning a core amount of Swift is much more straightforward than learning a core amount of Objective C.
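
For a taste of those features (none of which has a direct Objective C counterpart), here’s a small sketch; the types and names are just for illustration:

    // An enum with associated values.
    enum Download {
        case inProgress(Double)       // fraction complete
        case finished(String)         // path to the downloaded file
        case failed(Int)              // error code
    }

    // A generic function that returns a tuple.
    func firstAndLast<T>(_ items: [T]) -> (first: T, last: T)? {
        guard let first = items.first, let last = items.last else { return nil }
        return (first, last)
    }

    // Operator overloading: + for a custom vector type.
    struct Vector {
        var dx = 0.0
        var dy = 0.0
    }

    func + (a: Vector, b: Vector) -> Vector {
        return Vector(dx: a.dx + b.dx, dy: a.dy + b.dy)
    }

    let sum = Vector(dx: 1, dy: 2) + Vector(dx: 3, dy: 4)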

A Note about Learning Programming with Swift

I should point out I’m not necessarily advocating learning Swift as your first programming language; rather, I’m suggesting that if you’re a developer who’s new to iOS development, you should start with Swift.

If you’re new to programming, there are many better languages for learning, like Lisp, Logo, or Ruby, to name just a few. You may very well be able to cut your teeth learning programming with Swift, but it’s not designed as a learning language, and its mental model is of the “you are a programmer managing computer resources” kind.

Learning Objective C

You should start out learning iOS development with Swift, but once you become comfortable, you should learn Objective C too.

Objective C has been the programming language for iOS since its inception, so there’s lots of it out there in the real world, including books, blog posts, and other projects and frameworks. It’s important to know how to read and write Objective C, but the good news is once you’ve become decent with Swift, programming with Objective C isn’t much of a stretch.

Although their syntaxes differ in some superficial ways, the kind of code you write is largely the same between the two. -viewDidLoad and viewDidLoad() may be implemented in different syntaxes, but what you’re trying to accomplish is basically the same in either case.
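
For example, here’s the Swift spelling of that familiar view controller override (ProfileViewController is just an illustrative name); apart from syntax, it plays exactly the same role as Objective C’s -viewDidLoad:

    import UIKit

    class ProfileViewController: UIViewController {
        // The same lifecycle method you'd write as -viewDidLoad in Objective C.
        override func viewDidLoad() {
            super.viewDidLoad()
            title = "Profile"
        }
    }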

The difficult part about learning Objective C after learning Swift, then, is not learning Cocoa and its concepts but the syntactic salt mentioned earlier that comes with the language. Because you already know a bit about view controllers and gesture recognizers, you’ll have a much easier time figuring out the oddities of a less modern syntax than you would have if you tried to learn them both at the same time. It’s much easier to adapt this way.

Learn Swift

Perhaps the biggest endorsement for learning Swift comes from Apple:

Swift is a successor to the C and Objective-C languages.

It doesn’t get much clearer than that.

Patterns to Help You Destroy Massive View Controller  

Soroush Khanlou:

View controllers become gargantuan because they’re doing too many things. Keyboard management, user input, data transformation, view allocation — which of these is really the purview of the view controller? Which should be delegated to other objects? In this post, we’ll explore isolating each of these responsibilities into its own object. This will help us sequester bits of complex code, and make our code more readable.
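
To give a flavour of the pattern he describes, here’s a minimal sketch (not Soroush’s code, and the names are made up): one responsibility, data transformation, moves out of the view controller and into its own object.

    import UIKit

    struct User {
        let firstName: String
        let lastName: String
    }

    // One responsibility, one object: formatting a User for display
    // lives outside the view controller.
    struct UserViewModel {
        let user: User

        var displayName: String {
            return "\(user.firstName) \(user.lastName)"
        }
    }

    class UserViewController: UIViewController {
        var viewModel: UserViewModel?

        override func viewDidLoad() {
            super.viewDidLoad()
            // The view controller only coordinates; the transformation
            // happens in the view model.
            title = viewModel?.displayName
        }
    }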

Hopscotch and Mental Models

On May 8, 2014, after many long months of work, we finally shipped Hopscotch 2.0, a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible for kids who may have otherwise struggled. Early on, I pushed for a rethinking of the mental model we wanted to present to our programmers so they could better grasp the concept. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.

What follows is an examination of mental models, and the models used in various versions of Hopscotch.

Mental models

The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.

A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.

The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).

There are varying degrees of success in a program’s mental model, usually correlating with the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, whereas a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.

You may know your application’s mental model very intimately because you created it, but most people will not be so fortunate when they start out. The easiest way to understand an application’s mental model is by having smaller leaps to make (for example, most iPhone apps are far more alike than they are different), so people don’t have to tread too far into the unknown.

One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in terms of a zooming interface, like a Powers of Ten but for information (or “thought vectors in concept space”, to quote Engelbart) (see Jef Raskin’s The Humane Interface for a thorough discussion on ZUIs).

These spatial metaphors can be thought of as gestures in the Raskinian sense of the term (defined as “…an action that can be done automatically by the body as soon as the brain ‘gives the command’. So Cmd+Z is a gesture, as is typing the word ‘brain’”), where the digital space, rather than the body, provides a common, habitual environment for performing actions. There is no Raskinian mode switch because the person already has familiarity with the space.

Hopscotch 1.x

Following in the footsteps of the Logo turtle, Hopscotch characters are programmed with the same egocentric mental model (here’s a video of programming one character in Hopscotch 1.0). If I want Bear to move in a circle, I first ponder how I would move in a circle and translate this to Hopscotch blocks. If this were all there was to the story, this mental model would be pretty sufficient. But Hopscotch projects can have multiple programmed characters executing at the same time. Logo’s model works well because it’s clear there is one turtle to one programmer, but when there are multiple characters to take care of, it’s conceptually more of a stretch to program them all this way.

Hopscotch 1.0 was split diametrically between the drag-and-drop code blocks for the various characters in your project and the Stage, the area where your program executes and your characters wiggle their butts off, as directed. This division is quite similar to the “write code / execute program” model most programming environments provide developers, but that doesn’t mean it’s appropriate (for children or professionals). Though the characters were tangible (tappable!) on the Stage, they remained abstract in the code editor. Simply put, there wasn’t a strong connection between your code and your program. This discord made it very difficult for beginners to connect their code to their characters.

Hopscotch 2.x

In the redesign, we unified the Stage-as-player with the Stage-as-editor. In the original version, you programmed your characters by switching between tabs of code, but in the redesign you see your characters as they appear on the Stage. Gone are the two distinct modes; instead you just have the Stage, which you can edit. This means you no longer position characters with a small graph, but instead pick up your characters and place them directly.

The code blocks, which used to live in the tabs, now live inside the characters themselves. This gives a stronger mental model of “My character knows how to draw a circle because I programmed her directly.” When you tap a character, you see a list of “Rules” appear as thought bubbles beside the character. Rules are mini-programs for each character that are played for different events (e.g., “When the iPad is tilted, make Bear dance”), and you edit their code by tapping into them. This attaches the abstract concept of “code” to the very spatial and tangible characters you’re trying to program, and we found beginners could grasp it much more quickly than the original model.

Along the way, we added “little things” like custom functions and a mini code preview that highlights code blocks as it executes, to let programmers quickly see the results of their changes for the character they’re programming. These aren’t additions to the mental model per se, but they do help close the gap between abstract code and your characters following your program.

Our redesign got a lot of love. We were featured by Apple and Recode, and even the Grubers gave us some effusive praise. It wasn’t perfect, and we’ve continued to work on the design in the intervening months, but it’s been a big improvement.

A mental framework

A strong mental model benefits the people using your software because it helps you and them meet each other halfway. But mental models also help you as a designer to understand the messages you send through your application. By rethinking our mental model for Hopscotch, we dramatically improved both how we build the program and how people use it, and it’s given us a framework to think in for the future. As you build or use applications, be aware of the signals you send and receive, and it will help you understand the software better.

Swift

After playing with Swift in my spare time for most of the summer, and now using it full time at Hopscotch for about a month, I thought I’d share some of my thoughts on the language.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift. I would suggest switching to it full time for all new iOS work (I wouldn’t recommend going back and re-writing your old Objective C code, but maybe replace bits and pieces of it as you see fit).

(If you’re a beginner to iOS development, see my thoughts on Swift for Beginners)

Idiomatic Swift

One reason I hear for developers wanting to hold off is “Swift is so new there aren’t really accepted idioms or best practices yet, so I’m going to wait a year or two for those to emerge.” That’s a fair point, but I’d argue it’s better for you to jump in and invent them now instead of waiting for somebody else to do it.

I’m pretty sure that when I look back on my Swift code a year from now I’ll cringe with embarrassment, but I’d rather be figuring it all out now, helping to establish what Good Swift looks like, than just see what gets handed down. The conventions around the language are malleable right now because nothing has been established as Good Swift yet. It’s going to be a lot harder to influence Good Swift a year or two from now.

And the sooner you become productive in Swift, the sooner you’ll find areas where it can be improved. Like the young Swift conventions, Swift itself is a young language: the earlier in its life you file Radars and suggest improvements to the language, the more likely those improvements will be made. Three years from now, Swift the language is going to be a lot less likely to change than it is today. Your early Radars will have enormous effects on Swift in the future.

Swift Learning Curve

Another reason I hear for not wanting to learn Swift today is not wanting to take a major productivity hit while learning the language. In my experience, if you’re an experienced iOS developer you’ll be up to speed with Swift in a week or two, and then you’ll get all its benefits (even just not having header files, or not having to import files all over the place, makes programming in Swift so much nicer than Objective C that you might not want to go back).

In that week or two when you’re a little slower at programming than you are with Objective C, you’ll still be pretty productive anyway. You certainly won’t become an expert in Swift right away (because nobody except maybe Chris Lattner is one yet anyway!), but you’ll be writing arguably cleaner code, and you might even have some fun doing it.

I’ve been keeping a list of tips I’ve picked up while learning Swift so far, like how the compatibility with Objective C works, and how to do certain things (like weak capture lists for closures). If you have any suggestions of your own, feel free to send me a Pull Request.
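
As a sample of the kind of tip on that list, here’s what a weak capture list looks like (a minimal sketch with made-up names):

    class ImageLoader {
        var completion: (() -> Void)?

        func load() {
            // ...fetch the image, then:
            completion?()
        }
    }

    class PhotoViewController {
        let loader = ImageLoader()

        func start() {
            // [weak self] keeps the closure from retaining self,
            // avoiding a retain cycle through loader.completion.
            loader.completion = { [weak self] in
                self?.refresh()
            }
            loader.load()
        }

        func refresh() {
            // update the UI
        }
    }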

Grab Bag of Various Caveats

  • I don’t fully understand initializers in Swift yet, but I kind of hate them. I get the theory behind them, that everything strictly must be initialized, but in practice this super sucks. This solves a problem I don’t think anybody really had.

  • Compile times for large projects suck. They’re really slow because (I think) any change in any Swift file causes all the Swift files to be recompiled on build. My hunch is that breaking your project up into smaller modules (aka frameworks) would relieve the slow build times, but I haven’t tried this yet.

  • The Swift debugger feels pretty non-functional most of the time. I’m glad we have Playgrounds to test out algorithms and such, but unfortunately I’ve had to mainly resort to pooping out println()s of debug values.

  • What the hell is with the name println()? Would it have killed them to actually spell out the word printLine()? Do the language designers know about autocomplete?

  • The “implicitly unwrapped optional operator” (!) should really be called either the “subversion operator” or the “crash operator.” The major drumbeat we’ve been told about Swift is that it’s supposed to not allow you to do unsafe things, hence (among other things) we have Optionals. By force-unwrapping an optional, we’re telling the compiler “I know better than you right now, so I’m just going to go ahead and subvert the rules and pretend this thing isn’t nil, because I know it’s not.” When you do this, you’re either going to be correct, in which case Swift was wrong for thinking a value was maybe going to be nil when it isn’t; or you’re going to be incorrect and cause your application to crash. A small sketch of the latter case follows this list.
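
Here’s the tiny sketch promised above, with made-up values, showing both the safe unwrap and the crash:

    // An optional: the type system makes you handle the "no value" case.
    let ageText = "forty-two"
    let age: Int? = Int(ageText)        // nil, because the string isn't a number

    // The safe way: the body only runs when there really is a value.
    if let age = age {
        let nextAge = age + 1           // never reached here, since age is nil
    }

    // The ! way: "trust me, it's not nil." Here it is nil,
    // so uncommenting this line crashes at runtime.
    // let crash = age! + 1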

Objective Next

Earlier this year, before Swift was announced, I published an essay, Objective Next, which discussed replacing Objective C, both in terms of the language itself and, more importantly, what we should thirst for in a successor:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

In short, a replacement for Objective C that just offers a slimmed-down syntax isn’t really a victory at all. It’s simply a new-old thing, a new way to accomplish the exact same kind of software. In a followup essay, I wrote:

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) into thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, when instead we should consider new kinds of things it might enable us to do. […]

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

This new-old thing, unfortunately, is exactly what we got with Swift. Swift is a better way to create the exact same kind of software we’ve been making with Objective C. It may crash a little less, but it’s still going to work exactly the same way. And in fact, because Swift is far more static than Objective C, we might even be a little more limited in terms of what we can do. For example, as quoted in a recent link:

The quote in this post’s title [“It’s a Coup”], from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

I still think Swift is a great language and you should use it, but I do find it lamentably not forward-thinking. The intentional lack of a garbage collector really sealed the deal for me. Swift isn’t a new language; it’s C++++. I am glad to get to program in it, and I think the more people using it today, the better it will be tomorrow.

Belief

I’ve had a thought about belief stuck in my head for a few months that I think might be useful to share. It goes something like this:

A belief about something is scaffolding we should use until we’ve learned more truths about that something.

First, I should point out I don’t think this statement is necessarily entirely true (though it could be), but I do think it’s a useful starting point for a discussion. Second, I also don’t think this view on belief is widely practiced, but I do think it would make for more productive use of beliefs themselves.

We humans tend to be a very belief-based bunch. There are the obvious beliefs like religion and other similar deifications (“What would our forefathers think?”) but we hold strong beliefs all the time without even realizing it.

The public education systems in North America (as I experienced firsthand in Canada and as I’ve read about in America) are based on students believing and internalizing a finite set of “truths” (this is known as a curriculum) and taking precisely those beliefs for granted.

Science presents perhaps the best evidence that we’re largely a belief-based species, as science exists to seek truths our beliefs are not adapted to explaining. Before the invention of science, we relied on our beliefs to make sense of the world as best we could, but beliefs painted a blurry, monochromatic picture at best. Science is hard because it has to be hard (its job is to adapt parts of the universe we can’t intuit into something we can place in our concept of reality), but it does a far better job of explaining reality than our beliefs do.

A friend of mine recently told me, “I have beliefs about the world just like everybody else…I just don’t trust them, is all.” I think that’s a productive way to think about beliefs. It would probably be impossible to rid the world of belief, but a better approach is to acknowledge and understand belief as a useful, temporary tool. We should teach people to think about belief as a useful means to an end, a support system, until more is learned about something. Most importantly, we should teach that beliefs should have a shelf life, and not be permanently trusted.

The Future Programming Manifesto  

Jonathan Edwards:

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

It’s a Coup  

Michael Tsai:

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

Questions!

Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist on not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people Javascript will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?

Bicycle

The bicycle is a surprisingly versatile metaphor and has been on my mind lately. Here are all the uses I could think of for the bicycle as a metaphor when talking about computing.

Perhaps the most famous use is by Steve Jobs, who apparently wanted to rename the Macintosh to “Bicycle”. Steve explains why in this video:

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

On the other end of the metaphor spectrum, we have Doug Engelbart, who believed powerful tools required powerful training for users to realize their full potential, but that the world is more satisfied with dumbed-down tools:

[H]ow do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain. It’s still the orientation of automating what you used to do instead of moving to a whole new domain in which you are obviously going to learn quite a few new skills.

And again from Belinda Barnet’s Memory Machines:

[Engelbart]: ‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options.’

This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult; there is resistance to learning new techniques, especially if they require changes to the human system. But the extra mobility we could gain from particular technical objects and techniques makes it worthwhile.

Finally, perhaps my two favourite analogies come from Alan Kay. Much like Engelbart, Alan uses the bicycle as a metaphor for learning:

I think that if somebody invented a bicycle now, they couldn’t get anybody to buy it because it would take more than five minutes to learn, and that is really pathetic.

But Alan has a more positive metaphor for the bicycle (3:50), which gives me some hope:

The great thing about a bike is that it doesn’t wither your physical attributes. It takes everything you’ve got, and it amplifies that! Whereas an automobile puts you in a position where you have to decide to exercise. We’re bad at that because nature never required us to have to decide to exercise. […]

So the idea was to try to make an amplifier, not a prosthetic. Put a prosthetic on a healthy limb and it withers.

Humans Need Not Apply  

Here’s a light topic for the weekend:

This video combines two thoughts to reach an alarming conclusion: “Technology gets better, cheaper, and faster at a rate biology can’t match” + “Economics always wins” = “Automation is inevitable.”

This pairs well with the book I’m currently reading, Nick Bostrom’s Superintelligence, and there’s an interesting discussion on reddit, too.

It’s important to remember that even if this speculation is true and humans in the future are largely unemployable, there are other things for a human to do than just work.

Documentation  

Dr. Drang:

There seems to be belief among software developers nowadays that providing instructions indicates a failure of design. It isn’t. Providing instructions is a recognition that your users have different backgrounds and different ways of thinking. A feature that’s immediately obvious to User A may be puzzling to User B, and not because User B is an idiot.

You may not believe this, but when the Macintosh first came out everything about the user interface had to be explained.

Agreed. Of course you have to have a properly labeled interface, but that doesn’t mean you can’t have more powerful features explained in documentation. The idea that everything should be “intuitive” is highly toxic to creating powerful software.

Jef Raskin on “Intuitive Interfaces”  

Jef Raskin:

My subject was an intelligent, computer-literate, university-trained teacher visiting from Finland who had not seen a mouse or any advertising or literature about it. With the program running, I pointed to the mouse, said it was “a mouse”, and that one used it to operate the program. Her first act was to lift the mouse and move it about in the air. She discovered the ball on the bottom, held the mouse upside down, and proceeded to turn the ball. However, in this position the ball is not riding on the position pick-offs and it does nothing. After shaking it, and making a number of other attempts at finding a way to use it, she gave up and asked me how it worked. She had never seen anything where you moved the whole object rather than some part of it (like the joysticks she had previously used with computers): it was not intuitive. She also did not intuit that the large raised area on top was a button.

But once I pointed out that the cursor moved when the mouse was moved on the desk’s surface and that the raised area on top was a pressable button, she could immediately use the mouse without another word. The directional mapping of the mouse was “intuitive” because in this regard it operated just like joysticks (to say nothing of pencils) with which she was familiar.

From this and other observations, and a reluctance to accept paranormal claims without repeatable demonstrations thereof, it is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

And

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Nobody knew how to use an iPhone before they saw someone else do it. There’s nothing wrong with more powerful software that requires a user to learn something.

The Wealth of Applications  

Graham Lee:

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

full-stack developer

Oh, this argument isn’t going the way I want. I was kindof hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

On market demand:

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, and they don’t know what we’ll give them and whether it’ll meet their demand, and they don’t know even if it does whether it’ll be better or not. This comic strip demonstrates this situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

I think you can answer this question if you frame most modern software as “entertainment” (or at least, Apps are Websites). It’s certainly not the case that all software is entertainment, but perhaps for the vast majority of people, software as they know it is much closer to movies and television than it is to reference works or mental tools. The only difference is that software has perhaps completed the ultimate wet dream of the entertainment market: Pop Software doesn’t even really have personalities the way music or TV does; the personality is solely that of the brand.

Off and On

I started working on a side project in January 2014, and like many of my side projects over the years, after an initial few months of vigorous work, progress over the last little while has been mostly off and on.

The typical list of explanations applies: work gets in the way (work has been perpetual crunch mode for months now), the project has reached a big enough size that it’s hard to make changes (I’m on an unfamiliar platform), and I’m stuck at a particularly difficult problem (I saved the best for last!).

Since the summer of work on this project has been more or less fruitless, I’m taking a different approach going forward, one I’ve used with some success in the past. It comes down to three main things:

  1. Limit the work to one hour per day, usually in the morning. This causes me to get at least something done every day, even if it’s just small or infrastructure work. I’ve found limiting myself to a small amount of time (two hours works well too) also forces me to not procrastinate or get distracted while I’m working. The hour of side project becomes precious and not something to waste.

  2. Stop working when you’re in the middle of something so you have somewhere to ramp up from the next time you start (I’m pretty sure this one is cribbed directly from Ernest Hemingway).

  3. Keep a diary for your work. I do this with most of my projects by just updating a text file every day after I’m finished working with what my thoughts were for the day. I usually write about what I worked on and what I plan on working on for the next day. This complements step 2 because it lets me see where I left off and what I was planning on doing. It also helps bring any subconscious thoughts about the project into the front of my brain. I’ll usually spend the rest of the day thinking about it, and I’ll be eager to get started again the next day (which helps fuel step 1, because I have lots of ideas and want to stay focused on them — it forces me to work better to get them done).

That, and I’ve set a release date for myself, which should hopefully keep me focused, too.

Here goes.

Transliterature, a Humanist Design  

Ted Nelson:

You have been taught to use Microsoft Word and the World Wide Web as if they were some sort of reality dictated by the universe, immutable “technology” requiring submission and obedience.

But technology, here as elsewhere, masks an ocean of possibilities frozen into a few systems of convention.

Inside the software, it’s all completely arbitrary. Such “technologies” as Email, Microsoft Windows and the World Wide Web were designed by people who thought those things were exactly what we needed. So-called “ICTs” – “Information and Communication Technologies,” like these – did not drop from the skies or the brow of Zeus. Pay attention to the man behind the curtain! Today’s electronic documents were explicitly designed according to technical traditions and tekkie mindset. People, not computers, are forcing hierarchy on us, and perhaps other properties you may not want.

Things could be very different.

JUMP Math: Multiplying Potential (PDF)  

John Mighton:

Research in cognitive science suggests that, while it is important to teach to the strengths of the brain (by allowing students to explore and discover concepts on their own), it is also important to take account of the weaknesses of the brain. Our brains are easily overwhelmed by too much new information, we have limited working memories, we need practice to consolidate skills and concepts, and we learn bigger concepts by first mastering smaller component concepts and skills.

Teachers are often criticized for low test scores and failing schools, but I believe that they are not primarily to blame for these problems. For decades teachers have been required to use textbooks and teaching materials that have not been evaluated in rigorous studies. As well, they have been encouraged to follow many practices that cognitive scientists have now shown are counterproductive. For example, teachers will often select textbooks that are dense with illustrations or concrete materials that have appealing features because they think these materials will make math more relevant or interesting to students. But psychologists such as Jennifer Kaminski have shown that the extraneous information and details in these teaching tools can actually impede learning.

(via @mayli)

Idle Creativity  

Andy Matuschak (in 2009):

When work piles up, my brain doesn’t have any idle cycles. It jumps directly from one task to another, so there’s no background processing. No creativity! And it feels like all the color and life has been sucked out of the world.

I don’t mind being stressed or doing lots of work or losing sleep, but I’ve been noticing that I’m a boring person when it happens!

The Colour and the Shape: Custom Fonts in System UI  

Dave Wiskus:

When the user is in your app, you own the screen.

…Except for the status bar — that’s Helvetica Neue. And share sheets. And Alerts. And in action sheets. Oh, and in the swipe-the-cell UI in iOS 8. In fact any stock UI with text baked in is pretty much going to use Helvetica Neue in red, black, and blue. Hope you like it.

Maybe this is about consistency of experience. Perhaps Apple thinks that people with bad taste will use an unreadable custom font in a UIAlert and confuse users.

I agree with Dave that the lack of total control is vexing, but I think that’s because with these system features (alert views and the status bar), Apple wants us to treat them more or less like hardware. They’re immutable; they “come from the OS,” as if they’re appearing on Official iOS Letterhead paper.

This is why I think Apple doesn’t want us customizing these aspects of iOS. They want to keep the “official” bits as untampered with as possible.

Apps are Websites

The Apple developer community is atwitter this week about independent developers and whether or not they can earn a good living working independently on the Mac and/or iOS platforms. It’s a great discussion about an unfortunately bleak topic. It’s sad to hear that so many great developers, working on so many great products, are doing so poorly from it. And it seems like a lot of it is mostly out of their control (if I thought I knew a better way, I’d be doing it!). David Smith summarizes most of the discussion (with an awesome list of links):

It has never been easy to make a living (whatever that might mean to you) in the App Store. When the Store was young it may have been somewhat more straightforward to try something and see if it would hit. But it was never “easy”. Most of my failed apps were launched in the first 3 years of the Store. As the Store has matured it has also become a much more efficient marketplace (in the economics sense of market). The little tips and tricks that I used to be able to use to gain an ‘unfair’ advantage now are few and far between.

The basic gist seems to be “it’s nearly impossible to make a living off iOS apps, and it’s possible but still pretty hard to do off OS X.” Most of us, I think, would tend to agree you can charge more for OS X software than you can for iOS because OS X apps are usually “bigger” or more fleshed out, but I think that’s only half the story.

The real reason why it’s so hard to sell iOS apps is that iOS apps are really just websites. Implementation details aside, 95 per cent of people think of iOS apps the same way they think about websites. Websites that most people are exposed to are mostly promotional, ad-laden and most importantly, free. Most people do not pay for websites. A website is just something you visit and use, but it isn’t a piece of software, and this is the exact same way they think of and treat iOS apps. That’s why indie developers are having such a hard time making money.

(Just so we’re clear, I’ve been making iOS apps for the whole duration of the App Store and I know damn well iOS apps are not “websites.” I’m well aware they are contained binaries that may or may not use the internet or web services. I’m talking purely about perceptions here.)

For a simple test, ask any of your non-developer friends what the difference between an “app” and an “application” or “program” is and I’d be willing to bet they think of them as distinct concepts. To most people, “apps” are only on your phone or tablet, and programs are bigger and on your computer. “Apps” seem to be a wholly different category of software from programs like Word or Photoshop, and the idea that Mac and iOS apps are basically the same on the inside doesn’t really occur to people (nor does it need to, really). People “know” apps aren’t the same thing as programs.

Apps aren’t really “used” so much as they are “checked” (how often do people “check Twitter” vs “use Twitter”?), which is usually a brief “visit” measured in seconds (of, ugh, “engagement”). Most apps are used briefly and fleetingly, just like most websites. iOS, then, isn’t so much an operating system as a browser, with the App Store as its crappy search engine. Your app is one of limitless other apps, just like your website is one of limitless other websites. The ones people have heard of are the ones that are promoted and advertised, or the ones in their own niches.

I don’t know how to make money in the App Store, but if I had to, I’d try to learn from financially successful websites. I’d charge a subscription and I’d provide value. I’d make an app that did something other than have a “feed” or a “stream” or “shared moments.” I’d make an app that helps people create or understand. I’d try new things.

I couldn’t charge $50 for an “app” because apps are perceived as not having that kind of value, which I have to agree with (I know firsthand how much work goes into making an app, but that doesn’t make the app valuable), so maybe we need to create a new category of software on iOS, one that breaks out of the “app” shell (and maybe breaks out of the moniker, too). I don’t know what that entails, but I’m pretty sure that’s what we need.

More Inspiration from Magic Ink and Cortex  

Because I’ll never stop linking to Magic Ink:

The future will be context-sensitive. The future will not be interactive.

Are we preparing for this future? I look around, and see a generation of bright, inventive designers wasting their lives shoehorning obsolete interaction models onto crippled, impotent platforms. I see a generation of engineers wasting their lives mastering the carelessly-designed nuances of these dead-end platforms, and carelessly adding more. I see a generation of users wasting their lives pointing, clicking, dragging, typing, as gigahertz processors spin idly and gigabyte memories remember nothing. I see machines, machines, machines.

Whither Cortex

In my NSNorth 2013 talk, An Educated Guess (which was recorded on video but has not yet been published), I gave a demonstration of a programming tool called Cortex and made the rookie mistake of saying it would be on Github “soon.” Seeing as July 2014 is well past “soon,” I thought I’d explain a bit about Cortex and what has happened since that first demonstration.

Cortex is a framework and environment for application programs to autonomously exchange objects without having to know about each other. This means a Calendar application can ask the Cortex system for objects with a Calendar type and receive a collection of objects with dates. These Calendar objects come from other Cortex-aware applications on the system, like a Movies app, or a restaurant webpage, or a meeting scheduler. The Calendar application knows absolutely nothing about these other applications; all it knows is it wants Calendar objects.

Cortex can be thought of a little bit like Copy and Paste. With Copy and Paste, the user explicitly copies a selected object (like a selection of text, or an image from a drawing application) and then explicitly pastes what they’ve copied into another application (like an email application). In between the copy and paste is the Pasteboard. Cortex is a lot like the Pasteboard, except the user doesn’t explicitly copy or paste anything. Applications themselves either submit objects or request objects.

This, of course, results in quite a lot of objects in the system, so Cortex also has a method of weighing the objects by relevance so nobody is overwhelmed. Applications can also provide their own ways of massaging the objects in the system to create new objects (for example, a “Romantic Date” plugin might lookup objects of the Movie Showing type and the Restaurant type, and return objects of the Romantic Date type to inquiring applications).
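
Since Cortex itself is still private, here is only a hypothetical sketch in Swift (with invented names; this is not Cortex’s real API) of the shape of the idea: applications submit and request typed objects, and plugins massage existing objects into new ones.

    // Hypothetical illustration only; not Cortex's actual API.
    struct CortexObject {
        let type: String                   // e.g. "Calendar", "Movie Showing"
        let payload: [String: String]      // the object's data
        let relevance: Double              // used to rank objects for requesters
    }

    // A plugin massages existing objects into new ones,
    // e.g. Movie Showing + Restaurant -> Romantic Date.
    protocol CortexPlugin {
        func derivedObjects(from existing: [CortexObject]) -> [CortexObject]
    }

    final class CortexSystem {
        private var objects: [CortexObject] = []

        // Applications submit objects without knowing who will consume them.
        func submit(_ object: CortexObject) {
            objects.append(object)
        }

        // Applications request objects by type, ranked by relevance.
        func request(type: String) -> [CortexObject] {
            return objects
                .filter { $0.type == type }
                .sorted { $0.relevance > $1.relevance }
        }
    }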

If this sounds familiar, it’s because it was largely inspired by part of a Bret Victor paper, with lots of other bits mixed in from my research for the NSNorth talk (especially Bush’s Memex and Engelbart’s NLS).

This is the sort of system I’ve alluded to before on my website (for example, in Two Apps at the Same Time and The Programming Environment is the Programming Language, among others).

Although the system I demonstrated at NSNorth was largely a technical demo, it was nonetheless pretty fully featured and, to my delight, was exceptionally well received by those in the audience. For the rest of the conference, I was approached by many excited developers eager to jump in and get their feet wet. Even those who were skeptical were at least willing to acknowledge that, despite its shortcomings, the basic premise of applications sharing data autonomously is a worthwhile endeavour.

And so here I am over a year later with Cortex still locked away in a private repository. I wish I could say I’ve been working on it all this time and it’s ten times more amazing than what I’d originally demoed but that’s just not true. Cortex is largely unchanged and untouched since its original showing last year.

On the plane ride home from NSNorth, I wrote out a to-do list of what needed to be done before I’d feel comfortable releasing Cortex into the wild:

  1. Writing a plugin architecture. The current plan is to have the plugins be normal cocoa plugins which will be run by an XPC process. That way if they crash they won’t bring down the main part of the system. This will mean the generation of objects is done asynchronously, so care will have to be taken here.

  2. A story for debugging Cortex plugins. It’s going to be really tricky to debug these things, and if it’s too hard then people just aren’t going to develop them. So it has to be easy to visualize what’s going on and easy to modify them. This might mean not using straight compiled bundles but instead using something more dynamic. I have to evaluate what that would mean for people distributing their own plugins, if this means they’d always have to be released in source form.

  3. How are Cortex plugins installed? The current thought is to allow for an install message to be sent over the normal Cortex protocol (currently HTTP) and either send the plugin that way (from a hosting application) or have Cortex itself download and install the plugin from the web.

    How would it handle uninstalls? How would it handle malicious plugins? It seems like the user is going to have to grant permission for these things.

  4. Relatedly, should there be a permissions system for which apps can get/submit which objects for the system? Maybe we want to do just “read and/or write” permissions per application.

The most important issue then, and today, is #2. How are you going to make a Cortex component (something that can create or massage objects) without losing your mind? Applications are hard to make, but they’re even harder to make when we can’t see our data. Since Cortex revolves around data, in order to make anything useful with it, programmers need to be able to see that data. Programmers are smart, but we’re also great at coping with things, with juuuust squeaking by with the smallest amount of functionality. A programmer will build-run-kill-change-repeat an application a thousand times before stopping and taking the time to write a tool to help visualize.

I do not want to promote this kind of development style with Cortex, and until I can find a good solution (or be convinced otherwise) I don’t think Cortex would do anything but languish in public. If this sounds like an interesting problem to you, please do get in touch.

Wil Shipley on Automated Testing  

Classic Shipley:

But, seriously, unit testing is teh suck. System testing is teh suck. Structured testing in general is, let’s sing it together, TEH SUCK.

“What?!!” you may ask, incredulously, even though you’re reading this on an LCD screen and it can’t possibly respond to you? “How can I possibly ship a bug-free program and thus make enough money to feed my tribe if I don’t test my shiznit?”

The answer is, you can’t. You should test. Test and test and test. But I’ve NEVER, EVER seen a structured test program that a) didn’t take like 100 man-hours of setup time, b) didn’t suck down a ton of engineering resources, and c) actually found any particularly relevant bugs. Unit testing is a great way to pay a bunch of engineers to be bored out of their minds and find not much of anything. [I know – one of my first jobs was writing unit test code for Lighthouse Design, for the now-president of Sun Microsystems.] You’d be MUCH, MUCH better off hiring beta testers (or, better yet, offering bug bounties to the general public).

Let me be blunt: YOU NEED TO TEST YOUR DAMN PROGRAM. Run it. Use it. Try odd things. Whack keys. Add too many items. Paste in a 2MB text file. FIND OUT HOW IT FAILS. I’M YELLING BECAUSE THIS SHIT IS IMPORTANT.

Most programmers don’t know how to test their own stuff, and so when they approach testing they approach it using their programming minds: “Oh, if I just write a program to do the testing for me, it’ll save me tons of time and effort.”

There’s only three major flaws with this: (1) Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you’re writing ALL THE CODE you wrote for the original program plus some more test stuff, (2) YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it’s going to be used by people, and (3) It’s actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.

Sing it.

Doomed to Repeat It  

A mostly great article by Paul Ford about the recycling of ideas in our industry:

Did you ever notice, wrote my friend Finn Smith via chat, how often we (meaning programmers) reinvent the same applications? We came up with a quick list: Email, Todo lists, blogging tools, and others. Do you mind if I write this up for Medium?

I think the overall premise is good but I do have thoughts on some of it. First, he claims:

[…] Doug Engelbart’s NLS system of 1968, which pioneered a ton of things—collaborative software, hypertext, the mouse—but deep, deep down was a to-do list manager.

This is a gross misinterpretation of NLS and of Engelbart’s motivations. While the project did birth some “productivity” tools, it was much more a system for collaboration and about Augmenting Human Intellect. A computer scientist not understanding Engelbart’s work would be like a physicist not understanding Isaac Newton’s work.

On to-do lists, I think he gets closest to the real heart of what’s going on (emphasis mine):

The implications of a to-do list are very similar to the implications of software development. A task can be broken into a sequence, each of those items can be executed in turn. Maybe programmers love to do to-do lists because to-do lists are like programs.

I think this is exactly it. This is “the medium is the message” 101. Of course programmers are going to like sequential lists of instructions; it’s what they work in all day long! (Exercise for the reader: what part of a programmer’s job is like email?)
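
To push the analogy to its slightly silly conclusion, a to-do list really does read like a tiny program: an ordered sequence of steps, each with a bit of “done” state. A throwaway sketch (mine, not Ford’s):

```swift
// A to-do list, squinted at until it looks like a program: an ordered
// sequence of steps, executed in turn, each with a "done" flag.
struct Task {
    let title: String
    var done = false
}

var today = [
    Task(title: "Respond to emails"),
    Task(title: "Review the new prototype"),
    Task(title: "Write the follow-up blog post"),
]

for index in today.indices {
    print("Doing: \(today[index].title)")
    today[index].done = true   // the satisfying part
}
```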

His conclusion is OK but I think misses the bigger cause:

Very little feels as good as organizing all of your latent tasks into a hierarchical list with checkboxes associated. Doing the work, responding to the emails—these all suck. But organizing it is sweet anticipatory pleasure.

Working is hard, but thinking about working is pretty fun. The result is the software industry.

The real problem is in those very last words: software industry. That’s what we do; we’re an industry, but we pretend to be, or at least expect, a field [of computer science]. As Alan Kay says, computing isn’t really a field but a pop culture.

It’s not that email is broken or productivity tools all suck; it’s just that culture changes. People make email clients or to-do list apps in the same way that theater companies perform Shakespeare plays in modern dress. “Email” is our Hamlet. “To-do apps” are our Tempest.

Culture changes but mostly grows with the past, whereas pop culture takes almost nothing from the past and instead demands the present. Hamlet survives in our culture by being repeatedly performed, but more importantly it survives because it is studied as a work of art. The word “literacy” doesn’t just mean reading and writing; it also implies having a body of work included and studied by a culture.

Email and to-do apps aren’t cultural in this sense because they aren’t treated by anyone as “great works”; they aren’t revered or built upon. They are regurgitated from one generation to the next without actually being studied and improved upon. Is it any wonder mail apps of yesterday look so much like those of today?

Step Away from the Kool-Aid  

Ben Howell on startups and compensation:

Don’t, under any circumstances, work for less than market rate in order to build other people’s fortunes. Simply don’t do it. Cool product that excites you, so in turn you’ll work for a fraction of the market rate? Call that crap out for what it is: a CEO of a company asking you to help build his fortune while at the same time returning you squat.

String Constants  

Brent Simmons on string constants:

I know that using a string constant is the accepted best practice. And yet it still bugs me a little bit, since it’s an extra level of indirection when I’m writing and reading code. It’s harder to validate correctness when I have to look up each value — it’s easier when I can see with my eyes that the strings are correct.[…]

But I’m exceptional at spotting typos. And I almost never have cause to change the value of a key. (And if I did, it’s not like it’s difficult. Project search works.)

I’m not going to judge Brent here on his solution, but it seems to me like this problem would be much better solved by using string constants, provided Xcode actually showed you the damn values of those constants in auto-complete.
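
For the sake of concreteness, here’s a generic sketch of the trade-off (not Brent’s actual code, and the key name is made up). The constant hides its value at the call site, which is exactly where a typo would otherwise be staring you in the face:

```swift
import Foundation

// The accepted best practice: define the key once, use the constant everywhere.
// The complaint: at the call site you see the name, not the value, so a typo
// in the definition hides from your eyes.
let lastOpenedDocumentKey = "lastOpenedDocment"   // oops, would you spot this here?

// Writing with the constant in one place...
UserDefaults.standard.set("/Users/me/Notes.txt", forKey: lastOpenedDocumentKey)

// ...and reading with a raw literal in another silently never matches it.
let path = UserDefaults.standard.string(forKey: "lastOpenedDocument")
```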

When developers resort to crappy hacks like this, it’s a sign of a deficiency in the tools. If you find yourself doing something like this, you shouldn’t resort to tricks; you should say “I know a computer can do this for me” and you should demand it. (rdar://17668209)

Remote Chance

I recently stumbled across an interesting 2004 project called Glancing, whose basic principle is replicating the subtle social cues of in-person office relationships (eye contact, nodding, etc.) for people using computers in different physical locations.

The basic gist (as I understand it) is that people, when in person, don’t merely start talking to one another but first have an initial conversation through body language. We glance at each other and try to make eye contact before actually speaking, hoping for the glance to be reciprocated. In this way, we can determine whether we should even proceed with the conversation at all, or if maybe the other person is occupied. Matt Webb’s Glancing exists as a way to bridge that gap with a computer (read through his slide notes; they’re detailed but aren’t long). You can look up at your screen and see who else has recently “looked up” too.
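
As a very rough sketch of that mechanic (my reading of the slides, not Webb’s implementation): each person’s client records when they look up, and the only question anyone can ask is who else has looked up recently.

```swift
import Foundation

// A bare-bones model of the Glancing mechanic: the only signal is
// "this person looked up at time t", and the only query is
// "who has looked up in the last little while?" Names are mine, not Webb's.
struct GlanceBoard {
    private var lastGlance: [String: Date] = [:]

    // Record that a person glanced at the group.
    mutating func glance(from person: String, at time: Date = Date()) {
        lastGlance[person] = time
    }

    // Everyone who has glanced within the window, i.e. the people you
    // might catch eye contact with if you look up too.
    func recentlyLookedUp(within window: TimeInterval = 5 * 60,
                          asOf now: Date = Date()) -> [String] {
        lastGlance
            .filter { now.timeIntervalSince($0.value) <= window }
            .keys
            .sorted()
    }
}
```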

Remote work is a tricky problem to solve. We do it occasionally at Hopscotch when working from home, and we’re mostly successful at it, but as a friend of mine recently put it, it’s harder to have a sense of play when experimenting with new features. There is an element of collaboration, of jamming together (in the musical sense) that’s lacking when working over a computer.

Maybe there isn’t really a solution to it and we’re all looking at it the wrong way. Telecommuting has been a topic of research and experimentation for decades and it’s never fully taken off. It’s possible, as Neil Postman suggests in Technopoly, that ours is a generation that can’t think of a solution to a problem outside of technology, and that maybe this kind of collaboration isn’t compatible with technology. I see that as a possibility.

But I also think there’s a remote chance we’re trying to graft collaboration onto non-collaborative work environments as an after-the-fact feature. I work in Xcode and our designer works in Sketch, and when we collaborate, neither of our respective apps is really much involved. Both apps are designed with a single user in mind. Contrast this with Doug Engelbart and SRI’s NLS system, built from the ground up with multi-person collaboration in mind, and you’ll start to see what I mean.

NLS’s collaboration features seem, in today’s world at least, like screen sharing with multiple cursors. But they extend beyond that, because the whole system was designed from the get-go to support multiple people using it.

How do we define play, how do we jam remotely with software?

What Do We Save When We Save the Internet?  

Ian Bogost in a blistering look at today’s internet and Net Neutrality:

“We believe that a free and open Internet can bring about a better world,” write the authors of the Declaration of Internet Freedom. Its supporters rise up to decry the supposedly imminent demise of this Internet thanks to FCC policies poised to damage Network Neutrality, the notion of common carriage applied to data networks.

Its zealots paint digital Guernicas, lamenting any change in communication policy as atrocity. “If we all want to protect universal access to the communications networks that we all depend on to connect with ideas, information, and each other,” write the admins of Reddit in a blog post patriotically entitled Only YOU Can Protect Net Neutrality, “then we must stand up for our rights to connect and communicate.”

[…]

What is the Internet? As Evgeny Morozov argues, it may not exist except as a rhetorical gimmick. But if it does, it’s as much a thing we do as it is an infrastructure through which to do it. And that thing we do that is the Internet, it’s pockmarked with mortal regret:

You boot a browser and it loads the Yahoo! homepage because that’s what it’s done for fifteen years. You blink at it and type a search term into the Google search field in the chrome of the browser window instead.

Sitting in front of the television, you grasp your iPhone tight in your hand instead of your knitting or your whiskey or your rosary or your lover.

The shame of expecting an immediate reply to a text or a Gchat message after just having failed to provide one. The narcissism of urgency.

The pull-snap of a timeline update on a smartphone screen, the spin of its rotary gauge. The feeling of relief at the surge of new data—in Gmail, in Twitter, in Instagram, it doesn’t matter.

The gentle settling of disappointment that follows, like a down duvet sighing into the freshly made bed. This moment is just like the last, and the next.

You close Facebook and then open a new browser tab, in which you immediately navigate back to Facebook without thinking.

The web is a brittle place, corrupted by advertising and tracking (see also “Is the Web Really Free?”). I won’t spoil the ending, but I’m at least willing to agree with his conclusion.