Speed of Light

Swift Mailing Lists are Self-Selecting

When Swift went open source, Apple also open sourced the process by which Swift changes: the swift-evolution mailing list and corresponding git repository. This is huge because not only does the team share what they plan to do to Swift in the future, they’re also actively asking for feedback from and talking with the greater developer community about it.

Apple clearly wants Swift to be the programming language for the coming decades, not just for its own platforms, but for systems and application programming everywhere. And so to help it achieve this, Apple is open and listening for feedback on the mailing lists.

There is one problem with this though: the mailing list is self-selecting. Generally speaking, if you’re an active participant on the swift-evolution list, you’re already quite bought in on Swift. That’s mostly a good thing, since being bought in on Swift means you care a lot about its future, and you have a lot of context from working with current Swift, too.

But by only getting suggestions and feedback about Swift’s evolution from those already bought in on Swift, the Swift team is largely neglecting those who are not yet on board. This includes those holding out with Objective-C on Apple’s platforms, and those using different languages on other platforms, too. These groups have almost no influence on Swift’s future.

Let’s take a recent example: the “Swiftification” of Cocoa APIs. The basic premise, “Cocoa, the Swift standard library, maybe even your own types and methods—it’s all about to change,” might be good for Swift programmers, but I imagine stuff like this is the exact reason many Objective-C programmers avoid Swift in the first place: they quite like how Objective-C APIs read. Although the Swift team is actively asking for feedback, my guess is many Objective-C developers’ reaction to this would be immediate rejection. That sort of reaction doesn’t bode well for providing feedback on swift-evolution, and so Swift becomes less likeable to these Objective-C developers.
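
To make the flavor of the change concrete, here’s a rough sketch of the kind of renaming being discussed (the exact spellings were still being debated on swift-evolution, so treat this string example as an illustration rather than the final API):

    import Foundation

    let text = "Swift changes how Cocoa reads"

    // The Objective-C-era spelling, as it was originally bridged into Swift:
    // let match = text.rangeOfString("cocoa", options: .CaseInsensitiveSearch)

    // The "Swiftified" spelling, with the words the type system makes
    // redundant pruned away:
    let match = text.range(of: "cocoa", options: .caseInsensitive)

To a Swift programmer the second call reads cleaner; to an Objective-C programmer who likes how their APIs read aloud, it can feel like something was taken away.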

What to do? I’m not entirely sure, but the first thing that springs to mind isn’t a swift-evolution list, but an apple-platforms-evolution list. If Apple eventually wants to get Objective-C programmers using Swift, I think the discussions about Swift’s future need to better include them.

Grandpa! You Can’t Say That!

When I was a kid, I distinctly remember saying “Grandpa! You can’t say that!” whenever my grandfather would say something a little…dated. It wasn’t that he was a bad person, just that he came from a time when certain remarks (about race, gender, ability, etc.) weren’t considered inappropriate by his social groups. But they had “become” inappropriate to my ears (I’m not trying to excuse anything he’d said, just trying to explain that, to him, there was nothing wrong with the words he used).

The thing is, I know there are things we say today that future children will scoff at. They’ll tell me “you can’t say that!” and I’m trying to keep my mind open so that if it happens, I can learn from it.

But I want to get a head start on that now. I’m trying to figure out the words and phrases that’ll become socially unacceptable to use in the future so I can stop using them now. And of course, if I think something will be inappropriate in the future, that’s a pretty strong indicator it is inappropriate already. I’ll also note this goes far deeper than just the words I choose to use: there are entire mindsets that go behind these words as well, and I want to steer my mind well away from them.

So here’s my candidate list of things I shouldn’t say (which I’ve already been in process of removing from my language), in no particular order:

  • “Third World” From Wikipedia, “The term Third World arose during the Cold War to define countries that remained non-aligned with either NATO, or the Communist Bloc.” This seems like a pretty good candidate for not saying. The “third world” seems to describe a destitute world unworthy of discussion; total othering. Try suggestions from this article instead?

  • “You guys” Using ‘guys’ as a generic plural word for a bunch of people of any gender. There are numerous problems with describing a group of mixed-gender people as ‘guys,’ but I think there’s also a problem with describing a group of all male people as ‘guys,’ because it reinforces male as the norm. Even in a statement like “the guys on the Khan Academy iOS dev team,” where it’s true we all identify as male, using the term “guys” reinforces the idea that we should all be guys. Try “folks,” “team,” “comrades,” or simply “people” instead?

  • “That’s crazy!” I’ve linked to this great post by Ash Furrow before about using words associated with mental illness to describe something unbelievable. This is pretty problematic and unempathetic language. Try “ridiculous,” “outrageous,” or “magical” instead?

These are the ones which immediately sprang to mind and which I’m already trying to remove from my lexicon. They may not be widely socially unacceptable yet, but I’m pretty sure they will be soon enough (this is a good thing!).

What’s on your list? What do you think your grandchildren would scold you for saying that you say now?

The Weirdest Kind of Procrastination

Here’s a weird thing I do: sometimes I procrastinate doing very small things, for no real reason at all.

I’ve read lots about procrastination (which is a fun way to procrastinate, by the way), and in general, it seems we procrastinate because distractions are so much more instantly gratifying and pleasurable than doing the Tough Work we need to get done. That makes a certain amount of sense and I definitely battle with this (hello Twitter, Instagram, Tumblr, and the whole goddamn internet today). But the kind of procrastination I’m talking about is different.

Sometimes I’ll have a small task to do, like “message my friend and tell him we’ll meet him at 7 for dinner.” That task couldn’t be easier. Look, he’s even online right now. Just, message him! And yet, oddly, my brain says “oh this is so easy, I can just do it later, no rush no worries.” Or maybe I’m writing some code and I get a compile error that I’m missing a character. It’s a one letter fix, but I say “oh that’s easy, I can fix that in a little bit” and I go do something else to bide my time.

I don’t entirely know why I do this. At some level, my brain knows the thing to do is easy, but it won’t do it. It kind of feels like my brain thinks because the task is so easy, it’s not really worth doing at all?

It kind of reminds me of this TED talk by Derek Sivers, where he says “telling someone your goal makes you less likely to do it.” Maybe that’s it? Maybe my brain thinks a task is so easy that it’s already done?

No, UI is the New UI

Today I stumbled across “No UI is the New UI,” an article extolling the upcoming demise of the traditional Graphical User Interface, in favour of text messaging interfaces like Facebook M and “Magic.”

Tony Aubé writes:

The rise in popularity of these apps recently brought me to a startling observation: advances in technology, especially in AI, are increasingly making traditional UI irrelevant. As much as I dislike it, I now believe that technology progress will eventually make UI a tool of the past, something no longer essential for Human-Computer interaction. And that is a good thing.

While I think conversational interfaces, whether powered by natural or artificial intelligence, have a long and prosperous life ahead of them, I don’t think they should replace traditional interfaces in most cases.

Conversation is fantastic for certain tasks, like requesting or negotiating broad information. Negotiating with software about which restaurant you’d like delivery from is probably much nicer than filling out a web form, but you can do so much more with a computer.

(Also, have you noticed how many of these conversational UI products in North America are really really first-world-problemy? Arrange my flights, send me food, clean my house. Oof!)

He continues,

As a designer, this is an unsettling trend to internalize. In a world where computers can see, listen, talk, understand and reply to you, what is the purpose of a user interface? Why bother designing an app to manage your bank account when you could just talk to it directly?

I’ll tell you what the purpose of a user interface is: it’s to provide much richer information, most of it visually, and to allow for deeper interaction with the thing you’re trying to understand. Let’s look at both of these:

The most important part about the Graphical User Interface is that it’s graphical. The eye is crazy fast at soaking up information. The eye can see shapes and colours, can determine hierarchies of importance and can compare choices like nobody’s business. We also have an entire field of work dedicated to the eye, one that’s been practiced and studied for centuries, called Graphic Design.

For spoken-word conversational UIs, you get one morsel of information after another. You can’t go back, you can’t make comparisons, you’ve got to remember it all. Everything must be described. In this mode, “huge” and “tiny” are arbitrary sounds whose meanings don’t seem as different as they are meant to be.

For written-word conversational UIs, the information is a little more spread out and you can technically read backwards, but you’re still left with arbitrary symbols (we know them as letters and numbers) trying to relay information.

That’s the first part, the visual richness of graphics. The second part is how we humans interact back with the computer. In the graphical user interface, we have pointing devices (mice, fingers, pencils, etc) for indicating our interest and for exploring information. This allows us to directly manipulate the thing we care about. We can point at them, select them, move them, apply things to them. The list goes on. This lets us arbitrarily manipulate things in space, whereas with conversational UIs we end up needing to manipulate things in time, like we do word after word and sentence after sentence.

The overarching theme here isn’t that graphics are better than text, or buttons are better than SMS, it’s that these different interfaces force us to think in very different ways.

We change with our tools, with the media we use for communication. We communicate with one another, yes, but we also communicate with ourselves. The representations we choose help us think. The graphical user interface as we know it today is far from perfect. It will continue to change, but graphics are here to stay as an important part of our society.


See also Bret Victor’s lil post about graphics and computers.

MVVM is Not Good and Not MVVM is Good

Last week, Soroush Khanlou published “MVVM is Not Very Good,” wherein he describes the “Model-View-ViewModel” pattern as the iOS Community defines it, and considers it:

an anti-pattern that confuses rather than clarifies. View models are poorly-named and serve as only a stopgap in the road to better architecture. Our community would be better served moving on from the pattern.

Today, Ash Furrow published his thoughts on the matter in “MVVM is Exceptionally OK,” wherein he agrees with the main tenets of Soroush’s article, and suggests some modifications to the pattern:

MVVM is poorly named. Why don’t we rename it? Great idea. MVVM is a pretty big “umbrella term”, and precise language would help beginners get started.

Both articles, I think, are really arguing the same things, but maybe from different directions:

  1. Plain old MVC leads to huge, disorganized, unmaintainable code (most of it in a UIViewController).
  2. We need better ways of organizing our code.
  3. MVVM, as the iOS Community interprets it, is not really good enough.

Of the defences I’ve seen for MVVM, almost all of them suggest that, with some tweaks, MVVM is actually a good thing. I think, broadly, this is true, but you can’t really change MVVM without it becoming something that is no longer MVVM; the original MVVM remains, as Soroush argued, “not very good.”
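
For reference, here’s a minimal sketch of the pattern as the iOS community commonly interprets it: a view model wraps a model and exposes display-ready values, so the view controller shrinks down to binding labels. (The types here are hypothetical.)

    import Foundation

    struct User {
        let name: String
        let joinDate: Date
    }

    // The "view model" in the commonly-understood sense: it owns presentation
    // logic (formatting, capitalization) so the view controller doesn't have to.
    struct UserViewModel {
        let user: User

        var displayName: String {
            return user.name.uppercased()
        }

        var joinDateText: String {
            let formatter = DateFormatter()
            formatter.dateStyle = .medium
            return formatter.string(from: user.joinDate)
        }
    }

    // In a view controller, the bindings become one-liners:
    // nameLabel.text = viewModel.displayName
    // joinedLabel.text = viewModel.joinDateText

Whether that middle layer deserves its own pattern name, and what we should call it, is more or less the point of contention.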

What’s important to remember is the point of things like MVC and MVVM in the first place. In our industry we tend to call these patterns, but I prefer to call them perspectives: they are mental tools which allow you to see a problem from a different point of view. It’s the old “when all you have is a hammer, everything looks like a nail” routine, where the joke is your hammer is the only perspective you have on the world. The solution to seeing everything through the eyes of a hammer isn’t to start seeing everything through the eyes of a screwdriver, or a paintbrush, but instead to see things from many perspectives.

Programming is a dark goop, hard to see under just one light. But from that goop also emerge new ways to see. Every time you form a new abstraction, no matter how small the method or how colossal the class, you build for yourself a new point of view.

MVC and MVVM are but two narrow views on what we know as programming, a confusing mess in desperate need of more light.

Writing About Programming is Hard

This morning I tweeted

Writing is hard. Programming is hard. But writing about programming?

OK, still very hard.

The general way to interpret this is “writing about programming topics is hard” (OOP vs Functional? Swift vs Objective-C? Should you use goto?) and yes, it’s very hard to write about those things! (How do you talk about a program? How do you choose which parts to show? Why can’t a reader execute and explore my program?) But this road is at least well travelled, and I feel like I can do so decently, myself.

But harder for me, at least, is writing about programming itself. What is programming? Should everyone learn how to program? What does it mean to learn programming? Is that learning a given language? Is that learning about if statements and map? Is that learning about algorithms? Is that learning about git? Should we treat existing languages as static? Are they all there is to programming?

Is programming for software developers? Is programming a way of understanding problems or a way of causing them?


I’ve worked on programming environments at Hopscotch and Khan Academy. I’ve been through my share of “Hour of Code” ritualized learnings. But I still haven’t found answers to any of those questions.


Last month I took a fun, but fruitless, step towards figuring out some of those questions, and maybe some of those answers. The talk I gave at Brooklyn Swift was a blast, but I think I left with more questions for myself than when I started (apologies and thanks to the audience for any bewilderment they certainly experienced).

The video for that talk is forthcoming, but I’m also trying to refine these ideas in a more presentable, readable format. I may not have the right answers, but I promise to at least ask the right questions.

Keeping Your Classes Shorter Than 250 Lines  

Soroush Khanlou:

One way that Backchannel’s SDK maintains its readability is through simplicity. No class in Backchannel is longer than 250 lines of code. When I was newer at programming, I thought that Objective-C’s nature made it hard to write short classes, but as I’ve gotten more experience, I’ve found that the problem was me, rather than the language. True, UIKit doesn’t do you any favors when it comes to creating simplicity, and that’s why you have to enforce it yourself.

This is a fantastic guideline and a wonderful post. I stick to a similar guideline in my code and had been wanting to write an article about it for a while. Soroush saved me the trouble.

The exact number isn’t what’s important here (my guideline is to keep Swift files under 100 lines); what’s important is giving yourself a metric, a general feeling for when a piece (class, struct, enum, whatever) of your code gets too big. For me, when a piece hits about a hundred lines, it’s generally time to start breaking things out into smaller pieces.

A hundred lines or less (including doc strings, liberal whitespace, and no functional funnybusiness) keeps my code well structured and highly readable.
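
As a concrete illustration of what “breaking things out into smaller pieces” often looks like in UIKit code, here’s a hypothetical sketch: pull a table view’s data source out of its view controller and into its own small type.

    import UIKit

    // A small, single-purpose type that would otherwise bloat the view controller.
    final class BooksDataSource: NSObject, UITableViewDataSource {
        var books: [String] = []

        func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
            return books.count
        }

        func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
            let cell = tableView.dequeueReusableCell(withIdentifier: "BookCell", for: indexPath)
            cell.textLabel?.text = books[indexPath.row]
            return cell
        }
    }

    final class BooksViewController: UITableViewController {
        // Hold a strong reference; a table view's dataSource property is weak.
        private let booksDataSource = BooksDataSource()

        override func viewDidLoad() {
            super.viewDidLoad()
            tableView.register(UITableViewCell.self, forCellReuseIdentifier: "BookCell")
            booksDataSource.books = ["Mindstorms", "The Educated Mind"]
            tableView.dataSource = booksDataSource
        }
    }

Each piece now fits comfortably under the line-count guideline, and each one has a single job you can read in one sitting.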

1000 Books

A year ago I gave myself a challenge: read a thousand books in my lifetime. I decided to start counting books I’d read since November 14, 2014 (although I’d read many books before this, I really only wanted to start counting then, so I could better catalogue them).

I’ve completely fallen in love with reading since finishing university. I always had lots of books around as a kid, but I think I enjoyed collecting them more than I enjoyed reading them. And having them imposed on me by school didn’t help either. So I didn’t do much pleasure reading throughout my school years.

A few things changed when I got out of school. For starters, nobody was telling me what I had to read, so I could do what I wanted. Secondly, my then-girlfriend (now wife, yay!) is an avid reader, and that encouraged me to do the same. Finally, I was starting to read more and more interesting things in Computer Science and realized if I wanted to do anything interesting in my career, it’d probably help me to be well read. I felt like I had a lot of catching up to do, but over the past few years, my newfound love of reading has been a fulfilling experience.

A thousand books is a lot for me. I’m not a super fast reader, but it seemed like a good goalpost to work towards in life. If I were to read a book a week (roughly 50 per year), it’d take me about 20 years to read a thousand. No easy feat! This year I managed 24, about two a month on average. Some are graphic novels which are easy for me to zip through, while others are dense books on education theory, which tend to be a slog. Most are comfortably in between, but nearly all of them have been enjoyable.

The books are, in order:

  • Mindstorms by Seymour Papert
  • Brave New World by Aldous Huxley
  • Toward a Theory of Instruction by Jerome Bruner
  • Changing Minds by Andrea diSessa
  • How to do Things with Video Games by Ian Bogost
  • The Circle by Dave Eggers
  • Annotated Declaration of Independence + US Constitution by Richard Beeman
  • Experience & Education by John Dewey
  • A Brief History of Time by Stephen Hawking
  • Best American Comics 2014 by Scott McCloud
  • Making Comics by Scott McCloud
  • Shades of Grey by Jasper Fforde
  • A Theory of Fun for Game Design by Raph Koster
  • The Sixth Extinction by Elizabeth Kolbert
  • Sirens of Titan by Kurt Vonnegut
  • Geeks Bearing Gifts by Ted Nelson
  • The End of Education by Neil Postman
  • Thinking Fast and Slow by Daniel Kahneman
  • Ode to Kirihito Part 1 by Osamu Tezuka
  • Scott Pilgrim Volume 1 by Bryan Lee O’Malley
  • The Educated Mind by Kieran Egan
  • Dr. Slump vol 4 by Akira Toriyama
  • Maps of the Imagination by Peter Turchi
  • Headstrong: 52 Women Who Changed Science-and the World by Rachel Swaby

Here’s to the next 976!

Modern Medicine  

A great essay by Jonathan Harris on how software shapes us, and the responsibilities latent in our industry:

We inhabit an interesting time in the history of humanity, where a small number of people, numbering not more than a few hundred, but really more like a few dozen, mainly living in cities like San Francisco and New York, mainly male, and mainly between the ages of 22 and 35, are having a hugely outsized effect on the rest of our species.

Through the software they design and introduce to the world, these engineers transform the daily routines of hundreds of millions of people. Previously, this kind of mass transformation of human behavior was the sole domain of war, famine, disease, and religion, but now it happens more quietly, through the software we use every day, which affects how we spend our time, and what we do, think, and feel.

In this sense, software can be thought of as a new kind of medicine, but unlike medicine in capsule form that acts on a single human body, software is a different kind of medicine that acts on the behavioral patterns of entire societies.

The designers of this software call themselves “software engineers”, but they are really more like social engineers.

Through their inventions, they alter the behavior of millions of people, yet very few of them realize that this is what they are doing, and even fewer consider the ethical implications of that kind of power.

(via Andy)

The Web Doozy

Oof this whole demise of the web is such a doozy. I don’t think it necessarily has much to do with the technology, though (though the web technology isn’t really the greatest to begin with). But the web is and has always been a social phenomenon — that’s why people got computers! They wanted computers to get on the web, and they wanted to get on the web because everyone else was getting on the web! It was novel and it was exciting and it was entertaining. It brought the world to your desk — emails, newses, games.

Computers before the web were basically limited to whatever software you bought (from a store, usually). They could do lots of things, but it was very much a “one at a time” thing. Then the web comes along and suddenly you’ve got endlessness in front of you. I think in many ways the web is (and has always been) a lot closer to TV than it has been to “traditional” software. The web is your “there’s 400 channels and nothin’s on” situation, except there are infinite channels and still nothing is on.

And so I think it’s really like TV because it encourages almost twitchy behaviour, via links. “Hey what’s this? and this? and that? and these?” And it’s very entertaining and stimulating (or at least minimum viable stimulation). Much like TV where when one show ends, another comes on right after it, so too is the web neverending.

And all of that is to say nothing of the content of the web! which I think has become more and more like TV lately. The links-as-channel-changers really caught on and so every website fights to keep you around. Also as bandwidth increases, photos and video become easier to use, and so now it’s easier to look at pictures and, much like TV, be entertained by video. And now you can have many videos playing at once in a feed stream and O the humanity the people making the stuff don’t really care very much so long as you’re watching it and whatever way they’re making money off it.

Reality TV and social networks feel very contemporary to me. Facebook (and really any social network) looks quite a lot like reality tv if you want it to look like reality tv. “Haha, I know Jersey Shore is stupid, I just watch it to make fun of it,” is even more compelling when it’s your idiot cousin on Facebook. It’s The Truman Show except you’re the one starring in it and everybody else you know is also starring in it and the door at the edge of the ocean is actually open but you can’t walk through it because you know you can’t watch once you’re outside. And but so now that everybody’s in this reality tv show nobody cares where the set is, if it’s on the web or if it’s on an app it makes no difference to them. They don’t see it as a democracy versus a dictatorship, they see it as VHS versus a DVD.

The good news is anyone (with enough money) can remake the web. The bad news is almost nobody cares or wants to.

It only takes 40 seconds to watch every single line spoken by people of color in “Her”  

William Hughes:

The film industry’s problems with representing people of color in mainstream movies are well known and documented, but it can still be shocking when someone finds a way to display them in a new and meaningful way. Cue actor Dylan Marron—known to podcasting fans as Carlos the scientist from the popular Welcome To Night Vale—who recently began posting videos on his YouTube channel of every line spoken by people of color in various popular films. And, surprise, surprise, it turns out that they’re not very long videos at all.

Her has fewer than 46 seconds of dialog from people of colour.

The Life Cycle of Programming Languages  

Betsy Haibel:

Rails, a popular Ruby-based web framework, was born in 2005. It billed itself as an “opinionated” framework; creator David Heinemeier Hansson likes to characterize Rails as “omakase,” his culturally appropriative way of saying his technical decisions are better than anyone else’s.

Rails got its foothold by being the little outsider that stood against enterprise Java’s vast monoliths and excesses: programming excesses, workflow excesses, and certainly its excesses of corporate politesse. In two representative 2009 pieces, DHH described himself as a “R-Rated Individual,” who believed innovation required “a few steps over [the line].” The line, in this case, was softcore pornography presented in a talk of Matt Aimonetti’s; Aimonetti did not adequately warn for sexual content, and was largely supported in his mistakes by the broader community. Many women in Ruby continue to view the talk’s fallout as a jarring, symbolic wound. […]

Technical affiliations, as Yang and Rabkin point out, are often determined by cultural signaling as much or more than technical evaluation. When Rails programmers fled enterprise Java, they weren’t only fleeing AbstractBeanFactoryCommandGenerators, the Kingdom of Nouns. They were also fleeing HR departments, “political correctness,” structure, process, heterogeneity. The growing veneer of uncool. Certainly Rails’ early marketing was more anti-enterprise, and against how Java was often used, than it was anti-Java — while Java is more verbose, the two languages are simply not that different. Rails couldn’t sell itself as not object-oriented; it was written in an OO language. Instead, it sold itself as better able to leverage OO. While that achievement sounds technical on the surface, Rails’ focus on development speed and its attacks on enterprise architects’ toys were fundamentally attacks on the social structures of enterprise software development. It was selling an escape from an undesirable culture; its technical advantages existed to enable that escape.

It’s easy to read these quotes and see them as an isolated example, but they’re not. They’re evidence of larger systems and structures at play, and Rails is but one example of that.

More and more I realize our technology doesn’t exist in a vacuum. Who creates it and who benefits from it usually depend on large social systems, and we need to be mindful of them and work to stop them from marginalizing groups.

You could do worse than reading Model View Culture’s outlook on that.

“I’m 28, I just quit my tech job, and I never want another job again”  

Eevee:

I don’t have a problem with trying to reinvent the wheel for its own sake, just to see if I can make a better wheel. But that’s not what we were doing. We were reinventing the wheel when the goal was to build a car, and the existing wheel was just too round or not round enough, and while we’re at it, let’s rethink that whole windshield idea, and I don’t know what a carburetor is so we probably don’t need it, and wow this is taking a long time so maybe we should hire a hundred more people? You don’t even get the satisfaction of tinkering with the wheel, because the car is so far behind schedule that your wheel will be considered finished as soon as it rolls well enough.

Sustainable Indie Apps

Yesterday Brent Simmons wrote about how unsustainable it is to be an indie iOS developer:

Yes, there are strategies for making a living, and nobody’s entitled to anything. But it’s also true that the economics of a thing may be generally favorable or generally unfavorable — and the iOS App Store is, to understate the case, generally unfavorable. Indies don’t have a fighting chance.

And that we might be better off doing indie iOS as a labour of love instead of a sustainable business:

Write the apps you want to write in your free time and out of love for the platform and for those specific apps. Take risks. Make those apps interesting and different. Don’t play it safe. If you’re not expecting money, you have nothing to lose.

He suggests one reason for the unsustainability in the App Store is the fact that it’s crowded, that it’s too easy to make apps:

You might think that development needs to get easier in order to make writing apps economically viable. The problem is that we’ve seen what happens when development gets easier — we get a million apps on the iOS App Store. The easier development gets, the more apps we see.

Allen Pike, responding to Brent’s article, brings up a similar sentiment (which Brent later quoted):

However, when expressing frustration with the current economics of the App Store, we need to consider the effect of this mass supply of enthusiastic, creative developers. As it gets ever easier to write apps, and we’re more able to express our creativity by building apps, the market suffers more from the economic problems of other creative fields.

I think this argument misses the problem and misses it in a big way. It has little to do with how many people are making apps (in business, this is known as “competition” and it’s an OK thing!). The problem is that people aren’t paying for apps because people don’t value apps, generally speaking (I’ve written about this before, too).

You might ask, “why don’t they value apps?” but I think if you turn the question around to “why aren’t apps valuable to people?” it becomes a little easier to see the problem. Is it really so unbelievable the app you’re trying to sell for 99 cents doesn’t provide that much value to your customers? Your customers don’t care about you making it up in volume, they don’t care about the reach of the app store, they only care about the value of your software.

(Brief interlude to say that of course, “value” is not the only (or even necessarily, the most important) factor when it comes to a capitalist market. We’re taught that customers decide solely based on value, but they are obviously continuously manipulated by many factors, including advertising, social pressures and structures, etc. But I’m going to give you the benefit of the doubt that you actually care about creating value for people and run with that assumption).

I want to be clear I’m not suggesting price determines value (it may influence perceived value, but it doesn’t determine it entirely), but I’m saying if you’re pricing your app at 99 cents, you’re probably doing so because your app doesn’t provide very much value. Taking a 99 cent app and pricing it at $20 probably isn’t going to significantly increase its value in the eyes of your customers.

What I am saying is if you want a sustainable business, you’ve got to provide value for people. Your app needs to be worth people paying enough money that you can keep your lights on. Omni can get away with charging $50 for their iPhone apps because the people who use them think they’re worth that price. Something like OmniFocus isn’t comparable to a 99 cent app — it’s much more sophisticated and simply does more than most apps.

Value doesn’t have to come from having loads of features, but they might help get you there. Most people probably wouldn’t say “Github is worth paying for because it has a ton of features” but they might say “Github is worth paying for because I couldn’t imagine writing software without it.”

These aren’t new ideas. My friend Joe has been writing and talking about this for a long time (he’s even running a conference about it!), for example.

As Curtis Herbert points out, it’s probably never going to be easy to run your own indie iOS shop. But that doesn’t mean it’s impossible. It just means you’ve got to build a business along the way.

Real

There’s something unsettling about the word “real” when used in phrases like “real job” or “real adult” or “real programming language,” and although I think we often use it without bad intent, I think it often ends up harming and belittling people on the receiving end.

Saying “real something” is implicitly saying other things aren’t real enough or aren’t in some way valid. We often equate being “real” with being professional or paid.

It’s kind of demeaning to say a blogger isn’t a “real writer.” What’s often meant instead is the blogger is not a professional or paid writer, but that doesn’t mean what they write is any less real.

The good news is if you write words then you are a real writer!

As someone who has worked on programming languages for learning at both Hopscotch and Khan Academy, I’ve heard the term “real programming” more than I’d like to admit.

Never in my life have I heard so many paid, professional programmers demean a style of programming so frequently as they do programming languages for learning. I’ve even been told, vehemently so, by the cofounder of a learn-to-JavaScript startup that “we don’t believe in teaching toy languages, we only want to teach people real programming.”

What is a real programming language anyway? I think if something is meant to be educational it’s often immediately dismissed by many programmers. They’ll often say it’s not a real language because you can’t write “real” programs in it, where “real” typically means “you type code” or “you can write an operating system in it.”

The good news is if your program is Turing complete then it is real programming!

Our history has shown time and time again the things we don’t consider “real” usually become legitimized in good time. Why do we exclude people and keep our minds so narrow to the things we love?

Special thanks to Steph Jang for the conversation that inspired this.

College Rape Prevention Program Proves a Rare Success  

Jan Hoffman for the New York Times:

In a randomized trial, published in The New England Journal of Medicine, first-year students at three Canadian campuses attended sessions on assessing risk, learning self-defense and defining personal sexual boundaries. The students were surveyed a year after they completed the intervention.

The risk of rape for 451 women randomly assigned to the program was about 5 percent, compared with nearly 10 percent among 442 women in a control group who were given brochures and a brief information session. […]

Other researchers praised the trial as one of the largest and most promising efforts in a field pocked by equivocal or dismal results. But some took issue with the philosophy underlying the program’s focus: training women who could potentially be victims, rather than dealing with the behavior and attitudes of men who could potentially be perpetrators.

Awareness Is Overrated  

Jesse Singal:

We’re living in something of a golden age of awareness-raising. Cigarette labels relay dire facts about the substances contained within. Billboards and PSAs and YouTube videos highlight the dangers of fat and bullying and texting while driving. Hashtag activism, the newest awareness-raising technique, abounds: After the Isla Vista shootings, many women used the #YesAllWomen hashtag to relate their experiences with misogyny; and a couple months before that, #CancelColbert brought viral attention to some people’s anger with Stephen Colbert over what they saw as a racist joke. Never before has raising awareness about various dangers and struggles been such a visible part of everyday life and conversation.

But the funny part about all of this awareness-raising is that it doesn’t accomplish all that much. The underlying assumption of so many attempts to influence people’s behavior — that they make bad choices because they lack the information to empower them to do otherwise — is, except in a few cases, false. And what’s worse, awareness-raising done in the wrong way can actually backfire, encouraging the negative activities in question. One of the favorite pastimes of a certain brand of concerned progressive, then, may be much more effective at making them feel good about themselves than actually improving the world in any substantive way.

Ash Furrow’s WWDC 2015 Keynote Reactions  

Ash Furrow on Apple’s shift towards the term “Engineer”:

In my native land of Canada, the term “Engineer” is a protected term, like “Judge” or “Doctor” – I can’t just go out and claim to be a software engineer. I am a (lowly) software developer.

What separates engineering from developing?

In my opinion, discipline. […]

Software developers make code. Software engineers make products.

I would actually contend the opposite. I would say software engineers make the code and software developers make the product. I’d also like to point out I’m not trying to make a value judgement on either part of the job, just the distinction.

Developers are those developing the product. To me that includes things like design, resources, planning, etc. A software engineer (in the non-protected-word sense) is somebody who specializes specifically in the act of building the code. This includes things like designing systems and “architecture,” but their primary area of focus is eventually implementing those systems in code. In this view, I see a software engineer as belonging to the set of software developers.

This is why at WWDC we see talks about design, prototyping, audio, and accessibility. These are roles in the realm of software development, right alongside engineering.

A Partial List of Questions About the Native Apple Watch SDK

Marco Arment wrote last week about a partial list of questions about the Native Apple Watch SDK and I thought I’d do the same and add mine below:

Will developers waste months of their time developing tiny widgets for every imaginable kind of app? Will they make watch versions of iPhone apps that really should have just been web pages in the first place? Will the widgets show almost no information thanks to the tiny screen size and the immutable laws of physics?

Will the SDK be buggy during the betas? Will compile times be slow due to Swift? Will the betas goad developers into filing thousands of Radars that Apple developers will never fix because the Apple Watch is a distraction for Apple’s developers in addition to seemingly every 3rd party developer as well?

Will I finally be able to connect and share moments with the ones I love all from the comfort of my own watch? Will more notifications buzzing on my arm finally make me feel important like I’ve always dreamed of? Will it at least get me more followers on Twitter? Jesus where is my Uber?

Will the Native Apple Watch SDK improve in any significant way computing for a large number of people? Will a luxury timekeeping computing device bring us together or drive us apart? Will a native SDK improve or harm that?

Will it help us understand complex problems? Will it help us devise solutions to these problems?

Will the SDK help me realize the destructive tendencies of a capitalist lifestyle? Will the SDK make developers want to buy a new Apple Watch every year because all these native apps slow their watches down and because well they have two arms anyway so what’s the harm in buying another? Will I think about the people living elsewhere in the world who manufactured the watch? Will I think about where “away” is when I throw the watch away? Will I think about how WWDC is celebrating me for changing the world despite my immense privilege enabling me to become a professional software developer and live in a celebrated bubble because me and people like me are like, real good at helping Apple sell more watches and iPhones?

Can you feel my heartbeat?

Google’s Advanced Technology and Projects Demonstrations

This week during Google I/O, we were given glimpses of some of the company’s ATAP projects. The two projects, both accompanied by short videos, focus on new methods of physical interaction.

Jacquard (video) is “a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces.” This allows clothing to effectively become a multitouch surface, presumably to control nearby computers like smartphones or televisions.

Soli (video) is “a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.” The chip recognizes small gestures made with your fingers or hands.

Let’s assume the technology shown in each demo works really well, which is certainly possible given Google’s track record for attracting incredible technical talent. It seems very clear to me Google has no idea what to do with these technologies, or if they do, they’re not saying. The Soli demo has people tapping interface buttons in the air and the Jacquard demo has people multitouching their clothes to scroll or make a phone call. Jacquard project founder Ivan Poupyrev even says it “is a blank canvas and we’re really excited to see what designers and developers will do with it.”

This is impressive technology and an important hardware step towards the future of interaction, but we’re getting absolutely no guidance on what this new kind of interaction should actually be, or why we’d use it. And the best we’re shown is a poor imitation of old computer interfaces. We’re implicitly being told existing computer interfaces are definitively the way we should manipulate the digital medium. We’re making an assumption and acting as if it were true without actually questioning it.

Emulating a button press or a slider scroll is not only disappointing but it’s also a step backwards. When we lose the direct connection with the device graphics being manipulated, the interaction becomes a weird remote control with no remote to tell us we’ve even made a click. This technology is useless if all we do with it is poorly emulate our existing steampunk interfaces of buttons and knobs and levers and sliders.

If you want inspiration for truly better human computer interfaces, I highly suggest checking out non-digital artists and craftspeople and their tools. Look at how a painter or an illustrator works. What does their environment look like? What tools do they have and how do they use them? How do they move their tools and what is the outcome of that? How much freedom do their tools afford them?

Look to musicians to see an expressive harmony between player and instrument. Look at the range of sound, volume, tempo a single person and single instrument can make. Look at how the hands and fingers are used, how the mouth and lungs are used, how the eyes are used. Look at how the instruments are positioned relative to the player’s body and relative to other players.

Look at how a dancer moves their body. Look at how every bone and muscle and joint is a possible degree of freedom. Look at how precise the movement can be controlled, how many possible poses and formations within a space. Look at how dancers interplay with each other, with the space, with the music, and with the audience.

And then look at the future being sold to you. Look at your hand outstretched in front of a smartphone screen lying on a table. Look at your finger and thumb clicking a pretend button to dismiss a dialog box. Look at your finger gliding over your sleeve to fast-forward a movie you’re watching on Netflix.

Is this the future you want? Do you want to twiddle your thumbs or do you want to dance with somebody?

Mobile (apparently) Isn’t Killing the Desktop  

Who knows, maybe I jumped the gun:

According to data from comScore, for example, the overall time spent online with desktop devices in the U.S. has remained relatively stable for the past two years. Time spent with mobile devices has grown rapidly in that time, but the numbers suggest mobile use is adding to desktop use, not subtracting from it.

(The article just repeats that paragraph about five times).

About the Futures

Another thing I wanted to mention about thinking about the future is this: it’s complicated. When it comes to the future we rarely really have the future; instead what we have are a bunch of different parts, like people or places or things or circumstances, etc., all relating and working together in a complex way. From this lens, “the future” really represents the state of a big ol’ system of everything we know. Lovely, isn’t it?

So if you want to “invent the future” you have to understand your actions, inventions, etc. do not exist in a vacuum (unless you work at Dyson), but instead will have an interplay with everything.

And this dictates a lot of what’s possible without really great effort. If I wanted to make my own smartphone, with the intention of making the number one smartphone in the world, I couldn’t really do it. Today, I can’t make a better smartphone than Apple. There are really only a few major players who can play in this game with any possibility of being the best.

I’m not trying to be defeatist here, but I am trying to point out you can’t just invent the future you want in all cases. The trick is to be aware of the kind of future you want. You can’t win at the smartphone game today. Apple itself couldn’t even be founded today. If you’re trying to compete with things like these then you have to realize you’re not really inventing the future but you’re inventing the present instead.

Be mindful of the systems that exist, or else you’ll be inventing tomorrow’s yesterday.

Setting My Ways

I’ve heard the possibly apocryphal advice that a person’s twenties are very important years in their life because those are the years when you effectively get “set in your ways,” when you develop the habits you’ll be living with for the rest of your life. As I’m a person who’s about to turn 27 whole years old, this has been on my mind lately. I’ve seen lots of people in their older years who are clearly set in their ways or their beliefs, who have no real inclination to change. What works for them will always continue to work for them.

If this is an inevitable part of aging, then I want to set myself in the best ways, with the best habits to carry me forward. Most of these habits will be personal, like taking good care of my health, keeping my body in good shape, and taking good care of my mind. But I think the most important habit for me is to always keep my mind learning, always changing and open to new ideas. That’s the most important habit I think I can develop over the next few years.

Keeping with this theme, I want to keep my mind open with technology as well. Already I feel it’s too easy to say “No, I’m not interested in that topic because how I do things works for me already,” mostly when it comes to technical things (I am loath to use Auto Layout or write software for the Apple Watch). I don’t like to update my operating systems until I can no longer use any apps, because what I use is often less buggy than the newest release.

These habits I’m less concerned about because newer OS releases and newer APIs in many ways seem like a step backwards to me (although I realize this might just be a sign of me already set in my ways!). I’m more concerned about the way I perceive software in a more abstract way.

To me, “software” has always been something that runs as an app or a website, on a computer with a keyboard and mouse. As a longtime user of and developer for smartphones, I know software runs quite well on these devices as well, but it always feels subpar to me. In my mind, it’s hard to do “serious” kinds of work on them. I know iPhones and iPads can be used for creation and “serious” work, but I also know doing the same tasks typically done on a desktop is much more arduous on a touch screen.

Logically, I know this is a dead end. I know many people are growing up with smartphones as their only computer. I know desktops will seem ancient to them. I know in many countries, desktop computers are almost non-existent. I know there are people writing 3000-word school essays on their phones, and I know these sorts of things will only increase over time. But it defies my common sense.

There are all kinds of ideas foreign to my common sense coming out of mobile software these days. Many popular apps in China exist in some kind of text messaging / chat user interface and messaging in general is changing the way people are interacting with the companies and services providing their software in other places, too. Snapchat is a service I haven’t even tried to understand, but they are rethinking how videos work on a mobile phone, while John Herrman at The Awl describes a potential app-less near-future.

As long as I hold on to my beliefs that software exists as an app or website on a device with a keyboard and mouse, I’m doomed to living in a world left behind.

I’ve seen it happen to people I respect, too. I love the concept of Smalltalk (and I’ll make smalltalk about Smalltalk to anyone who’ll listen) but I can’t help but feel it’s a technological ideal for a world that no longer exists. In some ways, it feels like we’ve missed the boat on using the computer as a powerful means of expression; instead what we got is a convenient means of entertainment.

My point isn’t really about any particular trend. My point is to remind myself that what “software” is is probably always going to remain in flux, tightly related to things like social change or the way of the markets. Software evolves and changes over time, but that evolution doesn’t necessarily line up with progress, it’s just different.

Alan Kay said the best way to predict the future is to invent it. But I think you need to understand that the future’s going to be different, first.

Stalking Your Friends with Facebook Messenger  

Great research by Aran Khanna:

As you may know, when you send a message from the Messenger app there is an option to send your location with it. What I realized was that almost every other message in my chats had a location attached to it, so I decided to have some fun with this data. I wrote a Chrome extension for the Facebook Messenger page (https://www.facebook.com/messages/) that scrapes all this location data and plots it on a map. You can get this extension here and play around with it on your message data. […]

This means that if a few people who I am chatting with separately collude and send each other the locations I share with them, they would be able to track me very accurately without me ever knowing.

Even if you know how invasive Facebook is, this still seems shocking. Why?

These sorts of things always seem shocking because we don’t usually see them in the aggregate. Sending your location one message at a time seems OK, but it’s not until you see all the data together that it becomes scary.

We’re used to seeing data moment to moment, as if looking through a pinhole. We’re oblivious to the larger systems at work, and it’s not until we step up the ladder of abstraction that we start to see a bigger picture.
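
To make the aggregation point concrete, here’s a toy sketch (hypothetical data and types, not the extension’s actual code) of how individually innocuous coordinates add up to a picture of someone’s routine:

    import Foundation

    struct Message {
        let sentAt: Date
        let latitude: Double
        let longitude: Double
    }

    // Bucket coordinates to roughly 100-metre precision so nearby points
    // count as the same place, then tally how often each place shows up.
    func frequentPlaces(in messages: [Message]) -> [String: Int] {
        var counts: [String: Int] = [:]
        for message in messages {
            let lat = (message.latitude * 1000).rounded() / 1000
            let lon = (message.longitude * 1000).rounded() / 1000
            counts["\(lat),\(lon)", default: 0] += 1
        }
        return counts
    }

One message’s location is trivia; thousands of them, bucketed and counted like this, sketch out a home, an office, and the hours you keep.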

Say what you will of the privacy issues inherent in this discovery, but I think the bigger problem is our collective inability to understand and defend ourselves from these sorts of systems.

The Plot Against Trains  

Adam Gopnik:

What is less apparent, perhaps, is that the will to abandon the public way is not some failure of understanding, or some nearsighted omission by shortsighted politicians. It is part of a coherent ideological project. As I wrote a few years ago, in a piece on the literature of American declinism, “The reason we don’t have beautiful new airports and efficient bullet trains is not that we have inadvertently stumbled upon stumbling blocks; it’s that there are considerable numbers of Americans for whom these things are simply symbols of a feared central government, and who would, when they travel, rather sweat in squalor than surrender the money to build a better terminal.” The ideological rigor of this idea, as absolute in its way as the ancient Soviet conviction that any entering wedge of free enterprise would lead to the destruction of the Soviet state, is as instructive as it is astonishing. And it is part of the folly of American “centrism” not to recognize that the failure to run trains where we need them is made from conviction, not from ignorance.

Yeah, what good has the American government ever done for America?

The Web and HTTP  

John Gruber repeating his argument that because mobile apps usually use HTTP, that’s still the web:

I’ve been making this point for years, but it remains highly controversial. HTML/CSS/JavaScript rendered in a web browser — that part of the web has peaked. Running servers and client apps that speak HTTP(S) — that part of the web continues to grow and thrive.

But I call bullshit. HTTP is not what gives the web its webiness. Sure, it’s a part of the web stack, but so is TCP/IP. The web could have been implemented over any number of protocols and it wouldn’t have made a big difference.

What makes the web the web is the open connections between documents or “apps,” the fact that anybody can participate on a mostly-agreed-upon playing field. Things like Facebook Instant Articles or even Apple’s App Store are closed up, do not allow participation by every person or every idea, and don’t really act like a “web” at all. And they could have easily been built on FTP or somesuch and it wouldn’t make a lick of difference.

It may well be the “browser web” John talks about has peaked, but I think it’s incorrect to say the web is still growing because apps are using HTTP.

The Likely Cause of Addiction Has Been Discovered, and It Is Not What You Think  

Johann Hari:

One of the ways this theory was first established is through rat experiments – ones that were injected into the American psyche in the 1980s, in a famous advert by the Partnership for a Drug-Free America. You may remember it. The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.

The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”

But in the 1970s, a professor of Psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat-food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?