Speed of Light
Modern Medicine  

A great essay by Jonathan Harris on how software shapes us, and the responsibilities latent in our industry:

We inhabit an interesting time in the history of humanity, where a small number of people, numbering not more than a few hundred, but really more like a few dozen, mainly living in cities like San Francisco and New York, mainly male, and mainly between the ages of 22 and 35, are having a hugely outsized effect on the rest of our species.

Through the software they design and introduce to the world, these engineers transform the daily routines of hundreds of millions of people. Previously, this kind of mass transformation of human behavior was the sole domain of war, famine, disease, and religion, but now it happens more quietly, through the software we use every day, which affects how we spend our time, and what we do, think, and feel.

In this sense, software can be thought of as a new kind of medicine, but unlike medicine in capsule form that acts on a single human body, software is a different kind of medicine that acts on the behavioral patterns of entire societies.

The designers of this software call themselves “software engineers”, but they are really more like social engineers.

Through their inventions, they alter the behavior of millions of people, yet very few of them realize that this is what they are doing, and even fewer consider the ethical implications of that kind of power.

(via Andy)

The Web Doozy

Oof this whole demise of the web is such a doozy. I don’t think it necessarily has much to do with the technology, though (though the web technology isn’t really the greatest to begin with). But the web is and has always been a social phenomenon — that’s why people got computers! They wanted computers to get on the web, and they wanted to get on the web because everyone else was getting on the web! It was novel and it was exciting and it was entertaining. It brought the world to your desk — emails, newses, games.

Computers before the web were basically relegated to whatever software you bought (from a store, usually). They could do lots of things, but it was very much a “one at a time” thing. Then the web comes along and suddenly you’ve got endlessness in front of you. I think in many ways the web is (and has always been) a lot closer to TV than it has been to “traditional” software. The web is your “there’s 400 channels and nothin’s on” situation, except there are infinite channels and still nothing is on.

And so I think it’s really like TV because it encourages almost twitchy behaviour, via links. “Hey what’s this? and this? and that? and these?” And it’s very entertaining and stimulating (or at least minimum viable stimulation). Much like TV, where when one show ends another comes on right after it, the web is neverending.

And all of that is to say nothing of the content of the web! Which I think has become more and more like TV lately. The links-as-channel-changers really caught on and so every website fights to keep you around. Also, as bandwidth increases, photos and video become easier to use, and so now it’s easier to look at pictures and, much like TV, be entertained by video. And now you can have many videos playing at once in a feed stream and O the humanity, the people making the stuff don’t really care very much so long as you’re watching it, however they’re making money off it.

Reality TV and social networks feel very contemporary to me. Facebook (and really any social network) looks quite a lot like reality TV if you want it to look like reality TV. “Haha, I know Jersey Shore is stupid, I just watch it to make fun of it,” is even more compelling when it’s your idiot cousin on Facebook. It’s The Truman Show except you’re the one starring in it and everybody else you know is also starring in it and the door at the edge of the ocean is actually open but you can’t walk through it because you know you can’t watch once you’re outside. And but so now that everybody’s in this reality TV show nobody cares where the set is; if it’s on the web or if it’s on an app it makes no difference to them. They don’t see it as a democracy versus a dictatorship, they see it as VHS versus DVD.

The good news is anyone (with enough money) can remake the web. The bad news is almost nobody cares or wants to.

It only takes 40 seconds to watch every single line spoken by people of color in “Her”  

William Hughes:

The film industry’s problems with representing people of color in mainstream movies are well known and documented, but it can still be shocking when someone finds a way to display them in a new and meaningful way. Cue actor Dylan Marron—known to podcasting fans as Carlos the scientist from the popular Welcome To Night Vale—who recently began posting videos on his YouTube channel of every line spoken by people of color in various popular films. And, surprise, surprise, it turns out that they’re not very long videos at all.

Her has fewer than 46 seconds of dialogue from people of colour.

The Life Cycle of Programming Languages  

Betsy Haibel:

Rails, a popular Ruby-based web framework, was born in 2005. It billed itself as an “opinionated” framework; creator David Heinemeier Hansson likes to characterize Rails as “omakase,” his culturally appropriative way of saying his technical decisions are better than anyone else’s.

Rails got its foothold by being the little outsider that stood against enterprise Java’s vast monoliths and excesses: programming excesses, workflow excesses, and certainly its excesses of corporate politesse. In two representative 2009 pieces, DHH described himself as a “R-Rated Individual,” who believed innovation required “a few steps over [the line].” The line, in this case, was softcore pornography presented in a talk of Matt Aimonetti’s; Aimonetti did not adequately warn for sexual content, and was largely supported in his mistakes by the broader community. Many women in Ruby continue to view the talk’s fallout as a jarring, symbolic wound. […]

Technical affiliations, as Yang and Rabkin point out, are often determined by cultural signaling as much or more than technical evaluation. When Rails programmers fled enterprise Java, they weren’t only fleeing AbstractBeanFactoryCommandGenerators, the Kingdom of Nouns. They were also fleeing HR departments, “political correctness,” structure, process, heterogeneity. The growing veneer of uncool. Certainly Rails’ early marketing was more anti-enterprise, and against how Java was often used, than it was anti-Java — while Java is more verbose, the two languages are simply not that different. Rails couldn’t sell itself as not object-oriented; it was written in an OO language. Instead, it sold itself as better able to leverage OO. While that achievement sounds technical on the surface, Rails’ focus on development speed and its attacks on enterprise architects’ toys were fundamentally attacks on the social structures of enterprise software development. It was selling an escape from an undesirable culture; its technical advantages existed to enable that escape.

It’s easy to read these quotes and see this as an isolated example, but it’s not. It’s evidence of larger systems and structures at play, and Rails is but one example of that.

More and more I realize our technology doesn’t exist in a vacuum. Who creates it and who benefits from it usually depend on large social systems, and we need to be mindful of those systems and work to stop them from marginalizing groups.

You could do worse than reading Model View Culture’s outlook on that.

“I’m 28, I just quit my tech job, and I never want another job again”  


I don’t have a problem with trying to reinvent the wheel for its own sake, just to see if I can make a better wheel. But that’s not what we were doing. We were reinventing the wheel when the goal was to build a car, and the existing wheel was just too round or not round enough, and while we’re at it, let’s rethink that whole windshield idea, and I don’t know what a carburetor is so we probably don’t need it, and wow this is taking a long time so maybe we should hire a hundred more people? You don’t even get the satisfaction of tinkering with the wheel, because the car is so far behind schedule that your wheel will be considered finished as soon as it rolls well enough.

Sustainable Indie Apps

Yesterday Brent Simmons wrote about how unsustainable it is to be an indie iOS developer:

Yes, there are strategies for making a living, and nobody’s entitled to anything. But it’s also true that the economics of a thing may be generally favorable or generally unfavorable — and the iOS App Store is, to understate the case, generally unfavorable. Indies don’t have a fighting chance.

And that we might be better off doing indie iOS as a labour of love instead of a sustainable business:

Write the apps you want to write in your free time and out of love for the platform and for those specific apps. Take risks. Make those apps interesting and different. Don’t play it safe. If you’re not expecting money, you have nothing to lose.

He suggests one reason for the unsustainability in the App Store is the fact that it’s crowded, that it’s too easy to make apps:

You might think that development needs to get easier in order to make writing apps economically viable. The problem is that we’ve seen what happens when development gets easier — we get a million apps on the iOS App Store. The easier development gets, the more apps we see.

Allen Pike, responding to Brent’s article, brings up a similar sentiment (which Brent later quoted):

However, when expressing frustration with the current economics of the App Store, we need to consider the effect of this mass supply of enthusiastic, creative developers. As it gets ever easier to write apps, and we’re more able to express our creativity by building apps, the market suffers more from the economic problems of other creative fields.

I think this argument misses the problem and misses it in a big way. It has little to do with how many people are making apps (in business, this is known as “competition” and it’s an OK thing!). The problem is that people aren’t paying for apps because people don’t value apps, generally speaking (I’ve written about this before, too).

You might ask, “why don’t they value apps?” but I think if you turn the question around to “why aren’t apps valuable to people?” it becomes a little easier to see the problem. Is it really so unbelievable that the app you’re trying to sell for 99 cents doesn’t provide that much value to your customers? Your customers don’t care about you making it up in volume, they don’t care about the reach of the App Store, they only care about the value of your software.

(Brief interlude to say that of course, “value” is not the only (or even, necessarily, the most important) factor when it comes to a capitalist market. We’re taught that customers decide solely based on value, but they are obviously continuously manipulated by many factors, including advertising, social pressures and structures, etc. But I’m going to give you the benefit of the doubt that you actually care about creating value for people and run with that assumption.)

I want to be clear I’m not suggesting price determines value (it may influence perceived value, but it doesn’t determine it entirely), but I’m saying if you’re pricing your app at 99 cents, you’re probably doing so because your app doesn’t provide very much value. Taking a 99 cent app and pricing it at $20 probably isn’t going to significantly increase its value in the eyes of your customers.

What I am saying is if you want a sustainable business, you’ve got to provide value for people. Your app needs to be worth people paying enough money that you can keep your lights on. Omni can get away with charging $50 for their iPhone apps because the people who use them think they’re worth that price. Something like OmniFocus isn’t comparable to a 99 cent app — it’s much more sophisticated and simply does more than most apps.

Value doesn’t have to come from having loads of features, but features might help get you there. Most people probably wouldn’t say “GitHub is worth paying for because it has a ton of features” but they might say “GitHub is worth paying for because I couldn’t imagine writing software without it.”

These aren’t new ideas. My friend Joe has been writing and talking about this for a long time (he’s even running a conference about it!), for example.

As Curtis Herbert points out, it’s probably never going to be easy to run your own indie iOS shop. But that doesn’t mean it’s impossible. It just means you’ve got to build a business along the way.

“Real”

There’s something unsettling about the word “real” when used in phrases like “real job” or “real adult” or “real programming language.” Although we often use it without bad intent, it often ends up harming and belittling the people on the receiving end.

Saying “real something” is implicitly saying other things aren’t real enough or aren’t valid in some way. We often associate being “real” with being professional or paid.

It’s kind of demeaning to say a blogger isn’t a “real writer.” What’s often meant instead is the blogger is not a professional or paid writer, but that doesn’t mean what they write is any less real.

The good news is if you write words then you are a real writer!

As someone who has worked on programming languages for learning at both Hopscotch and Khan Academy, I’ve heard the term “real programming” more than I’d like to admit.

Never in my life have I heard so many paid, professional programmers demean a style of programming so frequently as they do programming languages for learning. I’ve even been told, vehemently, by a cofounder of a learn-to-JavaScript startup: “we don’t believe in teaching toy languages, we only want to teach people real programming.”

What is a real programming language anyway? I think if something is meant to be educational it’s often immediately dismissed by many programmers. They’ll often say it’s not a real language because you can’t write “real” programs in it, where “real” typically means “you type code” or “you can write an operating system in it.”

The good news is if your program is Turing complete then it is real programming!

Our history has shown time and time again that the things we don’t consider “real” usually become legitimized in good time. Why do we exclude people and keep our minds so narrow when it comes to the things we love?

Special thanks to Steph Jang for the conversation that inspired this.

College Rape Prevention Program Proves a Rare Success  

Jan Hoffman for the New York Times:

In a randomized trial, published in The New England Journal of Medicine, first-year students at three Canadian campuses attended sessions on assessing risk, learning self-defense and defining personal sexual boundaries. The students were surveyed a year after they completed the intervention.

The risk of rape for 451 women randomly assigned to the program was about 5 percent, compared with nearly 10 percent among 442 women in a control group who were given brochures and a brief information session. […]

Other researchers praised the trial as one of the largest and most promising efforts in a field pocked by equivocal or dismal results. But some took issue with the philosophy underlying the program’s focus: training women who could potentially be victims, rather than dealing with the behavior and attitudes of men who could potentially be perpetrators.

Awareness Is Overrated  

Jesse Singal:

We’re living in something of a golden age of awareness-raising. Cigarette labels relay dire facts about the substances contained within. Billboards and PSAs and YouTube videos highlight the dangers of fat and bullying and texting while driving. Hashtag activism, the newest awareness-raising technique, abounds: After the Isla Vista shootings, many women used the #YesAllWomen hashtag to relate their experiences with misogyny; and a couple months before that, #CancelColbert brought viral attention to some people’s anger with Stephen Colbert over what they saw as a racist joke. Never before has raising awareness about various dangers and struggles been such a visible part of everyday life and conversation.

But the funny part about all of this awareness-raising is that it doesn’t accomplish all that much. The underlying assumption of so many attempts to influence people’s behavior — that they make bad choices because they lack the information to empower them to do otherwise — is, except in a few cases, false. And what’s worse, awareness-raising done in the wrong way can actually backfire, encouraging the negative activities in question. One of the favorite pastimes of a certain brand of concerned progressive, then, may be much more effective at making them feel good about themselves than actually improving the world in any substantive way.

Ash Furrow’s WWDC 2015 Keynote Reactions  

Ash Furrow on Apple’s shift towards the term “Engineer”:

In my native land of Canada, the term “Engineer” is a protected term, like “Judge” or “Doctor” – I can’t just go out and claim to be a software engineer. I am a (lowly) software developer.

What separates engineering from developing?

In my opinion, discipline. […]

Software developers make code. Software engineers make products.

I would actually contend the opposite. I would say software engineers make the code and software developers make the product. I’d also like to point out I’m not trying to make a value judgement on either part of the job, just the distinction.

Developers are those developing the product. To me that includes things like design, resources, planning, etc. A software engineer (in the non-protected-word sense) is somebody specializing specifically in the act of building the code. This includes things like designing systems and “architecture,” but their primary area of focus is eventually implementing those systems in code. In this view, I see a software engineer as belonging to the set of software developers.

This is why at WWDC we see talks about design, prototyping, audio, and accessibility. These are roles in the realm of software development, right alongside engineering.

A Partial List of Questions About the Native Apple Watch SDK

Marco Arment wrote a partial list of questions about the native Apple Watch SDK last week, and I thought I’d do the same and add mine below:

Will developers waste months of their time developing tiny widgets for every imaginable kind of app? Are they making watch versions of iPhone apps that really should have just been web pages in the first place? Will the widgets show almost no information thanks to the tiny screen size and the immutable laws of physics?

Will the SDK be buggy during the betas? Will compile times be slow due to Swift? Will the betas goad developers into filing thousands of Radars that Apple’s developers will never fix, because the Apple Watch is a distraction for them in addition to seemingly every third-party developer as well?

Will I finally be able to connect and share moments with the ones I love all from the comfort of my own watch? Will more notifications buzzing on my arm finally make me feel important like I’ve always dreamed of? Will it at least get me more followers on Twitter? Jesus where is my Uber?

Will the native Apple Watch SDK improve computing in any significant way for a large number of people? Will a luxury timekeeping computing device bring us together or drive us apart? Will a native SDK improve or harm that?

Will it help us understand complex problems? Will it help us devise solutions to these problems?

Will the SDK help me realize the destructive tendencies of a capitalist lifestyle? Will the SDK make developers want to buy a new Apple Watch every year because all these native apps slow their watches down and because well they have two arms anyway so what’s the harm in buying another? Will I think about the people living elsewhere in the world who manufactured the watch? Will I think about where “away” is when I throw the watch away? Will I think about how WWDC is celebrating me for changing the world despite my immense privilege enabling me to become a professional software developer and live in a celebrated bubble because me and people like me are like, real good at helping Apple sell more watches and iPhones?

Can you feel my heartbeat?

Google’s Advanced Technology and Projects Demonstrations

This week during Google I/O, we were given glimpses of some of the company’s ATAP projects. The two projects, both accompanied by short videos, focus on new methods of physical interaction.

Jacquard (video) is “a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces.” This allows clothing to effectively become a multitouch surface, presumably to control nearby computers like smartphones or televisions.

Soli (video) is “a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.” The chip recognizes small gestures made with your fingers or hands.

Let’s assume the technology shown in each demo works really well, which is certainly possible given Google’s track record for attracting incredible technical talent. It seems very clear to me Google has no idea what to do with these technologies, or if they do, they’re not saying. The Soli demo has people tapping interface buttons in the air and the Jacquard demo has people multitouching their clothes to scroll or make a phone call. Jacquard project founder Ivan Poupyrev even says it “is a blank canvas and we’re really excited to see what designers and developers will do with it.”

This is impressive technology and an important hardware step towards the future of interaction, but we’re getting absolutely no guidance on what this new kind of interaction should actually be, or why we’d use it. And the best we’re shown is a poor imitation of old computer interfaces. We’re implicitly being told existing computer interfaces are definitively the way we should manipulate the digital medium. We’re making an assumption and acting as if it were true without actually questioning it.

Emulating a button press or a slider scroll is not only disappointing but it’s also a step backwards. When we lose the direct connection with the graphics being manipulated, the interaction becomes a weird remote control, with nothing to tell us we’ve even made a click. This technology is useless if all we do with it is poorly emulate our existing steampunk interfaces of buttons and knobs and levers and sliders.

If you want inspiration for truly better human computer interfaces, I highly suggest checking out non-digital artists and craftspeople and their tools. Look at how a painter or an illustrator works. What does their environment look like? What tools do they have and how do they use them? How do they move their tools and what is the outcome of that? How much freedom do their tools afford them?

Look to musicians to see an expressive harmony between player and instrument. Look at the range of sound, volume, tempo a single person and single instrument can make. Look at how the hands and fingers are used, how the mouth and lungs are used, how the eyes are used. Look at how the instruments are positioned relative to the player’s body and relative to other players.

Look at how a dancer moves their body. Look at how every bone and muscle and joint is a possible degree of freedom. Look at how precisely the movement can be controlled, how many possible poses and formations fit within a space. Look at how dancers interplay with each other, with the space, with the music, and with the audience.

And then look at the future being sold to you. Look at your hand outstretched in front of a smartphone screen lying on a table. Look at your finger and thumb clicking a pretend button to dismiss a dialog box. Look at your finger gliding over your sleeve to fast-forward a movie you’re watching on Netflix.

Is this the future you want? Do you want to twiddle your thumbs or do you want to dance with somebody?

Mobile (apparently) Isn’t Killing the Desktop  

Who knows, maybe I jumped the gun:

According to data from comScore, for example, the overall time spent online with desktop devices in the U.S. has remained relatively stable for the past two years. Time spent with mobile devices has grown rapidly in that time, but the numbers suggest mobile use is adding to desktop use, not subtracting from it.

(The article just repeats that paragraph about five times).

About the Futures

Another thing I wanted to mention about thinking about the future is this: it’s complicated. When it comes to the future, we rarely have one single future; instead what we have are a bunch of different parts, like people or places or things or circumstances, etc., all relating and working together in a complex way. Through this lens, “the future” really represents the state of a big ol’ system of everything we know. Lovely, isn’t it?

So if you want to “invent the future” you have to understand your actions, inventions, etc. do not exist in a vacuum (unless you work at Dyson), but instead will have an interplay with everything.

And this constrains a lot of what’s possible without really great effort. If I wanted to make my own smartphone, with the intention of making the number one smartphone in the world, I couldn’t really do it. Today, I can’t make a better smartphone than Apple. There are really only a few major players who can play in this game with a possibility of being the best.

I’m not trying to be defeatist here, but I am trying to point out you can’t just invent the future you want in all cases. The trick is to be aware of the kind of future you want. You can’t win at the smartphone game today. Apple itself couldn’t even be founded today. If you’re trying to compete with things like these then you have to realize you’re not really inventing the future but you’re inventing the present instead.

Be mindful of the systems that exist, or else you’ll be inventing tomorrow’s yesterday.

Setting My Ways

I’ve heard the possibly apocryphal advice that a person’s twenties are very important years in their life because those are the years when you effectively get “set in your ways,” when you develop the habits you’ll be living with for the rest of your life. As I’m a person who’s about to turn 27 whole years old, this has been on my mind lately. I’ve seen lots of people in their older years who are clearly set in their ways or their beliefs, who have no real inclination to change. What works for them will always continue to work for them.

If this is an inevitable part of aging, then I want to set myself in the best ways, with the best habits to carry me forward. Most of these habits will be personal, like taking good care of my health, keeping my body in good shape, and taking good care of my mind. But I think the most important habit for me is to always keep my mind learning, always changing and open to new ideas. That’s the most important habit I think I can develop over the next few years.

Keeping with this theme, I want to keep my mind open with technology as well. Already I’m feeling it’s too easy to say “No, I’m not interested in that topic because how I do things works for me already,” mostly when it comes to technical things (I am loath to use Auto Layout or write software for the Apple Watch). I don’t like to update my operating systems until I can no longer use any apps, because what I use is often less buggy than the newest release.

These habits I’m less concerned about, because newer OS releases and newer APIs in many ways seem like a step backwards to me (although I realize this might just be a sign that I’m already set in my ways!). I’m more concerned with the way I perceive software in a more abstract sense.

To me, “software” has always been something that runs as an app or a website, on a computer with a keyboard and mouse. As a longtime user of and developer for smartphones, I know software runs quite well on these devices as well, but it always feels subpar to me. In my mind, it’s hard to do “serious” kinds of work there. I know iPhones and iPads can be used for creation and “serious” work, but I also know doing the same tasks typically done on a desktop is much more arduous on a touch screen.

Logically, I know this is a dead end. I know many people are growing up with smartphones as their only computer. I know desktops will seem ancient to them. I know in many countries, desktop computers are almost non-existent. I know there are people writing 3000-word school essays on their phones, and I know these sorts of things will only increase over time. But it defies my common sense.

There are all kinds of ideas foreign to my common sense coming out of mobile software these days. Many popular apps in China exist inside some kind of text-messaging / chat user interface, and messaging in general is changing the way people interact with the companies and services providing their software elsewhere, too. Snapchat is a service I haven’t even tried to understand, but they are rethinking how videos work on a mobile phone, while John Herrman at The Awl describes a potential app-less near-future.

As long as I hold on to my beliefs that software exists as an app or website on a device with a keyboard and mouse, I’m doomed to living in a world left behind.

I’ve seen it happen to people I respect, too. I love the concept of Smalltalk (and I’ll make smalltalk about Smalltalk to anyone who’ll listen) but I can’t help but feel it’s a technological ideal for a world that no longer exists. In some ways, it feels like we’ve missed the boat on using the computer as a powerful means of expression; instead what we got is a convenient means of entertainment.

My point isn’t really about any particular trend. My point is to remind myself that what “software” is is probably always going to remain in flux, tightly related to things like social change and the ways of the markets. Software evolves and changes over time, but that evolution doesn’t necessarily line up with progress; it’s just different.

Alan Kay said the best way to predict the future is to invent it. But I think you need to understand that the future’s going to be different, first.

Stalking Your Friends with Facebook Messenger  

Great research by Aran Khanna:

As you may know, when you send a message from the Messenger app there is an option to send your location with it. What I realized was that almost every other message in my chats had a location attached to it, so I decided to have some fun with this data. I wrote a Chrome extension for the Facebook Messenger page (https://www.facebook.com/messages/) that scrapes all this location data and plots it on a map. You can get this extension here and play around with it on your message data. […]

This means that if a few people who I am chatting with separately collude and send each other the locations I share with them, they would be able to track me very accurately without me ever knowing.

Even if you know how invasive Facebook is, this still seems shocking. Why?

These sorts of things always seem shocking because we don’t usually see them in the aggregate. Sending your location one message at a time seems OK, but it’s not until you see all the data together that it becomes scary.

We’re used to seeing data moment to moment, as if looking through a pinhole. We’re oblivious to the larger systems at work, and it’s not until we step up the ladder of abstraction that we start to see a bigger picture.

Say what you will of the privacy issues inherent in this discovery, but I think the bigger problem is our collective inability to understand and defend ourselves from these sorts of systems.

The Plot Against Trains  

Adam Gopnik:

What is less apparent, perhaps, is that the will to abandon the public way is not some failure of understanding, or some nearsighted omission by shortsighted politicians. It is part of a coherent ideological project. As I wrote a few years ago, in a piece on the literature of American declinism, “The reason we don’t have beautiful new airports and efficient bullet trains is not that we have inadvertently stumbled upon stumbling blocks; it’s that there are considerable numbers of Americans for whom these things are simply symbols of a feared central government, and who would, when they travel, rather sweat in squalor than surrender the money to build a better terminal.” The ideological rigor of this idea, as absolute in its way as the ancient Soviet conviction that any entering wedge of free enterprise would lead to the destruction of the Soviet state, is as instructive as it is astonishing. And it is part of the folly of American “centrism” not to recognize that the failure to run trains where we need them is made from conviction, not from ignorance.

Yeah, what good has the American government ever done for America?

The Web and HTTP  

John Gruber repeating his argument that because mobile apps usually use HTTP, that’s still the web:

I’ve been making this point for years, but it remains highly controversial. HTML/CSS/JavaScript rendered in a web browser — that part of the web has peaked. Running servers and client apps that speak HTTP(S) — that part of the web continues to grow and thrive.

But I call bullshit. HTTP is not what gives the web its webiness. Sure, it’s a part of the web stack, but so is TCP/IP. The web could have been implemented over any number of protocols and it wouldn’t have made a big difference.

What makes the web the web is the open connections between documents or “apps,” the fact that anybody can participate on a mostly-agreed-upon playing field. Things like Facebook Instant Articles or even Apple’s App Store are closed up, do not allow participation by every person or every idea, and don’t really act like a “web” at all. And they could have easily been built on FTP or somesuch and it wouldn’t make a lick of difference.

It may well be the “browser web” John talks about has peaked, but I think it’s incorrect to say the web is still growing because apps are using HTTP.

The Likely Cause of Addiction Has Been Discovered, and It Is Not What You Think  

Johann Hari:

One of the ways this theory was first established is through rat experiments – ones that were injected into the American psyche in the 1980s, in a famous advert by the Partnership for a Drug-Free America. You may remember it. The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.

The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”

But in the 1970s, a professor of Psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat-food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?

The Best and Worst Places to Grow Up: How Your Area Compares  

New dynamic article by Gregor Aisch, Eric Buth, Matthew Bloch, Amanda Cox and Kevin Quealy for the New York Times about income and geography:

Consider Brooklyn, our best guess for where you might be reading this article. (Feel free to change to another place by selecting a new county on the map or using the search boxes throughout this page.)

The page guesses where you live (at least within America; I don’t know about the rest of the world) and updates its content dynamically to reflect that. The article is shaped around what’s best for the reader.

More like this please.

The Eternal Return of BuzzFeed  

Adrienne LaFrance and Robinson Meyer on BuzzFeed and the modern history of media empires:

BuzzFeed is a successful company. And it is not only that: BuzzFeed is the rare example of a news organization that changes the way the news industry works. While it may not turn the largest profits or get the biggest scoops, it is shaping how other organizations sell ads, hire employees, and approach their work. BuzzFeed is the most influential news organization in America today because the Internet is the most influential medium—and, in some crucial ways, BuzzFeed demonstrates an understanding of that medium better than anyone else. […]

Time’s success sprang from a content innovation matched with a keen bet on demography. Its target audience was the average Fitzgerald protagonist, or, at least, his classmate. “No publication has adapted itself to the time which busy men are able to spend on simply keeping informed,” wrote the magazine’s two founders in a manifesto. It was for this audience, too, that the magazine mixed its reports on global affairs with briefs on culture, fashion, business, and politics. The overall feel of a Time issue was a feeling of omniscience: “Now, you, young man of industry, know it all.”

“Just For Girls?” How Gender Divisions Trickle Down  

Chuck Wendig on DC Comics’ new girl superheroes:

All the toxicity between the gender divide? It starts here. It starts when they’re kids. It begins when you say, “LOOK, THERE’S THE GIRL STUFF FOR THE GIRLS OVER THERE, AND THE BOY STUFF FOR THE BOYS OVER HERE.” And then you hand them their pink hairbrushes and blue guns and you tell your sons, “You can’t play with the pink hairbrush because GIRL GERMS yucky ew you’re not weird are you, those germs might make you a girl,” and then when the boy wants to play with the hairbrush anyway, he does and gets his ass kicked on the bus and gets called names like sissy or pussy or some homophobic epithet because parents told their kids that girl stuff is for girls only, which basically makes the boy a girl. And the parents got that lesson from the companies that made the hairbrush because nowhere on the packaging would it ever show a boy brushing hair or a girl brushing a boy’s hair. And on the packaging of that blue gun is boys, boys, boys, grr, men, war, no way would girls touch this stuff. Duh! Girls aren’t boys! No guns for you. […]

Now, this runs the risk of sounding like the plaintive wails of a MAN SPURNED, wherein I weep into the open air, “WHAT ABOUT ME, WHAT ABOUT US POOR MENS,” and that’s not my point, I swear. I don’t want DC or the toy companies to cater to my boy. I just don’t want him excluded from learning about and dealing with girls. I want society to expect him to actually learn about girls and be allowed to like them — not as romantic targets later in life, but as like, awesome ass-kicking complicated equals. As real people who are among him rather than separate from him.

More like this please.

Do Not Disturb

Daring Fireball recently linked to a piece by Steven Levy on “The Age of Notifications,” where Levy describes our current state/spate of notifications and the Apple Watch:

This was delivered to me in the standard message format, no different than a New York Times alert informing me a building two blocks from my apartment has exploded, or an iChat message that my sister is desperately trying to reach me. Please note that I am not a blood relative of B.J. — sorry, Melvin — Upton, nor am I even a fan of the Atlanta Braves. In other words…this could have waited. Nonetheless, MLB.com At Bat apparently deemed this important enough to broadcast to hundreds of thousands of users who had earlier clicked, with hardly a second thought, on a dialogue box asking if they wanted to receive notifications from Major League Baseball. No matter what these users were doing — enduring a meeting, playing basketball, presenting to a book club, daydreaming, watching a movie, enjoying a family meal, painting their masterpiece, proposing marriage, interviewing a job candidate, having sex, or any combination thereof — the news of The Melvin Renaming (the next Robert Ludlum novel?) penetrated their individual radars, urging them to Look at me! Now! Even if they kept the phone stashed, the simple fact that there was an alert burrowed in their brains, keeping them just a little off balance until they finally picked up the phone to discover what the buzz was about.

The Melvin Renaming was just one interruption among billions in what now is unquestionably the Age of Notifications. As our reliance on electronically delivered information has increased, the cascade of brief urgent pointers to that information has been funneled into our devices, lighting our lock screens with these brief dispatches. Rarely does an app neglect to ask you to opt-in to these messages. Most often — since you see the dialogue box when you are entering your honeymoon stage with the app, just after consummation — you say yes. […]

So what’s the solution? We need a great artificial intelligence effort to comb through our information, assess the urgency and relevance, and use a deep knowledge of who we are and what we think is important to deliver the right notifications at the right time. As time goes on, we will trust such a system to effectively filter all our information and dole it out just as needed.

Gruber adds:

I think he’s on to something here: some sort of AI for filtering notifications does seem useful. I can imagine helping it by being able to give (a) a thumbs-down to a notification that went through to your watch that you didn’t want to see there; and (b) a thumbs-up to a notification on your phone or PC that wasn’t filtered through to your more personal devices but which you wish had been.

But: this sounds too much like spam filtering to me. True spam is unasked-for. Notifications are all things for which you explicitly opted in, and can opt out of at any moment.

First of all, I think it sounds effectively like spam filtering because these notifications are effectively like spam. Although we technically opt in to them, we’re often coerced into doing so. As Levy said in the quoted passage, we’re often asked at a time when we’re feeling good about the app (after first downloading it, or after accomplishing a task; yes, developers opportunistically pop these up to get more people to agree to them). App developers know when it’s best to get you to agree, and they know notifications are an effective communication channel for “engaging” (i.e., advertising to) you.

These notifications are kind of like junk food. They’re delicious but dangerous. A little bit is fine, but too much is bad for you. While you can say junk food junkies are “opting in” to eating the unhealthy food, are they really making a choice? Or is the food literally irresistible to them?

Secondly, if this recent interview in Wired is to be believed, a deluge of notifications is one of the primary motivations for the development of the Apple Watch. Am I expected to pay $350+ in order to cut down the annoyances of my $600+ iPhone? Wouldn’t it just be simpler to turn off the notifications (i.e., all of them) instead of throwing more technology at the problem?

We shouldn’t have to force (or shame) people into some false sense of virtue (“she’s so extreme, she doesn’t allow any notifications!”) just so they’re not constantly disturbed by buzzes and animated notifications.

Start Thinking About Force Touch in iOS Today

It seems very likely Apple’s Force Touch technology (with its sister Taptic feedback engine) will come to a future iPhone, possibly whichever iPhone launches in the Fall of 2015. Like the recently launched MacBooks, the new iPhone will probably include APIs for your apps to take advantage of.

I’m imploring you to start thinking right now, today, about how you’re going to use these APIs in your applications.

So it goes

When Apple adds a new system-wide API to iOS, here’s how it usually goes: everybody adds some minor feature to their app thoughtlessly using the new API and the API becomes overused or misused.

Let’s look at Notifications. There are so many apps using notifications that shouldn’t be. Apps notify you about likes and comments. Apps notify you about downloads starting and downloads ending. Apps beg you to come back and use them more. Notifications, which were intended to notify you about important things, have instead become a way for apps to shamelessly advertise themselves at their own whim.

Let’s look at a less nefarious feature: Sharing. Apple introduced the “Sharing” features in iOS 6: a common interface for sharing app content to social networks. This feature is used everywhere. Your browser has it, your social apps have it, your games have it, your programming environments have it.

Another example: let’s look at AirDrop, a feature designed to share data between devices. This feature is used in all kinds of apps it shouldn’t be, like the New York Times app. How many apps have Today extensions? How many badge their icons? How many ask for your location or show a map?

The point of the above examples isn’t to argue the moral validity of their API use, but instead that these APIs are introduced by Apple, then app developers scramble to find ways to use these features in their apps, whether or not it really makes sense to do so. App developers may occasionally do so because it’s an important feature for their application, but often it seems developers use the APIs because Apple is more likely to promote apps using them or because the developers just think it’s neato.

This is something I’d like to avoid with the Force Touch APIs.

Force Touch

If we look to Apple for examples of how to use Force Touch in our applications, their usage has been pretty tame and uninspired so far. Most examples on their Force Touch page for the MacBook use Force Touch as a way of bringing up a contextual menu or view. For the “Force Click” feature, Apple describes features like:

looking up the definition of a word, previewing a file in the Finder, or creating a new Calendar event when you Force click a date in the text of an email.

You can do better in your apps. One way to think about force click is to think of it as an analogy for hovering on desktop computers (if I had my druthers, we’d use today’s “touch” as a hover gesture and we’d use force click as the “tap” or action gesture). Force click and hover are a little different, of course, and it’s your job to pay attention to these differences. Force click is less about skimming and more about confirming (again, my druthers and touch states!). How can your applications more powerfully let people explore and see information?

I wouldn’t look at hover functionality and just literally translate it using force click, but I would look at the kinds of interactions both can afford you. Hover can show tooltips, sure, but it can also be an ambient way to graze information. Look at how one skims an album in iPhoto (RIP) to see its photos at a glance. Look at how hovering over any data point in this visualization highlights related data (the data itself isn’t important, it’s to illustrate a usage of hover).

Pressure sensitivity as an input mechanism is a little more straightforward. You’ll presumably get continuous input in the range of 0 to 1 telling you how hard a finger is pressed and you react accordingly. Apple gives the example of varying pen thickness, but what else can you do? I’d recommend looking to video games for inspiration as they’ve been using this form of analog input for decades. Any game using a joystick or pressable shoulder triggers is a good place to start. Think about continuous things (pan gestures, sure, but also how your whole body moves, how you breathe, how you live) and things with a range (temperature, size, scale, sentiment, and, well, pressure). How can you use these in tandem with the aforementioned “hovering” scenarios?
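
To make the pressure idea concrete, here’s a minimal sketch in Swift of mapping a normalized pressure value to pen thickness. The pressure API hadn’t shipped at the time of writing, so the penWidth function and its 0-to-1 pressure parameter are made-up assumptions, not a real SDK interface:

    import UIKit

    // Hypothetical sketch: assume the SDK hands us a normalized pressure
    // in the range 0...1. (No such iOS API existed at the time of writing.)
    func penWidth(forPressure pressure: CGFloat) -> CGFloat {
        let minWidth: CGFloat = 1   // the lightest possible touch
        let maxWidth: CGFloat = 12  // the hardest press
        let clamped = min(max(pressure, 0), 1)
        // Linearly interpolate between the two widths.
        return minWidth + (maxWidth - minWidth) * clamped
    }

The same clamp-and-interpolate shape works for any of the ranged things above: temperature, size, scale, sentiment.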

If you want to get a head start on prototyping interactions, you can cheat by either programming on one of the new MacBooks or by using a new iOS 8 API on UITouch called majorRadius. This gives you an approximation of how “big” a touch was, which you can use as a rough estimate of “how hard” a finger was pressed (this probably isn’t reliable enough to ship an app with, but it can likely give you a somewhat rough sense of how your interactions could work in a true pressure-sensitive environment).
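
Here’s what that majorRadius cheat might look like, as a minimal sketch in current Swift syntax. majorRadius is a real UITouch property as of iOS 8, but the radius constants and the 0-to-1 mapping below are guesses of mine that you’d have to tune by hand on a device:

    import UIKit

    class PressureSketchView: UIView {
        // Guessed contact radii, in points; calibrate these on real hardware.
        private let minRadius: CGFloat = 10
        private let maxRadius: CGFloat = 40

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let touch = touches.first else { return }
            // majorRadius (iOS 8+) approximates the size of the finger's
            // contact area; a bigger contact patch roughly means a harder press.
            let raw = (touch.majorRadius - minRadius) / (maxRadius - minRadius)
            let estimatedPressure = min(max(raw, 0), 1)
            // Feed the estimate into your interaction, e.g. penWidth above.
            print("estimated pressure: \(estimatedPressure)")
        }
    }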

Not every app needs Force Touch, but that probably won’t stop people from abusing it in Twitter and photo sharing apps. If you really care about properly using these new forms of interaction, then start thinking about how to do it right, today. There are decades’ worth of research and papers on this topic. Think about why hands are important. Read, think, design, and prototype. These devices are probably coming sooner than we think, so we should start thinking now about how to set a high bar for future interaction. Don’t relegate this feature to thoughtless context menus; use it as a way to add more discrete and explorable control to the information in your software.

Ai Weiwei is Living in Our Future  

A terrifying essay by Hans de Zwart about the despotic, science-non-fiction world we’re living in.

The algorithms for facial recognition are getting better every day too. In another recent news story we heard how Neil Stammer, a juggler who had been on the run for 14 years, was finally caught. How did they catch him? An agent who was testing a piece of software for detecting passport fraud, decided to try his luck by using the facial recognition module of the software on the FBI’s collection of ‘Wanted’ posters. Neil’s picture matched the passport photo of somebody with a different name. That’s how they found Neil, who had been living as an English teacher in Nepal for many years. Apparently the algorithm has no problems matching a 14 year old picture with a picture taken today. Although it is great that they’ve managed to arrest somebody who is suspected of child abuse, it is worrying that it doesn’t seem like there are any safeguards making sure that a random American agent can’t use the database of pictures of suspects to test a piece of software.

Knowing this, it should come as no surprise that we have learned from the Snowden leaks that the National Security Agency (NSA) stores pictures at a massive scale and tries to find faces inside of them. Their ‘Wellspring’ program checks emails and other pieces of communication and shows them when it thinks there is a passport photo inside of them. One of the technologies the NSA uses for this feat is made by Pittsburgh Pattern Recognition (‘PittPatt’), now owned by Google. We underestimate how much a company like Google is already part of the military industrial complex. I therefore can’t resist showing a new piece of Google technology: the military robot ‘WildCat’ made by Boston Dynamics which was bought by Google in December 2013 […]

It is not only the government who is following us and trying to influence our behavior. In fact, it is the standard business model of the Internet. Our behaviour on the Internet is nearly always mediated by a third party. Facebook and WhatsApp sit between you and your best friend, Spotify sits between you and Beyoncé, Netflix sits between you and Breaking Bad and Amazon sits between you and however many Shades of Grey. The biggest commercial intermediary is Google who by now decides, among other things how I walk from the station to the theatre, in which way I will treat the symptoms of my cold, whether an email I’ve sent to somebody else should be marked as spam, where best I can book a hotel, and whether or not I have an appointment next week Thursday. […]

The casinos were the first industry to embrace the use of AEDs (automatic defibrillators). Before they started using them, the ambulance staff was usually too late whenever somebody had a heart attack: they are only allowed to use the back entrance (who will enter a casino when there is an ambulance in front of the door?) and casinos are purposefully designed so that you easily lose your way. Dow Schüll describes how she is with a salesperson for AEDs looking at a video of somebody getting a heart attack behind a slot machine. The man falls off his stool onto the person sitting next to him. That person leans to the side a little so that the man can continue his way to the ground and plays on. While the security staff is sounding the alarm and starts working the AED, there is literally nobody who is looking up from their gambling machine, everybody just continues playing.

This sort of reminds me of the feeling I often have when people around me are busy with Facebook on their phone. The feeling that it makes no difference what I do to get the person’s attention, that all of their attention is captured by Facebook. We shouldn’t be surprised by that. Facebook is very much like a virtual casino abusing the same cognitive weaknesses as the real casinos. The Facebook user is seen as an ‘asset’ of which the ‘time on service’ has to be made as long as possible, so that the ‘user productivity’ is as high as possible. Facebook is a machine that seduces you to keep clicking on the ‘like’ button.

It’s not just Facebook, either. I feel that way about almost all smartphone apps and social networks, too. Your attention is the currency.

Are You Working in a Start-up or Are You in Jail?  

Niree Noel:

Do you need security clearance to enter and exit the facility?

Are you surrounded by windows that don’t open?

Are you dressed the same as everyone else? Using the same stuff as everyone else? At about the same time?

I’m Brianna Wu, And I’m Risking My Life Standing Up To Gamergate  

Brianna Wu:

This weekend, a man wearing a skull mask posted a video on YouTube outlining his plans to murder me. I know his real name. I documented it and sent it to law enforcement, praying something is finally done. I have received these death threats and 43 others in the last five months.

This experience is the basis of a Law & Order episode airing Wednesday called the “Intimidation Game.” I gave in and watched the preview today. The main character appears to be an amalgamation of me, Zoe Quinn, and Anita Sarkeesian, three of the primary targets of the hate group called GamerGate.

My name is Brianna Wu. I develop video games for your phone. I lead one of the largest professional game-development teams of women in the field. Sometimes I speak out on women in tech issues. I’m doing everything I can to save my life except be silent.

The week before last, I went to court to file a restraining order against a man who calls himself “The Commander.” He made a video holding up a knife, explaining how he’ll murder me “Assassin’s Creed Style.” He wrecked his car en route to my house to “deliver justice.” In logs that leaked, he claimed to have weapons and a compatriot to do a drive-by.

Awful, disturbing stuff.

I’ll also remind you that you don’t have to be making death and rape threats to be a part of sexism in tech. The hatred of women has got to stop.

See also the “top stories” at the bottom of the essay. This representation of women as obsessed only with their looks is pretty toxic to both men and women, too.

The Dynabook and the App Store

Yesterday I linked to J. Vincent Toups’ 2011 Duckspeak Vs Smalltalk, an essay about how far, or really how little, we’ve come since Alan Kay’s Dynabook concept, and a critique of the limitations inherent in today’s App Store style computing.

A frequent reaction to this line of thought is “we shouldn’t make everyone be a programmer just to use a computer.” In fact, after Loren Brichter shared the link on Twitter, there were many such reactions. While I absolutely agree abstractions are a good thing (e.g., you shouldn’t have to understand how electricity works in order to turn on a light), one of the problems with computers and App Stores today is we don’t even have the option of knowing how the software works, even if we wanted to.

But the bigger problem is what our conception of programming is today. When the Alto computer was being researched at Xerox, nobody was expecting people to program like we do today. JavaScript, Objective-C, and Swift (along with all the other “modern” languages today) are pitiful languages for thinking, and were designed instead for managing computer resources (JavaScript, for example, was thoughtlessly cobbled together in just ten days). The reaction of “people shouldn’t have to program to use a computer” hinges on what it means to program, and what software developers think of as programming is vastly different from what the researchers at Xerox had in mind.

Programming, according to Alan Kay and the gang, was a way for people to be empowered by computers. Alan correctly recognized the computer as a dynamic medium (the “dyna” in “Dynabook”) and deemed it crucial people be literate with this medium. Literacy, you’ll recall, means being able to read and write in a medium, to be able to think critically and reason with a literature of great works (that’s the “book” in “Dynabook”). The App Store method of software essentially neuters the medium into a one-way consumption device. Yes, you can create on an iPad, but the system’s design language does not allow for creation of dynamic media.

Nobody is expecting people to have to program a computer in order to use it, but the PARC philosophy has at its core a symmetric concept of creation as well as consumption. Not only are all the parts of Smalltalk accessible to any person, but all the parts are live, responsive, active objects. When you need to send a live, interactive model to your colleague or your student, you send the model, not an attachment, not a video or a picture, but the real live object. When you need to do an intricate task, you don’t use disparate “apps” and pray the developers have somehow enabled data sharing between them; you actually combine the parts yourself. That’s the inherent power in the PARC model that we’ve completely eschewed in modern operating systems.

Smalltalk and the Alto were far from perfect, and I’ll be the last to suggest we use them as is. But I will suggest we understand the philosophy and the desires to empower people with computers and use that understanding to build better systems. I’d highly recommend reading Alan’s Early History of Smalltalk and A Personal Computer for Children of All Ages to learn what the personal computer was really intended to be.