There is a constant debate between web developers and native application developers over which platform is “better”, where, as you might expect, the definition of “better” varies greatly depending on your perspective.
Native app developers believe their software is better because it has deeper integration with the host platform: they get access to the user’s computer and features like drag and drop, or tighter integration with the user’s information, like Calendars or Contacts. These applications also benefit from better performance, as they typically run natively instead of being interpreted by a web browser. Web applications will always be playing catch-up, according to some.
Web application developers believe their software is better because it can reach users on every platform and operating system. They don’t have to target only Mac or PC or phone or tablet users; every user gets more or less the same experience. These applications also benefit from the nature of their environment: they run on web servers the developers control instead of on the user’s local machine. The important consequence of this is that developers can rapidly change and improve the application without users having to take any action whatsoever. Users simply visit the page and they’re always viewing the most recent version of the application.
I’ve been a native application developer for many years now, and I’ve always preferred it for the aforementioned reasons, but lately I’ve been starting to see more of its flaws and fewer of its benefits. I’ve been looking at how human creativity works and, more importantly, what impedes it. The common thread I’ve seen in all this research is that a delay in seeing the results of creation seriously impedes that creation.
The more important benefit, however, is at a higher level. As a developer, there’s no impediment to getting new versions of my code out to users. I simply write the code, and when it’s been tested enough, I deploy the fixes to my users. They don’t have to update anything; they just always get the most recent version of my application. GitHub illustrates this wonderfully, shipping new code daily. “What version of GitHub are you using?” The current version.
Paul Graham wrote about this for his Lisp-based web application from the nineties:
When one of the customer support people came to me with a report of a bug in the editor, I would load the code into the Lisp interpreter and log into the user’s account. If I was able to reproduce the bug I’d get an actual break loop, telling me exactly what was going wrong. Often I could fix the code and release a fix right away. And when I say right away, I mean while the user was still on the phone.
In the old days, computer programs were written on punch cards, which were tediously fed into the computer for it to execute. It wasn’t until hours later that the results of the program were printed back out to the programmer. There was a long delay between the programmer writing code and getting a solution to his or her problem. How barbaric.
These days, there’s a smaller delay between the programmer writing the code and seeing the result of its execution, but for native applications there’s still an immense delay between when the programmer writes the code and when the user sees the result. Our native applications are still shipped as though they’re printed onto some physical artifact, which must be moved through space — at the expense of time — to a customer. This was necessary for punch cards and it was necessary for floppies and CD-ROMs, but it’s no longer true in the age of the internet.
Shipping native applications, even in the best case, is almost always a slow process. There are long development cycles with tons of testing needed before the application can be shipped. And then, there’s a struggle to get users to update their applications to the latest version.
I think there are a few reasons why users don’t update their native applications:
1. Because updates ship so infrequently, they usually involve many changes, which break things.
2. Because it’s tedious, mechanical, shit work they probably shouldn’t be doing. It should just be done for them.
3. Because even if they wanted to, they often don’t know how.
I think #1 is the biggest culprit for those in the know. Experienced users have unfortunately sat through many poor upgrades. But those upgrades go so poorly because the updates are so big and contain so many changes, and the updates are so big because users update their software so infrequently. It’s a vicious cycle, and it needs to be broken.
The problem gets compounded when working with Apple’s App Stores, where even if developers wanted to ship on a regular basis, they have absolutely no power to do so. Instead, they’ve got to wait usually a week or more between shipping code and people being able to use it. Not only that, but while they’re waiting, they can’t ship any incremental changes lest they have to start the waiting period all over again. It really sucks.
I’m not entirely sold on web development as the one true way forward, but I do admit I admire many of the benefits such an environment provides. I want native development to learn its lessons. I want to ship software as frequently as I can. I want my users to feel like the users of Chrome or Chocolat, applications whose updates happen so frequently it’s basically invisible. If we could update native applications multiple times per week, it would become the norm. Update problems would be reduced because changes would be smaller and bugs would be easier to track down. And users would benefit most of all because they’d no longer be required to do anything — they’d just always get the best software.