It seems very likely Apple’s Force Touch technology (with its sister Taptic feedback engine) will come to a future iPhone, possibly whichever iPhone launches in the Fall of 2015. Like the recently launched MacBooks, the new iPhone will probably include APIs for your apps to take advantage of.
I’m imploring you to start thinking right now, today, about how you’re going to use these APIs in your applications.
So it goes
When Apple adds a new system-wide API to iOS, here’s how it usually goes: everybody thoughtlessly bolts some minor feature onto their app using the new API, and the API becomes overused or misused.
Let’s look at Notifications. There are so many apps using notifications that shouldn’t be. Apps notify you about likes and comments. Apps notify you about downloads starting and downloads ending. Apps beg you to come back and use them more. Notifications, which were intended to alert you about important things, have instead become a way for apps to shamelessly advertise themselves at their own whim.
Let’s look at a less nefarious feature: Sharing. Apple introduced the “Sharing” features in iOS 6: a common interface for sharing app content to social networks. This feature is used everywhere. Your browser has it, your social apps have it, your games have it, your programming environments have it.
Another example: let’s look at AirDrop, a feature designed to share data between devices. This feature is used in all kinds of apps it shouldn’t be, like the New York Times app. How many apps have Today extensions? How many badge their icons? How many ask for your location or show a map?
The point of the above examples isn’t to argue the moral validity of their API use, but instead that these APIs are introduced by Apple, then app developers scramble to find ways to use these features in their apps, whether or not it really makes sense to do so. App developers may occasionally do so because it’s an important feature for their application, but often it seems developers use the APIs because Apple is more likely to promote apps using them or because the developers just think it’s neato.
This is something I’d like to avoid with the Force Touch APIs.
If we look to Apple for examples of how to use Force Touch in our applications, their usage has been pretty tame and uninspired so far. Most examples on their Force Touch page for the MacBook treat Force Touch as a way of bringing up a contextual menu or view. For the “Force Click” feature, Apple describes features like:
looking up the definition of a word, previewing a file in the Finder, or creating a new Calendar event when you Force click a date in the text of an email.
You can do better in your apps. One way to think about force click is to think of it as an analogy for hovering on desktop computers (if I had my druthers, we’d use today’s “touch” as a hover gesture and we’d use force click as the “tap” or action gesture). Force click and hover are a little different, of course, and it’s your job to pay attention to these differences. Force click is less about skimming and more about confirming (again, my druthers and touch states!). How can your applications more powerfully let people explore and see information?
I wouldn’t look at hover functionality and just literally translate it using force click, but I would look at the kinds of interactions both can afford you. Hover can show tooltips, sure, but it can also be an ambient way to graze information. Look at how one skims an album in iPhoto (RIP) to see its photos at a glance. Look at how hovering over any data point in this visualization highlights related data (the data itself isn’t important, it’s to illustrate a usage of hover).
Pressure sensitivity as an input mechanism is a little more straightforward. You’ll presumably get continuous input in the range of 0 to 1 telling you how hard a finger is pressed and you react accordingly. Apple gives the example of varying pen thickness, but what else can you do? I’d recommend looking to video games for inspiration as they’ve been using this form of analog input for decades. Any game using a joystick or pressable shoulder triggers is a good place to start. Think about continuous things (pan gestures, sure, but also how your whole body moves, how you breathe, how you live) and things with a range (temperature, size, scale, sentiment, and, well, pressure). How can you use these in tandem with the aforementioned “hovering” scenarios?
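As a sketch of what reacting to that 0-to-1 range might look like, here is Apple’s varying-pen-thickness example as a small pure function. Everything here is an assumption for illustration: the function name, the default width bounds, and the linear mapping are all mine, not an Apple API.

```swift
import Foundation

// Hypothetical mapping from a normalized pressure reading (0.0–1.0)
// to a pen stroke width. The 0...1 input range, the width bounds,
// and the linear interpolation are illustrative assumptions.
func strokeWidth(forPressure pressure: Double,
                 minWidth: Double = 1.0,
                 maxWidth: Double = 12.0) -> Double {
    // Clamp in case the hardware reports values slightly outside 0...1.
    let clamped = min(max(pressure, 0.0), 1.0)
    // Linear interpolation between the two widths; a curve such as
    // clamped * clamped might feel more natural in practice.
    return minWidth + (maxWidth - minWidth) * clamped
}
```

The same shape works for any of the continuous things above: swap stroke width for scroll speed, zoom level, or playback rate, and tune the response curve by feel rather than math.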
If you want to get a head start on prototyping interactions, you can cheat: either program on one of the new MacBooks, or use a new iOS 8 API on UITouch: majorRadius. This gives you an approximation of how “big” a touch is, which you can use as a rough estimate of how hard a finger is pressing (this probably isn’t reliable enough to ship an app with, but it can give you a rough sense of how your interactions might work in a true pressure-sensitive environment).
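To make that cheat concrete, here’s a sketch of turning a majorRadius reading into a pseudo-pressure value in that same 0-to-1 range. The radius bounds are rough guesses that would need tuning per device, and the normalization is factored into a standalone function; in a real app you’d feed it `Double(touch.majorRadius)` from a UITouch inside `touchesMoved`.

```swift
import Foundation

// Assumed radius (in points) of a light, resting touch and of a firm
// press. These numbers are guesses for illustration, not measured values.
let restingRadius = 10.0
let pressedRadius = 40.0

// Map a touch's majorRadius to a rough 0.0–1.0 "pressure" estimate.
func estimatedPressure(forMajorRadius radius: Double) -> Double {
    let normalized = (radius - restingRadius) / (pressedRadius - restingRadius)
    return min(max(normalized, 0.0), 1.0)  // clamp to 0...1
}
```

Since finger size varies between people, a shipping version of this would want per-user calibration, which is exactly why it’s a prototyping tool and not a product feature.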
Probably not every app needs Force Touch or Force click, but that probably won’t stop people from abusing it in Twitter and photo sharing apps. If you really care about properly using these new forms of interaction, then start thinking about how to do it right, today. There are decades’ worth of research and papers on this topic. Think about why hands are important. Read, think, design, and prototype. These devices are probably coming sooner than we think, so we should start thinking now about how to set a high bar for future interaction. Don’t relegate this feature to thoughtless context menus; use it as a way to add more discrete and explorable control to the information in your software.