Google’s Advanced Technology and Projects Demonstrations

This week during Google I/O, we were given glimpses of some of the company’s ATAP projects. The two projects, both accompanied by short videos, focus on new methods of physical interaction.

Jacquard (video) is “a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces.” This allows clothing to effectively become a multitouch surface, presumably to control nearby computers like smartphones or televisions.

Soli (video) is “a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.” The chip recognizes small gestures made with your fingers or hands.

Let’s assume the technology shown in each demo works really well, which is certainly possible given Google’s track record for attracting incredible technical talent. It seems very clear to me that Google has no idea what to do with these technologies, or if they do, they’re not saying. The Soli demo has people tapping interface buttons in the air and the Jacquard demo has people multitouching their clothes to scroll or make a phone call. Jacquard project founder Ivan Poupyrev even says it “is a blank canvas and we’re really excited to see what designers and developers will do with it.”

This is impressive technology and an important hardware step towards the future of interaction, but we’re getting absolutely no guidance on what this new kind of interaction should actually be, or why we’d use it. And the best we’re shown is a poor imitation of old computer interfaces. We’re implicitly being told existing computer interfaces are definitively the way we should manipulate the digital medium. We’re making an assumption and acting as if it were true without actually questioning it.

Emulating a button press or a slider scroll is not only disappointing but also a step backwards. When we lose the direct connection with the on-screen graphics being manipulated, the interaction becomes a weird remote control with no remote, nothing to tell us we’ve even made a click. This technology is useless if all we do with it is poorly emulate our existing steampunk interfaces of buttons and knobs and levers and sliders.

If you want inspiration for truly better human-computer interfaces, I highly suggest checking out non-digital artists and craftspeople and their tools. Look at how a painter or an illustrator works. What does their environment look like? What tools do they have and how do they use them? How do they move their tools and what is the outcome of that? How much freedom do their tools afford them?

Look to musicians to see an expressive harmony between player and instrument. Look at the range of sound, volume, and tempo a single person and single instrument can make. Look at how the hands and fingers are used, how the mouth and lungs are used, how the eyes are used. Look at how the instruments are positioned relative to the player’s body and relative to other players.

Look at how a dancer moves their body. Look at how every bone and muscle and joint is a possible degree of freedom. Look at how precisely the movement can be controlled, how many poses and formations are possible within a space. Look at how dancers interplay with each other, with the space, with the music, and with the audience.

And then look at the future being sold to you. Look at your hand outstretched in front of a smartphone screen lying on a table. Look at your finger and thumb clicking a pretend button to dismiss a dialog box. Look at your finger gliding over your sleeve to fast-forward a movie you’re watching on Netflix.

Is this the future you want? Do you want to twiddle your thumbs or do you want to dance with somebody?

Speed of Light