Hopscotch and Mental Models
On May 8, 2014, after many long months of work, we finally shipped Hopscotch 2.0, a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while the dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible to kids who might otherwise have struggled. Early on, I pushed for a rethinking of the mental model we present to our programmers so they could better grasp the concepts. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.
What follows is an examination of mental models, and the models used in various versions of Hopscotch.
Mental models
The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.
A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.
The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).
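In classic turtle graphics that description translates nearly word for word into code. Here is a minimal Logo sketch of the idea (exact syntax varies a little between dialects):

    ; take a step, turn a bit, 360 times over: a whole circle
    repeat 360 [forward 1 right 1]

The child never computes a coordinate or an angle; the circle falls out of a rule they could act out with their own body, which is exactly the point Papert makes in Mindstorms.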
There are varying degrees of success in a program’s mental model, usually correlated with the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, whereas a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.
You may know your application’s mental model intimately because you created it, but most people will not be so fortunate when they start out. The easiest way for people to grasp your application’s mental model is to keep the leaps small (most iPhone apps, for example, are far more alike than they are different) so they don’t have to tread too far into the unknown.
One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in the form of a zooming interface, like Powers of Ten but for information (or “thought vectors in concept space,” to quote Engelbart) (see Jef Raskin’s The Humane Interface for a thorough discussion of ZUIs).
These spatial metaphors can be thought of as gestures in the Raskinian sense of the term (defined as “…an action that can be done automatically by the body as soon as the brain ‘gives the command’. So Cmd+Z is a gesture, as is typing the word ‘brain’”), except that instead of a single action, the digital space provides a common, habitual environment for performing actions. There is no Raskinian mode switch because the person is already familiar with the space.
Hopscotch 1.x
Following in the footsteps of the Logo turtle, Hopscotch characters are programmed with the same egocentric mental model (here’s a video of programming one character in Hopscotch 1.0). If I want Bear to move in a circle, I first ponder how I would move in a circle and translate that into Hopscotch blocks. If this were all there was to the story, this mental model would be pretty sufficient. But Hopscotch projects can have multiple programmed characters executing at the same time. Logo’s model works well because there is clearly one turtle per programmer, but when there are multiple characters to take care of, it’s conceptually more of a stretch to program them all this way.
Hopscotch 1.0 was split diametrically between the drag-and-drop code blocks for the various characters in your project and the Stage, the area where your program executes and your characters wiggle their butts off, as directed. This division is quite similar to the “write code / execute program” model most programming environments provide developers, but that doesn’t mean it’s appropriate (for children or professionals). Though the characters were tangible (tappable!) on the Stage, they remained abstract in the code editor. Simply put, there wasn’t a strong connection between your code and your program, and that discord made it very difficult for beginners to connect their code to their characters.
Hopscotch 2.x
In the redesign, we unified the Stage-as-player with the Stage-as-editor. In the original version, you programmed your characters by switching between tabs of code, but in the redesign you see your characters as they appear on the Stage. Gone are the two distinct modes; instead, you just have the Stage, which you can edit. This means you no longer position characters with a small graph but instead pick up your characters and place them directly.
The code blocks, which used to live in the tabs, now live inside the characters themselves. This gives a stronger mental model of “My character knows how to draw a circle because I programmed her directly.” When you tap a character, you see a list of “Rules” appear as thought bubbles beside the character. Rules are mini-programs for each character that are played for different events (e.g., “When the iPad is tilted, make Bear dance”), and you edit their code by tapping into them. This attaches the abstract concept of “code” to the very spatial and tangible characters you’re trying to program, and we found beginners could grasp it much more quickly than the original model.
Along the way, we added “little things” like custom functions and a mini code preview that highlights code blocks as they execute, letting programmers quickly see the results of their changes for the character they’re programming. These aren’t additions to the mental model per se, but they do help close the gap between abstract code and your characters following your program.
Our redesign got a lot of love. We were featured by Apple and Recode, and even the Grubers gave us some effusive praise. It wasn’t perfect, and we’ve continued to work on the design in the intervening months, but it’s been a big improvement.
A mental framework
A strong mental model benefits the people using your software because it lets them and the software meet each other halfway. But mental models also help you as a designer understand the messages you send through your application. By rethinking our mental model for Hopscotch, we dramatically improved both how we build the program and how people use it, and it has given us a framework to think in for the future. As you build or use applications, be aware of the signals you send and receive, and it will help you understand the software better.